Model Independent Tests for Time Reversal and CP Violations and for CPT Theorem in \Lambda_b and \bar\Lambda_b Two Body Decays Weak decays of beauty baryons like \Lambda_b (\bar\Lambda_b) into \Lambda (\bar\Lambda) and V (J^P = 1^-), where both decay products are polarized, offer interesting opportunities to perform tests of time reversal and CP violations and of CPT invariance. We propose a model independent parametrization of the angular distribution and of the polarization of the final resonances, by means of the spin density matrix. The transverse component of the polarization is sensitive to time reversal violations, moreover CP-and CPT-sensitive observables are defined. Introduction The interest in CP violations (CPV) and time reversal violations (TRV) has been increasing in last years [1][2][3][4][5][6][7][8][9][10]. The reason is that, although some CPV and also a direct TRV [11] have been detected experimentally, the nature of such symmetry violations has not yet been clarified. More precisely, the prediction of the size of the violation in some weak decays is strongly model dependent, which stimulates people to search for signals of new physics [1,2,5,4,10,12,13] (NP), beyond the standard model (SM). For example, the decays involving the transition b → s (1) present CPV parameters, like the B 0 − B 0 mixing phase [10,13] and the transverse polarization of spinning decay products of Λ b [1], which are very small in SM predictions, but are considerably enhanced in other models. In particular, recent signals of NP have been claimed in B decays: the CP violating phases of B → πK [10] and B s → ΦJ/Ψ [13] may be considerably greater than predicted by SM. Also Λ b decays [1,3,6,8] are suggested as new sources of CPV and TRV parameters, especially in view of the abundant production of this resonance in the forthcoming LHC accelerator. As regards direct TRV, only one evidence [11,14,15,16,17,18] has been given so far, and assuming the Bell-Steinberger [19] relation, which might be violated to few percent [20]. Lastly the CPT theorem, valid for local field theories, has been tested to a great precision in the neutral kaon decay [21], but not in other situations: for example, it has never been checked in decays involving the b-quark, furthermore a meaningful size of uncertainty remains in K ± → π ± π 0 [20]. The aim of the present paper is to suggest model independent tests of TRV, CPV and CPT invariance in hadronic Λ b and Λ b decays of the type V denoting a J P = 1 − resonance, either the J/ψ or a light vector meson, like ρ 0 , ω. Each resonance decays, in turn, to more stable particles, like, e. g., Λ → pπ − , J/ψ → µ + µ − , so that one has to consider a typical cascade decay. A previous paper [8] had been devoted to the subject. Now we parametrize, by means of the spin density matrix (SDM), the angular distribution and the polarizations of the decay products, without introducing any dynamic assumption at all. Then we study the behavior of these observables under CP and T, singling out those which are sensitive to T, CP and CPT violations. Our approach resembles the one proposed by Lee and Yang [22] and by Gatto [23] many years ago, to use hyperon decays for the same tests. However, as we shall see, a hadronic two-body weak decay involving two spinning particles in the final state -never proposed before -presents some advantages over hyperon decays [22,23,12], where one of the two final particles is spinless. In sect. 
2 we derive the expressions of the spin density matrices, angular distribution and polarizations of the decay products in the above mentioned decays. In sect. 3 we present a parametrization of the angular distribution and of polarizations. In sect. 4 we suggest tests for TRV, CPV and CPT. Lastly we conclude with some remarks in sect. 5. Angular Distribution and Polarization Vectors of the Decay Products In order to deal with the angular distribution and the polarization of the intermediate resonances, Λ and V , coming from Λ b decay, the best suited method consists of applying the relativistic helicity formalism pioneered by Jacob and Wick [24] and reformulated later by Jackson [25] (see also [26,27]). This formalism presents some advantages: (i) thanks to its definition, λ = j ·p, where j = ℓ + s andp = p/| p| , the helicity of a particle of spin s and momentum p does not depend on its orbital angular momentum ℓ and it is rotationally invariant; (ii) λ equals the spin projection along p in the resonance rest frame. These physical properties can be applied just to cascade decays of the type described above, i. e., provided we take, in the rest frame of the resonance R 1 or R 2 , the quantization axis parallel to its momentum in the R 0 rest frame. The helicity of R i (i = 1,2), computed in the R 0 rest frame, is equal to the projection of its total angular momentum along this quantization axis in the R i rest frame. In our case we identify R 0 with Λ b , R 1 with Λ and R 2 with V . In the following, the formalisms of helicity and SDM will be intensively used by specifying different rest frames. Spin Density Matrices In the standard detector frame the z-axis is taken parallel to the incident proton beam. For our aims it is more convenient to define a different frame, through the three mutually orthogonal unit vectors Here p p and p b are, respectively, the proton momentum and the Λ b momentum. If produced by means of strong interactions -as usually assumed for Λ, Σ, Ξ, ... hyperons -, the Λ b is polarized along n. Therefore we find it suitable to choose the quantization axis along e z = n. We denote, here and in the following, the Λ b spin by J, with J = 1/2. Therefore the Here σ = (σ x , σ y , σ z ), σ i are the Pauli matrices and P Λ b is the polarization vector of Λ b . Defining a Λ b rest frame, whose axes are oriented like those in the laboratory frame, the components of P Λ b result in The components of the polarization vector are regarded as external parameters. Note that, if parity is conserved in the production process, we have SDM of the Λ-V System The intermediate state in a cascade decay of the type (3) is a composite one, consisting of the two spinning particles Λ and V. The SDM of this state is given by where M is the (unitary) operator which describes the decay considered. The matrix elements of the SDM ρ f are obtained from (6) by projecting the operators involved in that expression onto the initial and final states. The latter ones are characterized by a given three-momentum in the Λ b center-of-mass system and by a pair of helicities, λ 1 and λ 2 , corresponding to each resonance R 1 andR 2 . Therefore the SDM of this two-particle system is endowed with two pairs of indices, i. e., Here θ and φ are, respectively, the polar and azimuthal angle of the momentum of the Λ resonance in the Λ b rest frame. 
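The display equations referenced in the passage above, in particular the Λ b spin density matrix written in terms of the Pauli matrices and the definition of the final-state SDM through the decay operator M, did not survive text extraction. A hedged reconstruction of the standard forms they describe, assuming the usual conventions for a polarized spin-1/2 state, is:

```latex
% Spin density matrix of the polarized \Lambda_b (standard spin-1/2 form, assumed)
\rho_{\Lambda_b} = \tfrac{1}{2}\left(\mathbb{1} + \vec{P}_{\Lambda_b}\cdot\vec{\sigma}\right),
\qquad \vec{\sigma} = (\sigma_x,\,\sigma_y,\,\sigma_z).

% SDM of the \Lambda V system in terms of the decay operator M (assumed form of the relation
% labelled (6) in the text), with elements labelled by the helicities and the angles \theta,\phi
\rho_f = M\,\rho_{\Lambda_b}\,M^{\dagger},
\qquad
\left(\rho_f\right)_{\lambda_1\lambda_2;\,\lambda'_1\lambda'_2}(\theta,\phi)
= \langle \theta,\phi;\lambda_1\lambda_2 |\, M\,\rho_{\Lambda_b}\,M^{\dagger} \,| \theta,\phi;\lambda'_1\lambda'_2 \rangle .
```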
Note that the average value of a given operator O over the mixing of states defined by SDM (7) reads From now on we shall denote this sum over two pairs of indices with the usual symbol of "Trace", i. e., T r(ρ f O). Angular momentum conservation demands where Λ ′ , Λ ′′ = ±1/2. Therefore the expression (7) of the SDM can be conveniently transformed to The helicity formalism implies Here N J = (2J + 1)/4π, D J M,Λ ′ is a rotation matrix element and is the rotationally invariant decay amplitude, M b being the Λ b rest mass and p the momentum of Λ in the Λ b rest frame. Now we sum over the indices M and M ′ in the expression (11), taking into account eqs. (5), and recalling the well-known properties of the D-functions [27]. As a result we get Here we have dropped the index J from the A-amplitudes and the index 1 from helicities. Moreover we have set wherep is the unit vector in the direction of the Λ momentum and Note that the first term of the expression (14) corresponds to the cases where either . Conversely the second term corresponds to the cases where either λ ′ . We have to take into account such combinations in calculating average values of operators, according to eq. (9). In the case of observables connected to V it is convenient to re-express the SDM as with the constraint |µ + Λ ′ | = |µ ′ ± Λ ′ | = 1/2. Angular Distribution The angular distribution of the decay products, W (θ, φ), can be deduced from the SDM, according to the formulae Taking into account eq. (14) or (17), we get with We may also obtain the respective projections over the polar and azimuthal angles: It is worth noting the crucial role played by the initial polarization of Λ b in both the polar and azimuthal projections. In particular, the φ-dependence disappears if parity is conserved in the production reaction of this resonance. Polarization Vectors In order to compute the polarization vector of each resonance R i , a special frame has to be defined, by means of three mutually orthogonal unit vectors. For the Λ resonance we haveẑ The Λ polarization vector is decomposed like P Λ = P L e L + P T e T + P N e N , where P L , P T and P N are defined, respectively, as the longitudinal, transverse and normal In these particular frames [28] we have, for each resonance, where s ≡ (s x , s y , s z ) denotes the spin vector operator. Polarization Vector of Λ We calculate the components of the polarization vector of Λ by exploiting eq. (14) of the SDM and eq. (24). The longitudinal component reads where As to the transverse component, the previous formulae yield where Lastly, the normal component yields where Polarization Vector of V In order to calculate the components of the polarization vector of V we exploit eq. (17) of the SDM and take into account the expression of s for spin-1 particles [29]. We have Here Polarization Correlations Now we define the following four polarization correlations, similar to those considered by Chiang and Wolfenstein [30]: These observables are related to the angular correlations of the decay products of the Λ and V resonance, similar to those considered in refs. [31,32,33] and measured in experiments quoted in ref. [33]. Parametrization of Observables In this section we write a model independent parametrization, based on the previous formulae, of the angular distribution, of the polarization of Λ and of the polarization correlations P T T and P T N . In particular, we describe such observables in terms of a minimum number of independent parameters. 
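The trace relations underlying the observables just introduced (the operator average denoted Tr(ρ_f O) in eq. (9), the angular distribution obtained from the SDM, and the polarization formula of eq. (24)) were also lost in extraction. A minimal sketch of the standard relations they refer to, with the overall normalizations left as assumptions, is:

```latex
% Average of an operator O over the final-state mixture (the "Trace" of eq. (9))
\langle O \rangle =
\frac{\sum_{\lambda_1\lambda_2,\,\lambda'_1\lambda'_2}
      \left(\rho_f\right)_{\lambda_1\lambda_2;\,\lambda'_1\lambda'_2}\,
      O_{\lambda'_1\lambda'_2;\,\lambda_1\lambda_2}}
     {\mathrm{Tr}\,\rho_f}
= \frac{\mathrm{Tr}\left(\rho_f\,O\right)}{\mathrm{Tr}\,\rho_f}\, .

% Angular distribution deduced from the SDM (normalization assumed)
W(\theta,\phi) \propto \mathrm{Tr}\,\rho_f(\theta,\phi)\, .

% Polarization components of each resonance in its helicity frame (assumed form of eq. (24));
% for the spin-1/2 \Lambda the usual factor 2 relates \langle s_i \rangle to P_i
P_i \propto \frac{\mathrm{Tr}\left(\rho\, s_i\right)}{\mathrm{Tr}\,\rho}\,,
\qquad i = L,\,T,\,N .
```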
The polarization components of V can be expressed as functions of such parameters, as is straightforward to see from eqs (34) to (40). The formulae of the angular distribution and of the polarization of Λ can be rewritten as with and As for the polarization correlations, we have where The parameters which appear in eqs. (51) to (60) are not all independent of one another, they fulfil the following relations: ( The first two equations allow to express some of the parameters just introduced as functions of a more restricted number of other, independent, parameters. We propose the following parametrization, similar to previous conventions in hyperon decays [22]: Then the angular distribution, the Λ polarization and the polarization correlations are expressed as functions of the 10 independent parameters P Λ b 1 , P Λ b 2 , P Λ b 3 , α W , ψ ± , β ± and ϕ ± . TRV, CPV and CPT Tests Here we illustrate properties of the observables illustrated in the preceding sections under discrete transformations and suggest possible tests for violation of relative symmetries. T Violations The rotationally invariant amplitudes introduced in sect. 2 transform under time reversal (TR) in such a way that [27] In this connection it is worth remembering that also the transverse polarization of the muon in K + decays to π 0 µ + ν µ and to γµ + ν µ has been indicated as a possible signature of TRV [21]. It is important to stress that, in order to get TRV observables, two different polarizations are needed, either Λ b 's and Λ's or V 's, or the simultaneous measurement of Λ and V polarizations. In particular we observe that these polarizations are connected to T-odd pseudoscalar triple products. For example, we have brackets denoting average. Similarly, by combining Λ b 's and Λ's polarizations, according to formulae (28) and (15), we can perform the following triple products: where we have set, for the sake of brevity, σ r = σ ·r and so on. Other authors already proposed T-odd triple products [31,38,1,4,5], but those considered here are rid of effects of final state interactions [31,39]. Moreover, we ascertain a posteriori that some T-odd pseudoscalar [31,32,30,33] triple products are unequivocally connected to TRV. CP Violations The CP transformation causes, according to the usual phase conventions [34,27], where the barred amplitude refers to the Λ b decay. Then, taking into account eqs. (55), (56), (59) and (60), together with the definitions given in subsects. 2.2 and 2.3 of the quantities defined in these expressions, we find that the following parameters are useful for detecting possible CP violations: Any nonzero value of the above defined ratios -defined conformally to the usual conventions [12,36,37,31] -would be a signature of CP violation and also, possibly, of NP [13]. The ratios not been considered, since the sums are CP-odd and the differences are CPT-odd, therefore both quantities may be, in principle, nearly zero. In any case, the sums may be used as further tests for CP violations. CPT Tests The ratios (77) to (81) are even under time reversal, therefore they can also be suitably employed in tests of the CPT theorem. Moreover it follows from the discussion above that also B T − B T and B T N − B T N are good parameters for testing the theorem. We note that polarization of muons from semileptonic decays of K ± had been proposed by Lee and Wu [35] as a possible test for CPT violation. Concluding Remarks We conclude this note with some remarks about the method suggested. 
A) Our analysis is completely model independent and is also free of spurious effects [1,4,5,6,38] caused by final state interactions [31,39], which may, in principle, flaw other kinds of tests proposed [38,4,20]. In particular, we stress that our tests for TRV do not rely on any assumptions. Our calculation can be used as an input for computing the model predictions of the observables considered here [8,9]. B) It is important to note that the TRV tests based on Λ b polarization are similar to those proposed for hyperon decays [23,12]. However, in our case we may also consider the polarization correlations [30,31], which provide a TRV test independent of the polarization of the parent resonance. Decays of the type (2) are very suitable for detecting possible TRV, as pointed out also by other authors in studies of CP violation [4,5]. C) The observables considered in the present letter are very sensitive to NP, since they are free of the unwanted effects of Wilson coefficients [40]. These quantities have been considered even more convenient than the B 0 − B 0 mixing phases [4]. D) Reactions similar to those studied here have also been proposed by other authors [41,42] in a different context, in view of the forthcoming LHC run. It therefore appears realistic to suggest measuring some of the observables considered in the present note as well, that is, the angular distribution and the polarization of at least one of the decay products.
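As a practical illustration of the T-odd triple products discussed in sect. 4, the event-averaged asymmetry of a quantity such as p̂_Λ · (ŝ_Λb × ŝ_Λ) can be estimated from reconstructed events by simple counting. The sketch below is illustrative only: it uses synthetic unit vectors in place of real data, and the specific triple-product combination shown is one example of the class of observables quoted above, not the authors' exact definition.

```python
import numpy as np

rng = np.random.default_rng(1)

def unit_vectors(n):
    """Random unit vectors as stand-ins for reconstructed momenta/polarization directions."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

n_events = 100_000
p_lambda = unit_vectors(n_events)    # direction of the Lambda in the Lambda_b rest frame
s_lambda_b = unit_vectors(n_events)  # Lambda_b polarization direction (illustrative)
s_lambda = unit_vectors(n_events)    # Lambda polarization direction (illustrative)

# T-odd triple product per event and its counting asymmetry
c_t = np.einsum("ij,ij->i", p_lambda, np.cross(s_lambda_b, s_lambda))
a_t = (np.sum(c_t > 0) - np.sum(c_t < 0)) / n_events
print(f"A_T = {a_t:.4f}")  # compatible with zero for this uncorrelated toy sample
```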
Brewers’ rice modulates oxidative stress in azoxymethane-mediated colon carcinogenesis in rats AIM: To investigate the mechanistic action of brewers’ rice in regulating the Wnt/nuclear factor-kappa B (NF-κB)/Nrf2-signaling pathways during colon carcinogenesis in male Sprague-Dawley rats. NF-κB expression was significantly lower in the groups fed diets containing 10% (w/w) brewers’ rice (0.255 ± 0.022), 20% (w/w) brewers’ rice (0.450 ± 0.045), or 40% (w/w) brewers’ rice (0.541 ± 0.027) than in the AOM-alone group (1.000 ± 0.048) (P < 0.05). Brewers’ rice improved the antioxidant levels, indicating that brewers’ rice can enhance effective recovery from oxidative stress induced by AOM. CONCLUSION: Our results provide evidence that brewers’ rice can suppress colon cancer via the regulation of Nrf2 expression and the inhibition of the Wnt/NF-κB signaling pathways. Brewers’ rice modulated Wnt signaling, and 20% (w/w) brewers’ rice markedly improved the antioxidant level. These results strongly imply the potential of brewers’ rice in future applications against oxidative stress and colon cancer.

INTRODUCTION Colorectal cancer has become the third most prevalent cancer after lung and breast cancers and contributes to nearly 10% of the total cases of cancer and approximately 8% of total cancer deaths worldwide [1]. It represents the third and second most commonly diagnosed cancer in males and females, respectively, with more than 1.2 million new cancer cases and 608700 deaths in 2008 worldwide [2]. The deregulation of Wnt/β-catenin signaling has been demonstrated to be associated with cancer, particularly colorectal cancer [3]. Chronic infection and inflammation promote the expression of nuclear factor-kappa B (NF-κB) [4] and inflammation-associated genes, such as inducible nitric oxide synthase (iNOS) [5]. The NF-κB pathway is associated with colorectal cancer, and the inhibition of NF-κB activation can reduce chemoresistance [6]. The nuclear factor E2-related factor 2 (Nrf2) transcription factor signaling pathway has become a target for chemoprevention. A previous study reported that Nrf2 regulates the expression of numerous detoxifying and antioxidant enzymes toward oxidative or electrophilic stress [7]. The health benefits of natural products have led to their recognition as sources of remedy [8]. Most studies have indicated that cancers may be prevented or delayed by treatment with natural dietary products or synthetic compounds [9]. Rice (Oryza sativa L.), an essential cereal crop grown in Asia, has become a major source of carbohydrates in the daily diet. Epidemiological studies have demonstrated that whole grain foods are important for providing protection against cancer [10]. Brewers' rice, known locally as temukut, consists of broken rice, rice bran, and rice germ, and is a waste product of the rice industry. The production of brewers' rice during rice milling has been described in a previous report [11]. Our earlier study showed that the dietary administration of brewers' rice can reduce the risk of azoxymethane (AOM)-induced colon carcinogenesis in rats through the downregulation of β-catenin and cyclooxygenase-2 (COX-2) [12]. However, the molecular mechanism underlying these effects remains obscure. We hypothesized that brewers' rice may provide chemopreventive or chemotherapeutic effects against colorectal cancer via regulation of multiple signaling pathways.
The present study sets out to determine whether brewers' rice confers suppressive effects on the gene expression of β-catenin and key inflammation markers, such as NF-κB and iNOS, which are par-ticularly critical in the development of colon cancer. Glycogen synthase kinase 3β (GSK3β), a destruction complex that modulates the degradation of β-catenin, was also evaluated. Moreover, the potential roles of brewers' rice in the regulation of Nrf2-dependent transcriptional activity were assessed during AOMinduced colon tumorigenesis in male Sprague-Dawley rats. Nrf2 and heme oxygenase-1 (HO-1) were evaluated to determine the effect of brewers' rice in carcinogen metabolism against detoxification. The colon superoxide dismutase (SOD), malondialdehyde (MDA), and nitric oxide (NO) levels were also analyzed to assess the antioxidant effect of these treatments. Brewers' rice Freshly milled brewers' rice samples from rice variety MR 219 were obtained from the BERNAS Milling Plant at Seri Tiram Jaya, Selangor, Malaysia. The stabilization of brewers' rice was conducted as previously reported by Tan et al [13] . were housed in a well-ventilated room at 25 to 27 ℃ with 50% ± 10% relative humidity and 12-h light/ dark cycles. Hygienic conditions were maintained by weekly changes of woodchip beds. The animals were acclimatized for seven days and administered an American Institute of Nutrition (AIN)-93G diet and water ad libitum. The animals were randomly divided into the following five groups (six rats in each group): (G1) normal, (G2) AOM alone, (G3) AOM + 10% (weight (w)/weight (w)) brewers' rice, (G4) AOM + 20% (w/w) brewers' rice, and (G5) AOM + 40% (w/ w) brewers' rice. Beginning at six weeks of age, the rats were intraperitoneally given injections of AOM at a dose of 15 mg/kg body weight once weekly over a two-week period, whereas the rats in the normal group were given normal saline (vehicle control). The control groups (G1 and G2) were fed an AIN-93G diet, and the G3, G4, and G5 groups were given an AIN-93G diet containing 10%, 20%, and 40% (w/w) brewers' rice, respectively. The experimental diets were prepared weekly and kept at 4 ℃. The composition of the experimental diet (Table 1) was adjusted according to the nutrient content of brewers' rice with respect to moisture (11.36% ± 0.12%), ash (1.56% ± 0.26%), protein (9.01% ± 0.27%), fat (1.95% ± 0.11%), total available carbohydrates (72.42% ± 1.25%), and total dietary fiber (5.32% ± 0.04%) contents [12] . After twenty weeks of treatment, the animals were sacrificed after anesthesia with diethyl ether, and the colon tissue was removed, rinsed with PBS, opened longitudinally, and fixed with RNA Shield TM reagent or stored at -20 ℃ for further analyses. Total RNA extraction and cDNA synthesis The extraction of total RNA from colon tissue was performed using the HiYield Total RNA Mini Kit (Tissue). Initially, colon tissue disruption and homogenization were performed according to the manufacturer's protocols. The colon tissue was homogenized in a mixture of 100 µL of lysis buffer, 400 µL of RB buffer, and 4 µL of β-mercaptoethanol. The sample was then incubated for 5 min at room temperature and centrifuged at 15680 × g for 5 min. The supernatant was passed through the filter column and the collection tube. After centrifugation at 93 × g for 1 min, 400 µL of 70% ethanol was added and passed through the RB column. After adding 400 µL of W1 Buffer and 600 µL of wash buffer, the RNA was eluted with 50 µL of RNase-free water and kept at -80 ℃. 
Two microliters of nuclease-free water was added to the pedestal for a blank sample. After that, 1 µL of RNA sample was added. The RNA concentration was measured at 260 nm using a nanophotometer. Two micrograms of total RNA per 20 µL was reverse-transcribed using the High Capacity RNA-to-cDNA Kit, according to the manufacturer's protocols. The reverse transcription reaction was performed using an Authorized Thermal Cycler. The reaction was performed at 37 ℃ for 60 min, followed by 95 ℃ for 5 min to denature the enzyme, and then maintained at 4 ℃. The cDNA was then ready for use as a template for real-time polymerase chain reaction (PCR) amplification.

Quantitative real-time polymerase chain reaction analysis The nucleotide primer sequences of rat origin were obtained from the National Center for Biotechnology Information Gene Bank (Table 2). The specific primers were validated for amplification specificity, amplification efficiency over a concentration range, and consistency with the amplification efficiency of the housekeeping genes. The mRNA levels of GSK3β, β-catenin, NF-κB, iNOS, Nrf2, and HO-1 were assayed using SYBR® Select Master Mix, CFX in a final volume of 20 µL, according to the manufacturer's protocols. Initially, the cDNA template, primers, and kit contents (SYBR® Select Master Mix (CFX) and RNase-free water) were thawed on ice. Upon thawing, the reaction mix was prepared and thoroughly mixed. The qPCR reaction was then analyzed based on the following conditions: (1) uracil-DNA glycosylase (UDG) activation at 50 ℃ for 120 s (1 cycle); and (2) DNA polymerase activation at 95 ℃ for 120 s (1 cycle); denaturation at 95 ℃ for 2 s (40 cycles); and annealing/extension at 60 ℃ for 30 s (40 cycles). All samples and controls were determined in triplicate using an Eco™ Real-Time PCR system, and the v4.0.7.0 software (Illumina, Inc., San Diego, CA, United States) was used for data analysis. The fold inductions of the samples were compared with the control (AOM-alone group). Beta-actin (ACTB), β-2 microglobulin (B2M), and ribosomal protein, large, P1 (RPLP1) were used as housekeeping genes to normalize the expression of the target genes.

Colon tissue preparation The colon tissues of rats were homogenized in ice-cold PBS. Supernatants were collected by centrifugation at 370 × g and 4 ℃ for 5 min and stored at -80 ℃ for the SOD [14], MDA [15], and NO [16] assays.

Determination of superoxide dismutase The SOD levels in the colon homogenates were analyzed following the inhibition of the reduction of nitroblue tetrazolium (NBT). Tissue supernatant was mixed with 0.1 mol/L of ethylenediaminetetraacetic acid (EDTA), 0.15 mg/mL of sodium cyanide, 1.5 mmol/L of NBT, 0.12 mmol/L of riboflavin, and 0.067 mol/L of phosphate buffer in a 300 µL volume. The sample absorbance was read at 560 nm, and the percentage of SOD inhibition was compared with that of the blank. The concentration of the sample was calculated using the amount of protein required to achieve 50% inhibition and expressed as U/mg of protein.

Determination of malondialdehyde

Determination of nitric oxide NO production in the colon was evaluated using a colorimetric Griess Reagent Kit, according to the manufacturer's protocols. A 100-µL aliquot of the colon supernatant was loaded in the microtiter plate, and 20 µL of Griess reagent [0.1% of N-(1-naphthyl)ethylenediamine dihydrochloride and 1% of sulfanilic acid in 5% phosphoric acid] and 80 µL of deionized water were then added.
The absorbance was measured at 540 nm using an ELISA Reader (BioTek Instruments, Inc., Tigan Street, Winooski, United States). Statistical analysis Data are expressed as mean ± SD, and statistical analyses were performed using one-way analysis of variance (ANOVA). Differences with P < 0.05 were considered significant. The statistical analyses were performed using the Statistical Package for Social Science (SPSS) version 19.0. Brewers' rice promotes GSK3β mRNA level in colon tumorigenesis In the current study, we determined the GSK3β mRNA level of the control (normal and AOM alone) groups and the treatment groups through quantitative realtime PCR analyses. We observed that the normal group (2.866 ± 0.058) had the highest GSK3β mRNA level compared with the brewers' rice-fed groups ( Figure 1). The administration of brewers' rice significantly increased the transcription of the GSK3β gene compared with AOM alone (P < 0.05). These findings clearly demonstrated that the dietary administration of brewers' rice in AOM-induced rat colon carcinogenesis resulted in a dose-dependent increase in the GSK3β mRNA level. Brewers' rice inhibits the β-catenin pathway in colonic tumors As shown in Figure 1, our results showed that the colonic tumors in the groups treated with AOM alone had the highest β-catenin mRNA levels, whereas the administration of 20% (0.611 ± 0.034) and 40% (0.436 ± 0.045) (w/w) brewers' rice markedly decreased the β-catenin mRNA levels. A significant reduction in β-catenin expression was found in the groups administered with 20% and 40% (w/w) brewers' rice compared with the group treated with AOM alone (P < 0.05). In brewers' rice-treated AOM-injected colon tumorigenesis rats, the phosphorylation and degradation of β-catenin increased in a dose-dependent manner. A very low β-catenin amount was observed in the normal group (0.011 ± 0.003) ( Figure 1). Brewers' rice inhibits the expression of NF-κ B in colon tumorigenesis We hypothesized that brewers' rice downregulates the expression of NF-κB. As expected, none of the rats exhibited NF-κB expression in the normal colon mucosa ( Figure 2). The overall analysis indicated that the colon tissue in the group treated with AOM alone presented the highest NF-κB expression (1.000 ± 0.048) compared with the groups treated with brewers' rice. A significant reduction in the gene expression of NF-κB was also observed in the rats of the groups treated with brewers' rice compared with the group treated with AOM alone (P < 0.05). This finding revealed that the administration of brewers' rice resulted in the inhibition of NF-κB expression, and the maximum effect was obtained with 10% (w/w) brewers' rice (0.255 ± 0.022). Brewers' rice upregulates the iNOS mRNA level in colon tumorigenesis In the present study, we observed a high expression of iNOS mRNA in the normal colon mucosa (9.134 ± 0.708). The data presented in this study demonstrated that the groups administered with 20% (9.090 ± 0.519) and 40% (8.582 ± 1.261) (w/w) brewers' rice exhibited significantly upregulated iNOS mRNA levels compared with the group treated with AOM alone (P < 0.05) (Figure 2). Brewers' rice activates the Nrf2 mRNA level in colon tumorigenesis As shown in Figure 2, in the normal group, which was administered saline but not treated with brewers' rice, prominent Nrf2 gene expression was observed in the normal colon mucosa (4.068 ± 0.155). 
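The relative mRNA levels reported above are fold inductions versus the AOM-alone control, normalized to the ACTB, B2M and RPLP1 housekeeping genes, and group differences were tested by one-way ANOVA (P < 0.05). Neither calculation is written out as a formula in the text; the sketch below assumes the commonly used 2^(-ΔΔCt) method, and all Ct and expression values are invented purely for illustration:

```python
import numpy as np
from scipy import stats

def fold_induction(ct_target, ct_refs, ct_target_ctrl, ct_refs_ctrl):
    """Relative expression by the 2^(-ddCt) method (assumed; the paper does not state the formula).

    ct_target      : Ct of the gene of interest in a treated sample
    ct_refs        : Ct values of the housekeeping genes (ACTB, B2M, RPLP1) in that sample
    ct_target_ctrl : Ct of the gene of interest in the control (AOM-alone) sample
    ct_refs_ctrl   : housekeeping Ct values in the control sample
    """
    d_ct_sample = ct_target - np.mean(ct_refs)        # normalize to reference genes
    d_ct_ctrl = ct_target_ctrl - np.mean(ct_refs_ctrl)
    return 2.0 ** -(d_ct_sample - d_ct_ctrl)          # fold change vs. control

# Hypothetical Ct values (triplicate means) for one treated rat vs. the AOM-alone control
print(fold_induction(24.1, [18.0, 19.2, 17.8], 22.5, [18.1, 19.0, 17.9]))

# Hypothetical normalized expression values (six rats per group) compared by one-way ANOVA,
# mirroring the five experimental groups; the numbers do not reproduce the study data.
normal    = [2.81, 2.92, 2.85, 2.90, 2.79, 2.88]
aom_alone = [1.02, 0.95, 1.05, 0.98, 1.01, 0.99]
aom_br10  = [1.35, 1.42, 1.30, 1.38, 1.33, 1.40]
aom_br20  = [1.80, 1.75, 1.85, 1.78, 1.82, 1.79]
aom_br40  = [2.10, 2.05, 2.15, 2.08, 2.12, 2.09]
f_stat, p_value = stats.f_oneway(normal, aom_alone, aom_br10, aom_br20, aom_br40)
print(f"F = {f_stat:.2f}, P = {p_value:.3g}")         # P < 0.05 would be called significant
```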
Our results showed that treatment with 20% (w/w) brewers' rice (3.596 ± 0.308) effectively activated the gene expression of Nrf2 compared with AOM alone (Figure 2).

Effect of brewers' rice on the SOD, MDA, and NO levels in colon homogenate The changes in the colon SOD, MDA, and NO activities after the dietary administration of brewers' rice on AOM-induced colon carcinogenesis are summarized in Table 3. In addition to the effects on the SOD level, our findings showed that the highest MDA level was obtained in the group treated with AOM alone (18.01 ± 1.43 nmol/g of protein) compared with the groups treated with brewers' rice. A significant reduction in the MDA level was found in the two treatment groups (20% (14.24 ± 0.58 nmol/g of protein) and 40% (8.14 ± 1.42 nmol/g of protein) (w/w) brewers' rice) compared with that of the group treated with AOM alone (18.01 ± 1.43 nmol/g of protein) (P < 0.05). These findings indicated that the dietary administration of brewers' rice resulted in reductions in the MDA level in a dose-dependent manner, and the maximum effect was obtained with a concentration of 40% (w/w) brewers' rice (8.14 ± 1.42 nmol/g of protein) (Table 3). Consistent with the high levels of MDA observed in colon tumors, we also observed the highest NO level in the group treated with AOM alone (798.46 ± 30.45 µmol/mg of protein) compared with those of the other treatment groups (Table 3). After twenty weeks of treatment with brewers' rice, the NO level was reduced. The suppressive effect of brewers' rice on NO was notable in rats that received 20% (w/w) brewers' rice (533.40 ± 40.43 µmol/mg of protein).

DISCUSSION The current study is an extension of our earlier work, which determined that brewers' rice was an effective dietary agent for the reduction of tumor incidence and multiplicity in rat colons induced with AOM [12]. We also determined that brewers' rice markedly suppressed β-catenin expression in both the cytoplasm and the nucleus [12]. In the present study, male Sprague-Dawley rats were given different doses [10%, 20%, and 40% (w/w)] of brewers' rice. A dosage of 10% (w/w) brewers' rice was administered as suggested by a previous study performed by Boateng et al [17] on rice bran and rice germ. This dosage has been reported to reduce tumor formation. Moreover, higher concentrations [20% and 40% (w/w) brewers' rice] were also used to determine the dose-dependent effect of brewers' rice as a dietary agent in a rat colon cancer experimental model. Our earlier study reported that the highest dose [40% (w/w) brewers' rice] was well-tolerated and did not suppress the growth of rats [12]. Targeting Wnt signaling upstream of T-cell factor (TCF)/β-catenin signaling is a critical therapeutic option. In the β-catenin destruction complex, GSK3β is one of the crucial components that modulates the degradation or accumulation of β-catenin in the nucleus. To ascertain whether brewers' rice modulated GSK3β via Wnt/β-catenin signaling, the GSK3β mRNA level was analyzed in the colon of rats induced with AOM. Overall, treatment with brewers' rice resulted in an increase in the GSK3β mRNA level, and the maximum effect was obtained with 40% (w/w) brewers' rice. To further verify whether the mechanisms of action of GSK3β observed in the colons of rats injected with AOM suppressed

The Wnt/β-catenin pathway plays a vital role in tissue homeostasis and cancer susceptibility.
The dysregulation of β-catenin and other Wnt molecules results in the nuclear localization of β-catenin, stimulation of Wnt target genes, and tumor formation [18] . Mutations in the β-catenin gene are usually found in AOM-induced colon tumorigenesis in rats and mice [19] . These findings, which are supported by the current data, further indicate that the activation of the β-catenin gene plays a vital role in the development of colon tumors in rats. The finding that the depletion of β-catenin suppresses tumor incidence and multiplicity in brewers' rice-treated AOMinduced colon tumorigenesis suggests that brewers' rice may become a potential strategy for the therapeutic control of Wnt/β-catenin signaling in colon cancer. In the present study, treatments with brewers' rice resulted in increased GSK3β and decreased β-catenin, and the maximum effect was observed with 40% (w/w) brewers' rice. The effects observed in the treatment with 40% (w/w) brewers' rice could be explained by its higher concentrations of active compounds in brewers' rice, which may confer better functional properties in the regulation of Wnt/β-catenin signaling pathway. A very low β-catenin mRNA level observed in the normal group was consistent with the findings reported by Barker et al [20] , who found that Wnt/β-catenin signaling played an essential role in intestinal development, which is specific for the intestinal and mammary epithelia. A previous study also demonstrated that most of the β-catenin protein was present at very low amounts in the cytoplasm or nucleus of normal cells [21] . Cytoplasmic β-catenin was maintained at a low level for tissue homeostasis, particularly in strongly proliferative, selfrenewing tissues, such as the skin and gut [22] . However, Wnt pathway mutations are not the only factors that promote the activation of β-catenin [23] . A study reported that NF-κB also plays a crucial role in colorectal and colitis-associated tumorigenesis [24] . Aberrant NF-κB stimulation has been identified in more than 50% of colorectal and colitis-associated tumors [25] . Thus, the expression levels of NF-κB in response to brewers' rice were evaluated in the colons of rats induced with AOM. The NF-κB family is a group of inducible transcription factors that are involved in immune and inflammatory responses and inhibit cell apoptosis. A previous study revealed that cancer cells with activated NF-κB are resistant against chemotherapeutics and ionizing radiation and that suppression of NF-κB activity markedly increases the sensitivity of cells to chemotherapeutic agents [26] . The inhibition of NF-κB transcriptional activity resulting from the administration of brewers' rice was further supported by Biswas et al [27] and Xie et al [28] , who found that phenolic compounds inhibited NF-κB in cell cultures and promoted anti-inflammatory and antioxidant responses. Although the maximum effect was observed in 10% (w/w) brewers' rice, there was no significant difference between groups fed with 10% (w/w) brewers' rice and groups fed with diets containing 20% (w/w) brewers' rice or 40% (w/w) brewers' rice (P > 0.05). The reason for the lack of any clear dosedependence effects remains to be elucidated. One of the possible reasons may be due to the efficiency of brewers' rice involved in the inhibition of NF-κB transcriptional activity reached with 10% (w/w) brewers' rice. Collectively, the data presented in this study suggest that brewers' rice may modulate colon tumor development through NF-κB signaling. 
In addition to the effects observed in Wnt and NF-κB signaling, the role of iNOS in the suppression of colon tumorigenesis elicited by brewers' rice remains unknown. Therefore, we further determined the chemoprevention mechanism of iNOS on brewers' rice in this model. NO is produced during transcription and translation via iNOS, and once active, iNOS synthesizes high NO levels until substrate depletion [29]. However, our study shows contradictory results. It is possible that multiple cellular factors affect the sensitivity of NO, like specific NO metabolism pathways and interactions with other free radicals. The sensitivity of NO may also be associated with the expression of apoptosis-associated proteins, including Bcl-2, Bax, and Fas [30]. Excessive NO production can decrease the concentration of DNA repair enzymes and inhibit apoptosis through the nitrosylation of caspases [31]. The upregulation of iNOS mRNA levels in the current study was consistent with the results obtained by Radomski et al [32] and Dong et al [33], who reported that the expression of iNOS was inversely associated with metastatic activity in human colon cancer and murine melanoma (K-1735) cells. This finding was further supported by Shi et al [34], who demonstrated that iNOS overexpression not only attenuated the proliferation and metastasis of human renal cell carcinomas and murine fibrosarcoma but also induced apoptosis. However, the study conducted by Shi et al [34] contradicted the results reported by Sheng et al [35] and Di Popolo et al [36], who demonstrated that elevated iNOS mRNA and protein levels partially contributed to the inhibition of apoptosis in colon cancer cells. Therefore, the activation of iNOS at the mRNA level may play a critical role in growth inhibition and apoptosis in a human colorectal cancer (HT-29) cell line, as determined in our earlier studies [13,37]. A previous study showed that Nrf2 enhanced the basal expression of cytoprotective genes and suppressed cytokine-mediated inflammation [38]. Thus, the expression of Nrf2 in AOM-induced colon tissue was evaluated to determine whether brewers' rice could modulate Nrf2 at the mRNA level. Nrf2, which belongs to the Cap'n'Collar family of basic region-leucine zipper transcription factors, was shown to be a key element in the antioxidant response element (ARE)-mediated transcriptional machinery [39]. Nrf2 plays a crucial role in the regulation of phase Ⅱ detoxifying and antioxidant enzymes via AREs [40]. To determine whether brewers' rice decreased colorectal cancer by modulating the antioxidant-mediated pathway, we examined the transcription of Nrf2. Treatment with 20% and 40% (w/w) brewers' rice effectively activated the gene expression of Nrf2 and may be associated with the modulation of xenobiotic-metabolizing enzymes and responsible for the balance of carcinogen metabolism against detoxification [41]. Nrf2 is stimulated by an oxidative signal in the cytoplasm, which allows its translocation to the nucleus where it interacts with DNA ARE regions and promotes the expression of cytoprotective enzymes, such as glutathione S-transferase (GST), SOD, HO-1, and NADPH-quinone oxidase (NQO) (ARE-regulated genes) [42]. Our findings indicated that the manipulation of brewers' rice in colonic tumors leads to changes in the gene expression of Nrf2-regulated HO-1, further suggesting that brewers' rice is a positive regulator of Nrf2 signaling.
The transcriptional downregulation of β-catenin and NF-κ B in carcinogen-injected rats after treatment with a brewers' rice diet was hypothesized because the carcinogen metabolism may have been shifted via Nrf2 and HO-1 in the colon. Our present study suggests that the possible chemopreventive mechanisms of brewers' rice against colon carcinogenesis may be associated with both the phase Ⅰ and Ⅱ drug-metabolizing enzymes regulated by Nrf2, thus resulting in the detoxification of AOM and the rapid metabolism of AOM by P450. Collectively, this finding suggests that brewers' rice may represent a promising natural dietary agent for the transcriptional downregulation of β-catenin and NF-κB and the upregulation of Nrf2 and HO-1 levels. In addition to the effects observed in Nrf2 and HO-1 activation, the upregulation of Nrf2 and HO-1 activities in rats administered brewers' rice indicated that brewers' rice may be associated with an antioxidant enzyme. Therefore, the effect of treatments with brewers' rice on the SOD, MDA, and NO activities in AOM-injected rats was examined. The decreased SOD levels in the group treated with AOM alone illustrated that the defense mechanism may have been overwhelmed to alleviate the amount of superoxide produced by the carcinogen. The observed effect may also be due to the impairment of antioxidant enzymes, which act as safeguards for cells during reactive oxygen species (ROS) detoxification [43] . This finding implies that the group treated with AOM alone, in which carcinogenesis was induced but no brewers' rice treatment was administered, exhibited a reduction in SOD activity associated with a decreased antioxidative capacity. The group treated with AOM alone presented an increased MDA level and subsequently, increased lipid peroxidation, which was evident by the accumulation of β-catenin. Taken together, these findings suggest that the increased SOD and decreased MDA formation observed in the groups treated with brewers' rice may be associated with a high total phenolic content and the bioactive compounds present in brewers' rice, as reported by Tan et al [13] . Overall, the data obtained in this study suggest that brewers' rice has the potential to increase SOD levels and reduce the activities of MDA and NO. The transcriptional inhibition of β-catenin and NF-κB activities may lead to a suppression of colon cancer development, which implies that the observed effects can likely be attributed to the dietary compositions present in brewers' rice. Most studies have demonstrated the additive and/or synergistic effects of some phytochemicals and nutrients [44][45][46] . Therefore, in the current study, rather than isolated compounds, brewers' rice was administered to the rats. Results from our earlier study indicated that brewers' rice consisted of a phenolic antioxidant, phytic acid, vitamin E, and γ-oryzanol [13] . The synergistic/additive activities of these components in brewers' rice may contribute to a negative regulation of the Wnt and NF-κB signaling pathways to induce the phosphorylation and degradation of β-catenin and NF-κB expression, as observed in the present study. In addition to the effects observed in the Wnt and NF-κB signaling pathways, it is plausible that the bioactive constituents present in brewers' rice facilitates the modulation of Nrf2 and Nrf2-regulated HO-1 expression, which subsequently enhances the antioxidant enzyme to mediate oxidative stress in the carcinogen-treated brewers' rice-fed groups. 
In conclusion, this study provides clear evidence that brewers' rice offers great potential against colorectal cancer via the regulation of Nrf2 expression and the inhibition of the Wnt and NF-κB signaling pathways. However, this study has been limited to the use of brewers' rice in male Sprague-Dawley rats, and the duration of the treatment was only twenty weeks. Therefore, further studies are warranted in long-term animal studies or human clinical trials to confirm these findings. Uncontrolled signaling through the wingless/ Wnt pathway and overexpression of NF-κB have been reported to play crucial roles in the development of colorectal cancer. Nrf2 is responsible in the regulation of phase Ⅱ detoxification and antioxidant enzymes. Our findings showed that the dietary administration of 40% (w/w) brewers' rice modulated the Wnt signaling pathway. Feeding 20% (w/w) brewers' rice improved the antioxidant level, which indicated that brewers' rice can effectively enhance recovery from oxidative stress induced by AOM. Taken together, these results strongly imply the potential use of brewers' rice in future applications to combat oxidative stress and colon carcinogenesis. ACKNOWLEDGMENTS We acknowledge BERNAS, Seri Tiram Jaya, Selangor, Malaysia for supplying the brewers' rice sample and the laboratory staff of the Laboratory of Cancer Research (MAKNA) UPM for their technical assistance. Background The deregulation of Wnt/β-catenin signaling and overexpression of nuclear factor-kappa B (NF-κB) has been associated with colorectal cancer. Studies have reported that natural products exert many beneficial health effects. Brewers' rice, known locally as temukut, consists of broken rice, rice bran, and rice germ, which is a waste product produced in the rice industry. Although previous studies have demonstrated the anti-colon cancer activity of brewers' rice, the molecular mechanisms underlying these effects have yet to be studied. Research frontiers The authors aimed to investigate the mechanistic action of brewers' rice in regulating the Wnt/NF-κB/Nrf2-signaling pathways and assess the antioxidant effect of these treatments during colon carcinogenesis in male Sprague-Dawley rats. Innovations and breakthroughs This is the first study demonstrating that brewers' rice inhibited colon carcinogenesis via the modulation of multiple signaling pathways. The transcriptional inhibition of β-catenin and NF-κB activities and the activation of Nrf2 and HO-1 may be associated with the synergistic/additive effects of bioactive constituents present in brewers' rice.
Acute Infrarenal Abdominal Aortic and Bilateral Common Iliac Artery Occlusions in an Elderly Female: A Case Report Acute aortic occlusions (AAOs) are rare vascular emergencies associated with high morbidity and mortality. Presenting signs and symptoms vary but typically involve the lower extremities and include mottled skin with diminished pedal pulses, paresis, and severe pain. Prompt recognition and imaging are necessary to prevent rapid deterioration, which can lead to loss of limb or death. Treatment includes surgical or endovascular interventions based on patient-associated risk factors and clot location. We present a 76-year-old female who arrived at the emergency department with an AAO involving the infrarenal abdominal aorta and bilateral common iliac arteries. Efficient physical examination and utilization of computed tomography with angiography of the abdomen and pelvis allowed for the appropriate recognition of the AAO and subsequent successful surgical embolectomy. This case report underscores the importance of an expeditious clinical and radiographic evaluation in patients presenting with lower extremity pain and weakness.

Introduction Acute aortic occlusions (AAOs) are rare vascular events with an incidence of 3.8 per one million person-years, a statistic derived from the identification of cases in a nationwide vascular database maintained for the Swedish population [1]. AAO typically presents with signs and symptoms of acute lower extremity ischemia, which include pain, paresis, mottled extremities, and diminished pulses [2]. More proximal aortic clots can occlude the superior mesenteric artery (SMA), renal arteries, or spinal arteries and present with associated ischemic complications involving the gastrointestinal system, kidneys, and spinal cord [2]. A retrospective study reported a 30-day mortality of 27.7% after surgical intervention for AAO, with 21.5% of patients requiring hemodialysis and 15.4% requiring amputation [3]. These life-threatening complications make a prompt and accurate diagnosis crucial. Without a thorough history and physical examination, this condition may be mistaken for more common causes of lower extremity pain and weakness, such as peripheral neuropathy, acute ischemic stroke, chronic claudication, or acute lumbar disc herniation.

The treatment for AAO involves prompt surgical revascularization of the pelvis and lower extremities [3]. Various modalities can be employed to surgically treat an AAO, including, but not limited to, catheter-directed thrombolysis, axillobifemoral bypass, aortobifemoral bypass, aortoiliac thromboembolectomy, and aortoiliac stenting [3]. Data comparing the various modalities are limited, and one of the most comprehensive comparative analyses took place at a single institution, encompassing AAO cases from 2006 to 2017 [3]. Axillobifemoral bypass, the most commonly employed treatment modality, was shown to have lower operative morbidity and mortality; however, this was at the expense of durability [3,4]. Confidence in the axillobifemoral bypass modality is further strengthened by the procedure being the one most commonly utilized in the treatment of AAO as per a nationwide database in Sweden [1]. Aortoiliac thromboembolectomy was reserved for patients considered high-risk and with minimally diseased aortoiliac segments at this institution [3]. If limb amputation was indicated, it was typically performed prior to revascularization [3]. Revascularization was prioritized when there was potential for a viable limb [3].
A comprehensive retrospective chart review at the aforementioned institution of patients presenting with an AAO between 2006 and 2017 showed an overall postoperative 30-day mortality of 27.7% and an in-hospital mortality of 26.2% [3]. A vascular database maintained at a separate institution showed that patients treated for AAO between the years 2005 and 2013 had a 30-day postprocedure mortality rate of 18% specifically for AAOs occurring distal to the renal arteries [2]. It was also shown that an age above 60, elevated lactate levels, and motor deficits of the lower extremities on presentation were associated with a higher 30-day mortality [3]. Common complications of surgical intervention for AAO involved acute kidney injury, respiratory failure, and cardiovascular issues such as cardiac arrest, myocardial infarction, and new-onset atrial fibrillation [3]. Finally, the median length of hospital stay after intervention for AAO was shown to be 12 days [3].

Case Presentation A 76-year-old female with a past medical history of obesity and atrial fibrillation controlled with warfarin presented to the emergency department (ED) with bilateral lower extremity weakness and pain. The patient noted that her symptoms began while making breakfast, when she lost the ability to move her lower extremities. On presentation to the ED, her vitals included an afebrile temperature of 36.5°C, sinus tachycardia with a heart rate of 130 beats per minute, a respiratory rate of 18 breaths per minute, pulse oximetry of 98% on room air, and notable systolic hypertension with a widened pulse pressure at 180/83 mmHg. During the physical examination, it was observed that the patient was in severe distress, which was evident from her overall appearance. Her skin was significant for a cool, mottled appearance of the lower extremities with dusky-appearing toes. The cardiovascular examination revealed regular tachycardia without palpable femoral, popliteal, or pedal pulses and delayed capillary refill of both feet. The neurological examination was significant for flaccid lower extremities with diminished sensation. The rest of her examination was unremarkable. Due to high clinical suspicion, she was sent for a stat computed tomography (CT) angiogram of the abdomen and pelvis with a differential diagnosis inclusive of AAO and thoracic aortic dissection.

Laboratory measurements performed included a complete blood count, blood chemistry studies, and coagulation studies. The CT angiogram of the aorta demonstrated occlusion of the infrarenal abdominal aorta along with occlusions of the right and left common iliac arteries, with notable reconstitution of the right and left external iliac arteries secondary to collaterals (Figure 1). The celiac artery was also found to be occluded at its origin. Additionally, 50% stenosis was noted in the SMA due to plaque formation, and mild stenosis was observed in the inferior mesenteric artery secondary to calcified plaque. Plaque formation was also observed at the origins of both the right and left renal arteries without significant stenosis.
FIGURE 1: Computed tomography with angiography of the abdomen and pelvis showing occlusion of the infrarenal abdominal aorta and right and left common iliac arteries (red arrow).

The results of the CT angiogram warranted surgical intervention involving embolectomies of the abdominal aorta, bilateral iliac arteries, and bilateral femoral arteries. Her vascular surgery service estimated an ischemia time of four hours (time of symptom onset until presentation in the operating room). Following the embolectomies, the operative note documented strong femoral and pedal pulses. On postoperative day 1, her mild acute renal injury and lactic acidosis resolved. Ultimately, the patient did not require amputation and, on postoperative day 6, was discharged home with home-based physical therapy. At a six-week follow-up visit with her cardiologist, a normal gait and pedal pulses were noted. She was recommended to continue all home medications and to follow up with a pulmonologist regarding her insomnia.

Discussion Primary mechanisms typically implicated in the development of an AAO involve embolism and thrombosis [5]. Risk factors for embolism, as in this case, include female gender and the presence of heart disease (e.g., atrial fibrillation), whereas risk factors for thrombosis involve smoking and diabetes mellitus [5]. Furthermore, the elderly female patient in this case was noted to be obese, a condition that has been associated with hypercoagulability and can potentially lead to deviations in aortic blood flow [6]. Future research efforts should address the potential link between obesity and AAO, because obesity is a known risk factor for other more common causes of intra-abdominal vascular catastrophe, such as aortic dissection and aneurysms [7,8]. Nevertheless, the increased use and availability of CT angiography reflect a paradigm shift away from traditional angiography for formally diagnosing AAO [9]. In fact, CT angiography is now considered the first-line imaging modality for the rapid diagnosis of primary aortic occlusions; this term refers to AAO without aortic atherosclerosis or aneurysm and further describes our case [9]. CT angiography also offers the potential to rapidly rule out or identify unexpected causes of AAO that could have ramifications for the treatment plan and outcome of the patient [3]. Additionally, the surgical modality of embolectomy chosen in this case differs from the most common procedure typically chosen to treat AAO, the axillobifemoral bypass [3]. This procedure was chosen after ensuring the patient was an appropriate candidate, as patients with chronic atherosclerosis involving the affected area or prior iliac stents are deemed not eligible for this option [3].

Conclusions This case report presents an AAO in a high-risk patient based on age, along with the presence of obesity and atrial fibrillation. Due to high clinical suspicion and prompt diagnostic testing, our patient received the most appropriate surgical treatment, allowing her to make a complete recovery and avoid dire complications such as amputation. Additionally, the patient was able to be discharged home in a timeframe significantly shorter than the median length of stay. Because AAO is very rare, a future meta-analysis and systematic review may be the only way to provide definitive diagnostic and treatment recommendations for this life-threatening condition.
Proof of concept: Predicting distress in cancer patients using back propagation neural network (BPNN) Background Research findings suggest that a significant proportion of individuals diagnosed with cancer, ranging from 25% to 60%, experience distress and require access to psycho-oncological services. Until now, only contemporary approaches, such as logistic regression, have been used to determine predictors of distress in oncological patients. To improve individual prediction accuracy, novel approaches are required. We aimed to establish a prediction model for distress in cancer patients based on a back propagation neural network (BPNN). Methods Retrospective data was gathered from a cohort of 3063 oncological patients who received diagnoses and treatment spanning the years 2011–2019. The distress thermometer (DT) has been used as screening instrument. Potential predictors of distress were identified using logistic regression. Subsequently, a prediction model for distress was developed using BPNN. Results Logistic regression identified 13 significant independent variables as predictors of distress, including emotional, physical and practical problems. Through repetitive data simulation processes, it was determined that a 3-layer BPNN with 8 neurons in the hidden layer demonstrates the highest level of accuracy as a prediction model. This model exhibits a sensitivity of 79.0%, specificity of 71.8%, positive predictive value of 78.9%, negative predictive value of 71.9%, and an overall coincidence rate of 75.9%. Conclusion The final BPNN model serves as a compelling proof of concept for leveraging artificial intelligence in predicting distress and its associated risk factors in cancer patients. The final model exhibits a remarkable level of discrimination and feasibility, underscoring its potential for identifying patients vulnerable to distress. Introduction Distress among patients with cancer is characterized as a complex and unpleasant encounter encompassing physical, social, psychological (cognitive, behavioral, emotional), and/or spiritual aspects, which can impede effective coping with the disease, its associated manifestations, and treatment [1]. Studies reveal that a significant amount of patients with cancer (25%-60%) state to be distressed when evaluated, emphasizing the need for psycho-oncological services [2][3][4]. Consequently, all patients with cancer should be screened for distress [1,5,6]. A widely used instrument for distress screening is the distress thermometer (DT), combined with a problem list. Nurses regularly administer the DT during hospitalization and outpatient care as part of the standard procedure [7]. It is important to identify sources and predictors of distress, so that hospital staff has the opportunity to implement specific interventions, which reduce psychosocial burden. Previous studies have emphasized several factors associated with higher levels of distress. These include female sex [8,9], younger age [10,11], unmarried patients [10], patients diagnosed with specific types of cancer (e.g. of the breast, lung, colon, pancreas, brain, or head and neck) [12][13][14], low social support and increased fear of recurrence [15], as well as a lower level of life satisfaction [16]. Moreover, recent research identified several items of the DT's problem list as potential sources of distress. 
On the one hand, there are nonphysical predictors from the emotional domain [11,17,18], such as nervousness [19,20] or depression [20], and items from the practical and family domain [11], such as financial strain [21] or dealing with children at home [17]. On the other hand, physical problems such as pain [20], appearance [17] and fatigue [16] also predicted distress in cancer patients. The already extensive literature on factors predicting distress in patients with cancer provides an adequate basis for future research.

While distress in cancer patients is influenced by various interconnected factors, previous research has predominantly relied on conventional prediction models, such as logistic regression. However, these statistical approaches often fall short in fully capturing the multifaceted nature and interdependencies underlying distress, leading to less accurate predictions. To tackle this challenge, more precise estimation of distress predictors can be achieved by applying innovative approaches, such as the back propagation neural network (BPNN). BPNN derives from the artificial neural network (ANN), which emulates the human brain. By leveraging its self-organizing, self-learning and adaptive nature, ANN can effectively discern intricate non-linear associations among variables. Therefore, utilizing BPNN allows for a more comprehensive understanding of the complex relationships involved in distress prediction [22]. As a widely used and relatively advanced method, introduced over 40 years ago, BPNN has already proven its ability to contribute to the diagnostic workup of psychiatric and psychosomatic disorders, to predict the length of psychiatric hospitalization and to find an accurate prediction model for suicide attempts [23][24][25][26][27][28].

As a novelty, this study is the first that aimed to establish a prediction model for distress in patients with cancer based on BPNN. This study might serve as a proof of concept, possibly encouraging other researchers in this field to use artificial intelligence. Our prediction model using BPNN is better suited for identifying cancer patients at risk of distress, especially in large samples with diverse participant characteristics, surpassing the limitations of conventional methods [22][23][24][25][26][27][28]. Moreover, the results might help improve individual prediction accuracy and develop more precise prevention and intervention strategies for cancer patients vulnerable to distress.

Subjects and data collection

For this study, we retrospectively collected the data from the case files of oncological outpatients and inpatients (diagnosed and treated at the Comprehensive Cancer Center Zurich at the University Hospital Zurich from 2011 to 2019). The study sample selection process is depicted in Fig. 1. There were 13174 cases with a primary diagnosis of cancer between 2011 and 2019. Before treatment began, more than half of the patients agreed to the reuse of their data for research purposes (general consent; 55% of the 13174 cases with a primary cancer diagnosis). Patients were informed about the utilization of their data for research projects (non-genetic) and about their right to object to their former consent without justification at all times. The patient's consent is saved in the hospital information system.

Fig. 1. Study sample selection process. r = rejected; u = unknown; GC = general consent.
Patients who objected to participate (n = 1728), with unknown general consent (n = 3088), and who were treated as outpatients (n = 1040) only, were also excluded from the study. The Ethics Committee of the State of Zurich, Switzerland, approved the study (BASEC NR. 2020-00977; June 2020). Of the remaining 7317 cases, 3063 were screened for distress and formed the final study sample. Distress screenings up to six months after the initial cancer diagnosis were included in the analysis. Table 1 illustrates the absolute and relative distributions of cancer entities in the final study sample. Measurements Distress levels were evaluated using the Distress Thermometer (DT), which employs an 11-point visual scale (range 0 (no distress) to 10 (maximum distress)). Past studies have proposed a cut-off score of ≥5 as a threshold for considering referral to psychooncological services [29,30]. All patients with a DT score of ≥5 were asked if they wanted to call on psycho-oncological services. If those patients demand help, the treating physician refers them to psycho-oncological services, provided by a psychiatrist or psychologist at the hospital. This process is also described in Ref. [29]. As candidate predictors for distress, we considered socio-demographic variables, such as age, sex, marital status, mother tongue (German) and nationality, cancer entities, stage of cancer at initial diagnosis, psychiatric disorders, the length of hospital stay, medication administered during the first six months after initial cancer diagnosis and all items from the problem list. Cancer stage was categorized in accordance with the guidelines established by the Union for International Cancer Control (UICC) [31]. Furthermore, we analyzed the age-adjusted Charlson comorbidity index (CCI) as a potential distress predictor. The CCI illustrates the impact of chronic comorbid diseases on mortality [32][33][34]. All candidate predictor variables were dichotomized (0 = No and 1 = Yes (one encoding)). Age and length of hospital stay were measured on a metric scale. Nationality was categorized into 1 = Switzerland, 2 = Europe (including Switzerland) and 3 = non-European. All data were accessed via the clinical management software (KISIM) and the institutional cancer register (OncoStar). Statistical analysis Relative and absolute distributions, mean scores, and standard deviation were computed to describe qualitative and quantitative data. Preliminary screening of predictor variables was performed with logistic regression analysis. BPNN was utilized to build the final prediction model for distress in patients with cancer. Level of significance (two-sided p-value) was p < 0.05. IBM SPSS Statistics, version 27 (Chicago, IL), was used for statistical analysis, and Matlab (R2019a. Mathworks Inc.) for BPNN computations. Information flows from the input layers, through the hidden layers, and eventually reaches the output layers in a feedforward neural network. This one-directional movement of information characterizes it as a feedforward network. The primary objective is to reduce disparities between the computed output and the desired output from the training sample. To achieve this, the network undergoes iterations by virtue of the error rate received in the preceding iterations. By continuously reducing the error rates, the reliability and generalizability of the backpropagation neural network (BPNN) model are enhanced. 
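To make the feedforward and backpropagation cycle described above concrete, the following minimal NumPy sketch trains a three-layer network on synthetic binary data. It is a didactic stand-in rather than the Matlab implementation used in this study; the toy labeling rule, learning rate, and all variable names are invented for the example.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 13 binary predictors, 1 binary outcome (synthetic rule, not study data).
X = rng.integers(0, 2, size=(200, 13)).astype(float)
y = (X[:, :3].sum(axis=1) > 1).astype(float).reshape(-1, 1)

n_in, n_hidden, n_out = 13, 8, 1
W1 = rng.uniform(-1, 1, (n_in, n_hidden));  b1 = rng.uniform(-1, 1, n_hidden)
W2 = rng.uniform(-1, 1, (n_hidden, n_out)); b2 = rng.uniform(-1, 1, n_out)

lr = 0.5
for epoch in range(2000):
    # forward pass: input layer -> hidden layer -> output layer
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the output error back through both layers
    err_out = (out - y) * out * (1 - out)
    err_hid = (err_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ err_out / len(X); b2 -= lr * err_out.mean(axis=0)
    W1 -= lr * X.T @ err_hid / len(X); b1 -= lr * err_hid.mean(axis=0)

pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
print("training coincidence rate:", (pred == y).mean())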
Consequently, BPNN serves as a widely adopted approach for managing and adaptively monitoring artificial neural networks [22,25].

Table 2 depicts sociodemographic and further characteristics of the final study sample. One third of the study sample is female. The average age (±SD) was 61.5 ± 13.9 years (range 18-95).

Preliminary screening of potential predictor variables of distress on the basis of logistic regression analysis

To preliminarily screen possible predictor variables for distress, we applied binary and multivariable logistic regression analysis. In the regression analysis, a dichotomized distress score ≥5 versus <5 was the dependent variable. In total, there were more than 60 independent variables to be screened as candidate predictors for distress in patients with cancer. Scatter plots indicated that there was an approximately linear relationship between each independent variable and the dependent variable. Thus, logistic regression analysis is an adequate method to preliminarily screen candidate predictor variables.

Table 3 highlights the 13 significant independent variables derived from the significant logistic regression model for distress in cancer patients (χ² = 1073.556, p < 0.001). Cox & Snell R² and Nagelkerke R² indicate a high model quality (0.295 and 0.396, respectively). Binary logistic regression indicated that, among the large number of candidate predictors, only certain items of the problem list predicted cancer patients' distress with statistical significance. The most significant predictor for distress was fear, followed by sadness. Patients reporting fear or sadness were about 206% and 308% more likely to feel distressed than those who did not indicate these emotional problems. Other emotional problems, such as depression, nervousness and worries, predicted distress as well. Practical problems regarding work or school as well as physical problems, such as digestive problems, immobility, insomnia, fatigue and pain, also predicted distress, but to a lesser extent. The receiver operating characteristic (ROC) curve of the binary logistic regression model showcases a sensitivity of 78.4% and a specificity of 72.9%, along with an area under the curve (AUC) of 0.825 (Fig. 2). These results indicate a high level of discrimination in the model.

Structure of the BP neural network

In our study, we employed a standard three-layer backpropagation neural network (BPNN) architecture comprising an input layer, a hidden layer, and an output layer. The input variables for the BPNN prediction model, aimed at assessing distress, were derived from the 13 significant independent variables identified through logistic regression. The dependent variable is a dichotomized distress score (1 = distress score ≥5, 0 = distress score <5), which was set as the output variable. We calculated the number of neurons in the hidden layer using the equation H = √(M + N) + α, where α represents a constant from one to ten, N the number of input neurons, and M the number of output neurons. Fig. 3 displays the BPNN structure.

BP neural network training

The dataset was randomly partitioned into three subsets: the training data (70%), the validation data (15%), and the test data (15%). Varying the number of neurons in the hidden layer yields distinct values for the coincidence rate (π), positive and negative predictive value, sensitivity, specificity, and the number of iterations.
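The sizing rule and evaluation indices above can be explored in a few lines; the sketch below uses scikit-learn on placeholder data and is only an illustration of the procedure (the rounding of H, the toy outcome rule, and the MLP settings are assumptions of the example, not the study's Matlab pipeline).

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Placeholder data: 13 dichotomized predictors and a toy distress label (DT >= 5).
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(3063, 13))
y = ((X[:, 0] + X[:, 1]) > 0).astype(int)   # synthetic outcome so the net can learn

# Candidate hidden-layer sizes from H = sqrt(N + M) + alpha with alpha = 1..10.
N, M = 13, 1
candidates = [int(np.sqrt(N + M)) + a for a in range(1, 11)]   # 4 .. 13

# 70 / 15 / 15 split into training, validation and test data.
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.30, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.50, random_state=0)

for h in candidates:
    net = MLPClassifier(hidden_layer_sizes=(h,), max_iter=2000, random_state=0)
    net.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_val, net.predict(X_val)).ravel()
    sensitivity, specificity = tp / (tp + fn), tn / (tn + fp)
    ppv, npv = tp / (tp + fp), tn / (tn + fn)
    coincidence = (tp + tn) / (tp + tn + fp + fn)
    print(h, round(sensitivity, 3), round(specificity, 3),
          round(ppv, 3), round(npv, 3), round(coincidence, 3))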
A more accurate BPNN model is achieved by a higher coincidence rate and a lower number of iterations. In our BPNN model for assessing distress in cancer patients, the hidden layer neurons were explored within a range of four to thirteen. Through training of the BP neural network and performing repetitive data simulations for different numbers of hidden layer neurons using Matlab, it was determined that the most precise evaluation indices were obtained when the number of neurons fell within a range of seven to nine (Table 4). During the training of the BPNN, varying the weight coefficients (w) and threshold values (b) results in distinct learning models. As the initial values of (w) and (b) are randomly chosen from a range of −1 to 1, repetitive data simulations were conducted using Matlab, with distinct initial values of (w) and (b), along with a range of seven to nine neurons in the hidden layer. Evaluating the performance based on the evaluation indices, Network 4, consisting of eight neurons in the hidden layer, emerges as the ideal BPNN model for predicting distress in cancer patients (Table 5).

BP neural network verification

The error histogram of our BPNN model is displayed in Fig. 4, which suggests a high BPNN model quality with an excellent discrimination efficiency and predominantly small errors, by virtue of the errors' concentration around the zero error line in most cases.

Discussion

In this study, using data from the DT and problem list, it was possible to build a precise prediction model for distress in cancer patients. To the best of our knowledge, this study is the first to examine predictors of distress in cancer patients utilizing BP neural network analysis, thus serving as a proof of concept study for further research embracing advanced statistics (broadly referred to as machine learning or artificial intelligence).

The majority of our final study sample (more than 55%) reported being distressed and in need of psycho-oncological services, which is in line with past studies that explored the prevalence of distress in oncological patients [2][3][4][35]. Contrary to prior studies [8][9][10][11][12][13][14], our study did not identify sociodemographic aspects, such as age, sex or marital status, or specific types of cancer as predicting distress. There are multiple reasons possibly explaining this observation. First, our study uses data from Switzerland collected between 2011 and 2019. By contrast, prior studies on predictors of distress used more historic data collected in the United States of America or other parts of the globe, thus involving other treatment regimens and lifestyles. Second, some studies included a much smaller sample size with only a few hundred cases [8][9][10][11][14], or only specific types of cancer [10,11]. Our study comprises more than 3000 cases, all types of cancer and all currently available cancer treatments (including immunotherapy). Other examined variables, such as mother tongue, nationality, psychiatric diagnoses, length of hospital stay, administered medication and the CCI, did not predict distress in oncological patients. To our knowledge, there is no other published data considering these variables. Future studies should include these variables to validate our results. However, in accordance with the already existing literature [15][16][17][18][19][20][21], many items from the problem list (physical, emotional and practical problems) predicted distress in our study sample.
The largest risk factors for distress in our study sample were fear, sadness and worry from the emotional domain, which is consistent with previous studies [7]. Proof of concept is achieved by meticulously describing the statistical procedures in the results section: in summary, after applying logistic regression analysis to filter potential predictors of distress, the 13 significant independent variables from the problem list were utilized as input variables for the BPNN prediction model. The output variable was a dichotomized distress score of ≥5 versus <5. Subsequently, the data was randomly divided into a verification sample, a test sample and a training sample. The most precise prediction model for distress in oncological patients is network 4, with 8 neurons in the hidden layer, as was demonstrated by our 3-layer BP neural network training and verification. The confusion matrices show a coincidence rate (π) of more than 75% for each sample and a total coincidence rate of 75.9%. The ROC curve of the BPNN model is demonstrated in Fig. 6; the total AUC added up to 0.827 and is another indication of the high level of discrimination. The model featured a relatively high sensitivity (79.0%), a relatively high positive predictive value (78.9%), a total coincidence rate (π) of 75.9%, and a specificity and negative predictive value above the 70% threshold. Thus, these results indicate a clinical significance of the constructed BPNN model. The model displays a high level of discrimination and could help users to easily screen whether cancer patients are vulnerable to distress.

In sum, the BPNN model is slightly superior to our logistic regression model. However, our BPNN model, with a relatively low number of 13 independent variables, cannot reach its full potential. A higher quantity of independent variables would probably lead to a more significant superiority compared to conventional regression analysis. Criticisms of BPNN point out that it might not be suitable to manage real-world data and is best used in combination with other methods [25,28]. The present study provides evidence to the contrary. While logistic regression analysis is able to preliminarily screen a great number of potential risk factors, BPNN has the advantage, by virtue of its simulation of the human brain and its flexibility, of a high level of discrimination and precise data fitting. Thus, BPNN is more suitable to accurately predict outcomes for the individual patient [36]. The integration of both methods might simplify the detection of risk factors for distress in patients with cancer.

Several limitations exist within our study. Firstly, the data utilized was derived from a single cancer care center (CCC) located in Switzerland, which may restrict the generalizability of our findings to other settings. However, it is worth noting that our study encompassed various types of cancer and treatment options. Additionally, the procedures and instruments employed for distress screening exhibit similarities across different countries worldwide [7]. Furthermore, record keeping of patient-related information in our study adhered to stringent institutional standards. As a result, the data's quality and the reliability of the obtained results are notably high. However, it is essential to acknowledge a significant limitation of our study, namely the underrepresentation of certain types of cancer, including gynaecological, urological, and endocrinological malignant neoplasms (see Table 1).
This is a possible explanation for why female sex did not predict distress in cancer patients. Interinstitutional differences might be responsible for low screening rates for certain cancer entities, as was already suspected by Ref. [29]. Shortages of staff, time, and skilled workers may be reasons for these differences [37,38]. Since the largest part of our patients were male (two-thirds), prospective studies may want to include a greater share of female patients. Furthermore, different DT cut-off scores for specific patient populations were assumed by previous studies, such as lower cut-off scores for newly diagnosed patients [39] or higher cut-off scores for women recently diagnosed with breast cancer [40], which complicates comparability between study findings. However, only one specific DT cut-off score is used in the hospital where our study was conducted. Lastly, there does not exist any standard process regarding the construction of the BP neural network. The network design is solely based on individual expert knowledge and repetitive data simulation processes. Dealing with a great number of input variables is another issue, which can be solved by combining BPNN with traditional methods.

Conclusions

In this study, the majority of patients with cancer reported being distressed. Overall, we identified 13 distress predictors in cancer patients: emotional problems, such as fear and sadness, physical problems, such as pain and fatigue, and practical problems regarding work or school. Furthermore, this study provided proof of concept for a BPNN distress prediction model in cancer patients, which improves individual prediction accuracy compared to conventional methods. Our work sets the theoretical foundation for artificial-intelligence-assisted prediction of risk factors associated with distress. The integration of BPNN distress prediction algorithms in the hospital information system might be one example. Because this study was conducted in an institution similar in structure, treatment processes and patient population to those of other national and international CCCs, the results are generalizable. Prospective studies should try to replicate and extend our results. Inspired by our results, finding more precise prevention and intervention strategies for cancer patients vulnerable to distress may be another desirable goal of future studies.

Jan Ben Schulze: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Wrote the paper. Marc Dörner: Conceived and designed the experiments; Analyzed and interpreted the data; Wrote the paper. Moritz Philipp Günther: Conceived and designed the experiments; Analyzed and interpreted the data. Roland von Känel: Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data. Sebastian Euler: Conceived and designed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data.

Data availability statement

Data will be made available on request.

Additional information

No additional information is available for this paper.

Ethics approval and consent to participate

The study underwent a thorough review and received approval from the Ethics Committee of the State of Zurich, Switzerland (Ref.-No. BASEC-NR. 2020-00977). It is important to note that this study is retrospective in nature, and therefore, formal consent from participants is not required.
The Ethics Committee that approved the study also waived the need for informed consent (Ethics Committee of the State of Zurich, Switzerland, BASEC NR. 2020-00977; June 2020). The authors affirm that all procedures conducted in this study adhere to the ethical standards set by the relevant national and institutional committees for human experimentation. Furthermore, the study aligns with the principles outlined in the Helsinki Declaration of 1975, as revised in 2008.

Consent for publication

Not applicable.

Data and Materials Availability

The datasets utilized and/or analyzed during the present study can be obtained from the corresponding author upon reasonable request.

Funding

This research received no specific grant from any funding agency, commercial or not-for-profit sectors.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
MFP-DeepLabv3+: A Multi-scale Feature Fusion and Parallel Attention Network for Enhanced Bone Metastasis Segmentation

INTRODUCTION

Bone metastasis is a complex biological process whereby malignant tumor cells migrate from the primary tumor site through the bloodstream or lymphatic system to the bone tissue, forming secondary tumors. This process involves multiple steps including invasion, shedding, entry into the bloodstream, colonization, and growth of tumor cells [1]. According to statistics, approximately 70% of cancer patients experience bone metastasis, with breast, prostate, lung, and renal cancers being the most common. Bone metastasis not only severely damages skeletal structure and function but also leads to clinical symptoms such as bone pain, fractures, and hypercalcemia, significantly impacting patients' quality of life and survival rates [2].

Currently, the predominant diagnostic methods for bone metastasis in clinical practice primarily rely on medical imaging examinations, such as X-ray radiography, CT scanning, and magnetic resonance imaging (MRI), as well as serum biomarker detection [3]. However, traditional medical imaging analysis methods are constrained by physicians' subjective experience and workload, leading to issues of diagnostic accuracy and consistency. Consequently, automated bone metastasis pathology segmentation technology has become a focal point of research, aiming to utilize computer vision and deep learning technologies to achieve automatic identification and segmentation of regions of bone metastatic lesions in medical imagery.

Deep learning models have been widely used in medical segmentation tasks [4]; they can be trained on vast amounts of medical imaging data to automatically extract features from images and achieve accurate identification and segmentation of regions of bone metastatic lesions. Compared to traditional manual segmentation methods, automated segmentation techniques based on deep learning offer advantages such as rapid recognition speed, high accuracy, and strong reproducibility [5]. These techniques can alleviate the workload of physicians, improve medical efficiency and diagnostic accuracy, and provide an important auxiliary diagnostic tool for clinical medicine [6].

In recent years, significant progress has been made in the field of bone metastasis through deep learning-based medical image analysis techniques. Song et al. [7] optimized the holistically-nested edge detection (HED) network to enhance the recognition capability of tiny bone metastatic regions in CT images. By removing the terminal pooling layers and introducing additional lateral connection layers, more precise edge detection was achieved, thereby improving the perception and capture of small targets. Ntakolia et al. [8] proposed a lightweight network called LB-FCN light, focusing on the classification of bone metastases in prostate cancer patients. This network, through multi-scale feature extraction and residual connection techniques, effectively classified bone metastases, emphasizing its lightweight nature in terms of parameters and computational resources, making it suitable for resource-constrained scenarios.
Lin et al. [9] proposed a semi-supervised segmentation method based on deep learning, capable of automatically detecting and delineating metastatic lesions in bone scan images. This method utilizes a small amount of manually labeled samples for training, significantly reducing the human resources required for annotation, and providing an effective solution for medical image analysis tasks with high demands for annotated data. Noguchi et al. [10] proposed a bone segmentation network, a candidate region segmentation network, and a false-positive reduction network using deep convolutional neural networks such as U-Net and ResNet, aiming to achieve automatic segmentation of bone metastatic tumors in CT images. Liu et al. [11] developed an improved UNet3+ network model for the automatic segmentation of bone metastasis lesions on SPECT bone scan images. The model enhanced feature fusion by modifying the full-scale deep supervision module and introduced an attention mechanism to focus on focal regions.

Although the aforementioned methods have made significant strides in detecting and segmenting bone metastases, they still exhibit several limitations:

1. The network architectures exhibit notably intricate structures, leading to substantial time consumption during both training and inference phases, alongside heightened computational demands. Such complexities impose limitations on the applicability of these networks, particularly in resource-constrained medical and clinical environments.
2. During feature extraction, there is a potential oversight regarding the interaction between feature channels, resulting in inadequate extraction of channel information.
3. Insufficient extraction and integration of deep features across various hierarchical levels culminate in the loss of crucial semantic information.

The DeepLabv3+ network [12] architecture is notably lightweight, leveraging an encoder-decoder framework, wherein the encoder network extracts deep features from input images. The atrous spatial pyramid pooling (ASPP) module utilizes dilated convolutions with varying rates to capture multi-scale contextual information, thereby enhancing the semantic representation capability of features. Subsequently, the decoder network is employed to restore the resolution of feature maps, enabling precise pixel-level segmentation. To overcome the aforementioned limitations and take advantage of DeepLabv3+, this paper proposes a multi-scale feature fusion and parallel attention network based on DeepLabv3+ (MFP-DeepLabv3+). The main contributions are as follows:

1. To address issues in the atrous spatial pyramid pooling (ASPP) module of DeepLabv3+, such as overlapping information extraction and detail loss, the adaptive feature fusion and pooling (AFFP) module is proposed to achieve multi-scale feature extraction more efficiently, thereby enhancing model performance.
2. To comprehensively extract channel information, the parallel spatial-channel attention network (PSCAN) is proposed to empower the network to intensify its focus on both channel and spatial information simultaneously during image feature extraction.
3. To meet practical demands, this paper selects the lightweight MobileNetv2 [13] as the backbone network. Considering that different network layers convey distinct depths of information, a multi-layer skip connection strategy is proposed; the incorporation of multi-layer skip connections effectively integrates global semantic information, thereby enhancing the network's capability to tackle diverse image segmentation tasks.

METHOD

In this paper, we propose a multi-scale feature fusion and parallel attention network (MFP-DeepLabv3+) for enhanced bone metastasis segmentation. As shown in Figure 1, MFP-DeepLabv3+ utilizes the lightweight MobileNetv2 as its backbone network, incorporates AFFP for multi-scale feature extraction, and introduces PSCAN for weighting the features obtained from AFFP. These weighted features are subsequently fused with the features from the deep, intermediate, and shallow layers of the MobileNetv2 backbone network to enhance the performance of image segmentation.

AFFP

The ASPP module primarily relies on a series of dilated convolutional layers [14] with varying sampling rates for multi-scale feature extraction. However, this approach can result in the extraction of overlapping information, leading to the generation of redundant features. These redundancies not only augment the computational burden of the model but also escalate the time and resource costs associated with both training and inference, thereby compromising the model's generalization capability. Moreover, ASPP employs global average pooling to aggregate information across the entire feature map, which is susceptible to losing detailed information pertaining to edges and local regions, consequently impeding the effectiveness of global feature integration.

To address the aforementioned problems, this paper proposes the AFFP. AFFP comprises three components: the feature selection module (FSM), the feature reconstruction module (FRM), and the spatial pyramid pooling with max-pooling (SPPM). Initially, FSM adopts a cross-reconstruction approach to manage features of varying information densities, obtaining spatially reconstructed features. This technique aims to preserve crucial feature information while mitigating redundancy. Subsequently, the spatially reconstructed features are fed into FRM, facilitating efficient multi-scale feature extraction. Additionally, SPPM is utilized to effectively retain intricate image details, thereby enhancing model performance.

FSM

FSM employs a cross-reconstruction approach for feature reconstruction, thereby obtaining spatially reconstructed features, as shown in Figure 2.
Initially, FSM utilizes group normalization to assess the information content of different feature maps using scaling factors, thereby quantifying and evaluating the importance of each feature map. Subsequently, the obtained information-content weights are normalized, reflecting the significance of the various feature mappings. Following this, the feature maps are reweighted with the normalized weights, which are mapped to the (0, 1) range using the sigmoid function and passed through a threshold gating process (threshold set to 0.5). Weights above the threshold are assigned to 1, giving the informative weights, and weights below the threshold are assigned to 0, giving the non-informative weights. The input features are then multiplied by each of the two weight maps separately, obtaining two weighted features: a feature-rich part and a feature-scarce part. Both parts are partitioned equally along the channel dimension, and the information flow between them is enhanced by cross-reconstruction operations. Finally, the two cross-reconstructed features are concatenated to obtain the spatially reconstructed features.

FRM

FRM employs multi-scale rich feature extraction on the spatially reconstructed features. As shown in Figure 3, the spatially reconstructed features are first divided evenly into two parts along the channel dimension, and a 1 × 1 convolution kernel is applied to each part for channel compression. The first compressed feature is processed by a dual-branch extraction process: one branch employs dilated convolution to expand the receptive field and capture broader spatial information, while the other branch employs a 1 × 1 pointwise convolutional layer; the outputs of these branches are concatenated. The second compressed feature is processed by a single branch with a 1 × 1 convolutional layer, which is then concatenated with the original feature through a residual branch. Subsequently, both resulting features undergo global average pooling to aggregate global spatial information and channel-wise statistics, and softmax is applied to the pooled results to derive feature weight vectors. Finally, the output is obtained by weighting the two features with their respective weight vectors and combining them, effectively reducing the spatial and channel redundancies common in standard convolutions and thereby enhancing model efficiency and performance.
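As a rough illustration of the dual-branch idea just described, the following PyTorch sketch splits the channels, refines one half with a dilated-plus-pointwise branch and the other with a pointwise-plus-residual branch, and reweights both halves with pooled softmax attention. The module name, the exact channel bookkeeping, and the way the residual is folded in are illustrative simplifications, not the paper's exact FRM design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DualBranchRefine(nn.Module):
    """Simplified FRM-style block: split channels, refine each half, then
    reweight both halves with softmax weights derived from global pooling."""
    def __init__(self, channels, dilation=3):
        super().__init__()
        half = channels // 2
        self.compress1 = nn.Conv2d(half, half, 1)
        self.compress2 = nn.Conv2d(half, half, 1)
        self.dilated = nn.Conv2d(half, half // 2, 3, padding=dilation, dilation=dilation)
        self.point1 = nn.Conv2d(half, half - half // 2, 1)
        self.point2 = nn.Conv2d(half, half, 1)

    def forward(self, x):
        x1, x2 = torch.chunk(x, 2, dim=1)                    # split by channel count
        x1, x2 = self.compress1(x1), self.compress2(x2)      # 1x1 channel compression
        y1 = torch.cat([self.dilated(x1), self.point1(x1)], dim=1)   # dual branch
        y2 = 0.5 * (self.point2(x2) + x2)                    # pointwise + residual (averaged here for simplicity)
        pooled = torch.stack([y1.mean(dim=(2, 3)), y2.mean(dim=(2, 3))], dim=0)
        w1, w2 = F.softmax(pooled, dim=0)                    # per-channel weights across the two halves
        out = torch.cat([y1 * w1[:, :, None, None], y2 * w2[:, :, None, None]], dim=1)
        return out

feats = torch.randn(2, 64, 32, 32)
print(DualBranchRefine(64)(feats).shape)   # torch.Size([2, 64, 32, 32])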
SPPM

ASPP employs global average pooling layers to conduct averaging operations across the entire feature map, aiming to extract comprehensive contextual information from the image and integrate it into the feature representation. However, this conventional pooling approach, which directly averages feature values, leads to blurring or overlooking of fine-grained details within edges and local regions. Consequently, there is a loss of emphasis on the intricate details of image features. To address this challenge, this paper proposes SPPM, which aims to enhance the capture of detailed global feature information.

As shown in Figure 4, the input features first undergo dimensionality reduction through a 1 × 1 convolutional layer. Subsequently, these features are fed into three max-pooling modules with different kernel sizes for further processing. To keep the output size matching the input size and avoid cropping of the input feature boundaries, padding is applied during the pooling operation. Each max-pooling module is dedicated to extracting the most prominent features from individual regions. The processed features are organized into a list and merged with the residual branch, which preserves the original features through a 1 × 1 convolutional layer. Compared to traditional global average pooling, SPPM demonstrates enhanced preservation of image details, resulting in improved model performance.

PSCAN

At the forefront of computer vision research, attention mechanisms, crafted to emulate the selective focus of the human visual system on particular elements of a visual scene, have markedly augmented the efficacy of various visual tasks. Channel attention mechanisms and spatial attention mechanisms, as two prevalent attention mechanisms, delve into the channel dimension and the spatial dimension of image features, respectively, thereby effectively enriching the complexity and discriminative capability of feature representations.

Channel attention mechanisms evaluate the importance of various channels and employ a weighted approach to augment channels harboring pivotal information while dampening irrelevant ones. This empowers the model to adeptly apprehend abstract and nuanced features. Spatial attention mechanisms concentrate on directing focus towards diverse regions within an image, enabling them to discern and intensify attention upon pivotal objects or areas of interest, thereby optimizing computational resource allocation and augmenting the model's acuity to local features.

To empower the model to concurrently extract spatial and channel feature information from images, thereby facilitating comprehensive analysis and showcasing heightened resilience in tackling complex tasks, this paper proposes PSCAN. PSCAN consists of two parallel branches: spatial self-attention and channel self-attention. The spatial self-attention branch is tasked with capturing interactions among features within the spatial dimensions H and W, while the channel self-attention branch is dedicated to capturing interactions among feature channels.
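Before the two branches are described in detail below, a skeletal PyTorch sketch of the parallel arrangement may help orient the reader. The internal gating of each branch is deliberately simplified here to a single 1 × 1 convolution plus sigmoid, so this is only a stand-in for the richer branch designs that follow; the class and variable names are illustrative.

import torch
import torch.nn as nn

class ParallelAttention(nn.Module):
    """Skeleton of a PSCAN-like block: a spatial branch and a channel branch
    run in parallel on the same input and their outputs are averaged."""
    def __init__(self, channels):
        super().__init__()
        self.spatial_gate = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())
        self.channel_gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                          nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x):
        spatially_enhanced = x * self.spatial_gate(x)   # weight every spatial location
        channel_enhanced = x * self.channel_gate(x)     # weight every channel
        return 0.5 * (spatially_enhanced + channel_enhanced)

x = torch.randn(2, 256, 32, 32)
print(ParallelAttention(256)(x).shape)   # torch.Size([2, 256, 32, 32])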
As shown in Figure 5, in the spatial self-attention branch the input feature X is initially divided into Q (C/2 × H × W) and V (C/2 × H × W) using separate 1 × 1 convolutions to halve the channels. For Q, both global average pooling and global standard deviation pooling operations are employed to aggregate features across the spatial dimensions H × W, obtaining a C/2 × 1 × 1 matrix. This matrix is then reshaped into a 1 × C/2 format for further processing. Due to the potential information loss from this compression operation, it is essential to enhance the compressed Q by applying the softmax function to retain crucial features. For V, its channel count remains C/2, and it is reshaped into a C/2 × HW format. Matrix multiplication is then performed between the enhanced Q (1 × C/2) and the reshaped V (C/2 × HW), and the result is reshaped to 1 × 1 × HW. The sigmoid activation function is employed to ensure that the weight parameter values are constrained within the range of 0 to 1. This yields a vector containing weighted information from the various spatial locations, which is subsequently utilized to weight the original feature X, resulting in spatially enhanced information.

Figure 5. The overall framework of PSCAN

In the channel self-attention branch, the initial step involves reshaping the input feature X into the shape of C × HW. Subsequently, global average pooling and global standard deviation pooling operations are separately executed on each channel, resulting in two tensors with a length equivalent to the number of channels C, which are then merged. This merged tensor is fed into a fully connected layer at the channel level, obtaining an output of size C × 1 × 1. To facilitate subsequent convolutional operations, this output is reshaped into the shape of 1 × 1 × C. Following this, convolutional operations are conducted along the channel dimension, generating a new feature map containing inter-channel interaction information. The output of the convolutional operation is normalized and processed using the sigmoid activation function, resulting in a weight vector containing weights for each channel. These weights are applied to the original feature X, resulting in channel-enhanced information.

In the final integration stage, the outputs from the channel self-attention and spatial self-attention branches are averaged to obtain the refined feature map; that is, the refined feature map equals one half of the sum of the spatially enhanced and channel-enhanced outputs.

Multi-layer skip connection

MobileNetv2, recognized as an efficient backbone network, typically consists of multiple hierarchical layers, each tasked with extracting features from input images at different levels of abstraction. The shallower layers of the network focus on capturing low-level features such as textures and edges, while the deeper layers are dedicated to extracting more abstract and semantically meaningful high-level features. In the original DeepLabv3+ network, only features from the shallow and deepest layers are typically utilized, under the premise that these layers inherently contain richer spatial information and local details.
To optimally leverage the feature information across all layers of the backbone network, this paper proposes a novel multi-layer skip connection strategy. This strategy promotes effective fusion among features from deep, intermediate, and shallow layers. Not only does this approach retain fine-grained details of the image, but it also integrates global contextual information, thereby significantly improving the model's ability to comprehend global information within images.

During multi-layer feature fusion, the original DeepLabv3+ network utilizes the nearest-neighbor interpolation method to upsample low-level feature maps to dimensions congruent with the high-level feature maps, facilitating element-wise addition or concatenation operations. However, the nearest-neighbor interpolation method, due to its simplistic replication of nearest-neighbor pixel values, often leads to the loss of detailed information.

To address this limitation, this paper proposes the utilization of the lightweight dynamic upsampling method DySample [15]. DySample has few parameters and low computational complexity, thereby ensuring more effective retention of fine-grained details within feature maps.

Dataset

The dataset utilized in this paper is BM-Seg [16], comprising 1517 CT images from 23 patients with bone metastasis, including 9 females and 14 males, ranging in age from 18 to 83 years. These scanning data were collected from November 2020 to June 2022 at the Hedi Chaker University Hospital Center in Tunisia. The dataset categorizes images into infected and non-infected classes, with each CT instance accompanied by corresponding bone and bone marrow (BM) masks.

We performed data preprocessing operations on the dataset, such as the CLAHE algorithm, the addition of salt-and-pepper noise, and horizontal mirror inversion, which can help reduce overfitting and improve the performance of the network in asymmetric scenarios. This ultimately enhances the network's generalization ability by maintaining a consistent data distribution.

The experiments were carried out using 5-fold cross-validation. The dataset was randomly divided into training and validation sets at a ratio of 9:1.

Experimental configurations

The operating system is Ubuntu 20.04, using the PyTorch 1.10.1 deep learning framework with CUDA version 11.1. The programming language employed is Python 3.8.18. The central processing unit (CPU) is an 8-core Intel(R) Core(TM) i7-9700 CPU @ 3.00 GHz, while the graphics processing unit (GPU) is an Nvidia GeForce RTX 2080Ti with 11.36 GB of memory.

For our experiments, we selected the SGD optimizer with an initial learning rate of 0.01, a weight decay of 0.0005, and a momentum of 0.937 during model training. We resized the input images to 512 × 512 for training purposes. Additionally, we utilized 3 worker threads to load data onto the GeForce RTX 2080Ti GPU during model training. The batch size was set to 8, and all models were trained for 200 epochs.
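For concreteness, the optimization settings listed above can be written down in PyTorch roughly as follows. The model and data objects are placeholders (a 1 × 1 convolution and random tensors) standing in for MFP-DeepLabv3+ and the BM-Seg dataset, and the loop is shortened to two epochs.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder model and data standing in for MFP-DeepLabv3+ and BM-Seg.
model = nn.Conv2d(3, 2, kernel_size=1)                 # dummy two-class segmenter
images = torch.randn(16, 3, 512, 512)                  # inputs resized to 512 x 512
masks = torch.randint(0, 2, (16, 512, 512))
loader = DataLoader(TensorDataset(images, masks), batch_size=8, shuffle=True)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.937, weight_decay=5e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(2):                                  # the paper trains for 200 epochs
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()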
Evaluation metrics

To validate the performance of the model and compare it with other mainstream segmentation models, we employ multiple evaluation metrics, including mean intersection over union (mIoU), dice coefficient (Dice), mean pixel accuracy (MPA), and mean precision (mPrecision). These metrics are computed from the confusion-matrix counts: TP denotes the number of true positive samples predicted as the positive class, TN the number of true negatives predicted as the negative class, FP the number of false positives predicted as the positive class, and FN the number of false negatives predicted as the negative class; N is the total number of samples. In terms of these counts, IoU = TP / (TP + FP + FN), with mIoU its mean over classes; Dice = 2TP / (2TP + FP + FN); the per-class pixel accuracy is TP / (TP + FN), with MPA its mean over classes; and the per-class precision is TP / (TP + FP), with mPrecision its mean over classes.

Ablation study of AFFP

To validate the effectiveness of AFFP, we conducted ablation experiments, and the experimental results are shown in Table 1. Through the incremental integration of the AFFP module, all performance metrics exhibited a consistent improvement trend. Compared to the original ASPP module, our proposed AFFP module achieved improvements of 1.34% in mIoU, 1.90% in mPA, 0.28% in mPrecision, and 0.97% in Dice. Moreover, the parameter count of the AFFP module is 4,882,258, which is reduced compared to the original ASPP module with 5,813,266 parameters. These experimental results demonstrate the effectiveness of the AFFP module.

Comparison of different attention mechanisms

To validate the effectiveness of PSCAN, we conducted comparative experiments. PSCAN was compared with seven other mainstream attention mechanisms, including CBAM [17], ECA [18], GAM [19], LSK [20], SGE [21], SimAM [22], and ParNet [23]. The experimental results, as shown in Table 2, indicate that PSCAN demonstrates superior or comparable performance across all evaluation metrics. Specifically, it achieved an mIoU of 83.69%, outperforming the runner-up SGE by 0.33%. This series of improvements demonstrates the effectiveness of PSCAN in enhancing the extraction and fusion of channel and spatial information. The specific segmentation results are depicted in Figure 6.

Comparison of upsampling methods

To validate the effectiveness of DySample, we conducted comparative experiments, comparing DySample with three other upsampling methods: nearest-neighbor interpolation, deconvolution (DeConv), and CARAFE [24]. The experimental results, as shown in Table 3, demonstrate that DySample outperforms the other methods across all metrics while possessing the fewest parameters.
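As a small, self-contained illustration of how the mIoU, Dice, MPA, and mPrecision values defined in the evaluation subsection can be computed from label maps (shown here with random placeholder masks; the 1e-9 terms only guard against division by zero and are not part of the definitions):

import numpy as np

def segmentation_scores(pred, gt, num_classes=2):
    """Per-class IoU, Dice, pixel accuracy and precision, averaged over classes."""
    ious, dices, accs, precs = [], [], [], []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))
        fp = np.sum((pred == c) & (gt != c))
        fn = np.sum((pred != c) & (gt == c))
        ious.append(tp / (tp + fp + fn + 1e-9))
        dices.append(2 * tp / (2 * tp + fp + fn + 1e-9))
        accs.append(tp / (tp + fn + 1e-9))
        precs.append(tp / (tp + fp + 1e-9))
    return np.mean(ious), np.mean(dices), np.mean(accs), np.mean(precs)

# Random stand-ins for a predicted and a ground-truth mask.
pred = np.random.randint(0, 2, (512, 512))
gt = np.random.randint(0, 2, (512, 512))
miou, dice, mpa, mprec = segmentation_scores(pred, gt)
print(f"mIoU={miou:.3f}  Dice={dice:.3f}  mPA={mpa:.3f}  mPrecision={mprec:.3f}")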
Comparison of different semantic segmentation networks

We compared the MFP-DeepLabv3+ network with various mainstream semantic segmentation networks, including HRNet [25], Non-local [26], EncNet [27], SegFormer [28], Mask-RCNN [29], U-Net [30], and TransU-Net [31]. The experimental results shown in Table 4 are the means over the 5-fold cross-validation of each network. It is evident from the results that our proposed network outperforms the other networks across all metrics. Specifically, compared to the runner-up TransU-Net network, our network achieved improvements of 0.29% in mIoU, 2.5% in mPA, and 0.21% in Dice. This comprehensive comparison demonstrates the significant superiority of our proposed network in semantic segmentation tasks, providing valuable insights for further optimization of semantic segmentation networks. The specific segmentation results are depicted in Figure 7.

All improvements from the final analysis are summarized in Table 5. Compared to the original DeepLabv3+, our proposed MFP-DeepLabv3+ achieved a reduction of 5% in parameters, and improvements of 2.11% in mIoU, 2.36% in mPA, 1.24% in mPrecision, and 1.52% in Dice. These experimental results demonstrate the effectiveness of our improvement approach and provide strong support for further research and application. Segmentation results are depicted in Figure 8.

CONCLUSION

This paper proposes an MFP-DeepLabv3+ network tailored for bone metastasis segmentation tasks. Initially, we enhance the multi-scale feature extraction capability of the network by introducing the AFFP module. By effectively capturing and integrating features across multiple scales, we enable more accurate identification and segmentation of both subtle and significant pathological features. Furthermore, we introduce PSCAN to refine the focus of the network on both channel and spatial information, enabling the network to more sensitively attend to prominent features in the image data. This refinement allows for a more precise differentiation between healthy bone tissue and metastatic lesions. Additionally, we employ a multi-layer skip connection strategy to integrate global semantic information. Experimental results demonstrate that the MFP-DeepLabv3+ network achieves significant improvements of 2.11% in mIoU, 2.36% in mPA, 1.24% in mPrecision, and 1.52% in Dice. Additionally, the GPU memory usage during training is 5.91 GB, with an inference speed of 0.0521 seconds per image. These results conclusively demonstrate that the proposed MFP-DeepLabv3+ network possesses detailed and accurate segmentation capabilities for bone metastatic regions, providing substantial assistance to clinicians in treatment strategy determination. For cancer patients, early and precise detection of bone metastasis is crucial, aiding in timely treatment selection.

Despite the significant improvements achieved by our proposed network, there are still limitations to be addressed. Although the improved network may perform well on specific datasets or tasks, its generalization ability might be limited, making it challenging to maintain efficiency across diverse clinical scenarios or different types of bone metastasis images. This limitation restricts its applicability in clinical practice. Future work will focus on exploring methods to transfer the trained model to other medical imaging tasks or diverse datasets, ensuring the model's performance and generalization across various scenarios.
Figure 4.The overall framework of SPPMAs shown in Figure4, the input features undergo dimensionality reduction through a 1 × 1 convolutional layer initially.Subsequently, these features are fed into three maxpooling modules with different kernel sizes for further processing.To maintain the output size matching the input size and avoid cropping of the input feature boundaries, padding is applied during the pooling operation.Each max-pooling module is dedicated to extracting the most prominent features from individual regions.The processed features are organized into a list and merged with the residual branch, which preserves the original features through a 1 × 1 convolutional layer.Compared to traditional global average pooling, SPPM demonstrates enhanced preservation of image details, resulting in improved model performance. FUNDING The research was supported by the Applied Basic Research Program Table 2 . Comparison results of different attention mechanisms Table 3 . Comparison results of upsampling improvement experiments Table 4 . Comparison results with other semantic segmentation networks
Longitudinal W boson scattering in a light scalar top scenario

Scalar tops in the supersymmetric model affect the potential of the standard model-like Higgs at the quantum level. In light of the equivalence theorem, the deviation of the potential from the standard model can be traced by longitudinal gauge bosons. In this work, high energy longitudinal W boson scattering is studied in a TeV-scale scalar top scenario. An O(1-10%) deviation from the standard model prediction in the differential cross section is found, depending on whether the observed Higgs mass is explained only by scalar tops or by additional contributions at a higher scale.

I. INTRODUCTION

The recent discovery of the Higgs boson confirmed the standard model (SM) of particle physics [1,2]. Since then Higgs properties have been measured at the LHC and found to be consistent with the standard model prediction [3]; besides, there has been no sign of physics beyond the standard model in the experiments. It is widely believed, however, that the standard model is not the ultimate theory. Superstring theory is one of the candidates for the "theory of everything". It requires supersymmetry (SUSY) for consistency, which gives rise to many phenomenological consequences beyond the standard model. For example, it provides a candidate for dark matter, and the three gauge coupling constants are unified at the grand unification scale. Supersymmetry affects the Higgs sector too. In SUSY another Higgs doublet must be introduced for a phenomenologically acceptable Higgs mechanism to work. In the supersymmetric Higgs sector, the electroweak symmetry breaking (EWSB) is induced by the renormalization flow of parameters in the Higgs sector, which is a solution to the origin of the EWSB, since in the SM it is induced by the ad hoc tachyonic Higgs mass term. In spite of such a drastic extension, the Higgs sector in the supersymmetric model reduces to the one in the SM below the electroweak scale when superpartners are much heavier than the electroweak scale. Considering the current status, i.e., no sign of a new particle so far, this might be the case, and then it might be difficult to observe a clue of supersymmetry even in future collider experiments.

In such a circumstance, it is worth recalling that the observed 125 GeV Higgs mass cannot be explained in SUSY at tree level. It is explained by the scalar top ("stop") loop contribution, for example, in the minimal supersymmetric standard model (MSSM) [4][5][6][7][8][9]. This fact indicates that the stop has an impact on the SM Higgs potential at the quantum level, which is similar to the Higgs sector in the classical scale invariant model. In a simple classical scale invariant model, (a) SM singlet scalar(s) is (are) introduced. They affect the Higgs potential at the quantum level, which induces the EWSB radiatively. In this framework, the singlet loop determines the curvature of the Higgs potential around the minimum, i.e., the Higgs mass. Although Higgs properties, such as mass, production and decay rates at collider experiments, are almost consistent with the SM values, the Higgs self-couplings are predicted to significantly deviate from the SM ones [10][11][12]. This means that the Higgs potential is the same locally around the minimum but not in a global picture. Such an effect is imprinted in the fictitious bosons in the Higgs doublet, which are absorbed into the longitudinal polarizations of the gauge bosons.
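For orientation, and as a standard statement rather than a result of the present analysis, the equivalence theorem invoked above relates high energy amplitudes of longitudinally polarized gauge bosons to those of the would-be Nambu-Goldstone bosons w±, z of the Higgs doublet,

M(W_L^±, Z_L, ...) = M(w^±, z, ...) [ 1 + O(m_W / E) ] ,   E ≫ m_W ,

so any modification of the Higgs potential that feeds into the Goldstone sector can be probed in longitudinal gauge boson scattering.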
While the measurement of the Higgs self-couplings is one of the main goals of the next-generation lepton collider, e.g., the International Linear Collider (ILC), the deviation from the SM in the Higgs sector can also be probed at the LHC in the gauge boson scattering process. It is pointed out in Ref. [13] that the differential cross sections of longitudinal gauge boson scattering processes are changed by more than O(10%) in the model, which is described by the off-shell Higgs. Namely, the discrepancy between the classical scale invariant model and the SM can be found in the off-shell Higgs propagator, for which longitudinal gauge boson scattering is a good probe. In the supersymmetric model, stops are expected to play a role similar to the singlet scalars. In this paper we analyze longitudinal gauge boson scattering in the framework of the supersymmetric model. Following the analysis in Ref. [13], we formulate the leading order amplitudes of the processes and discuss the deviation from the standard model prediction numerically. In the study we consider stops with a mass of less than a few TeV. Such a light stop scenario is motivated by the naturalness argument, and part of the parameter space of the scenario has already been excluded by direct searches at the LHC. In Ref. [14] scalar top pair production is analyzed in both a simplified model and phenomenologically tempered SUSY models with conserved R-parity using Run 1 data. The updated studies at √s = 13 TeV [15][16][17][18] have shown that the lighter stop mass region below about 850 GeV is excluded at 95% CL when the lightest neutralino mass is less than about 300 GeV. On the other hand, a lighter stop around 400 GeV with a neutralino around 300 GeV (the stop being heavier than the neutralino) is still allowed. Another possibility is R-parity violation. Without R-parity the lightest neutralino decays to standard model particles, and thus the above analysis cannot be applied. In R-parity violating scenarios, especially those where L_i L_j E^c_k or L_i Q_j D^c_k type couplings with light flavor indices exist, the stop mass below 1 TeV is excluded [19,20]. On the other hand, with U^c_i D^c_j D^c_k type R-parity violation, a stop lighter than 1 TeV has not been excluded [21,22]. Thus, various possibilities have yet to be probed for the light stop scenario. The naturalness-inspired light stop scenario in the minimal supersymmetric standard model will be searched for at the LHC with more data (see, e.g., Refs. [23][24][25] for recent studies). Electroweak precision tests and future lepton colliders may be other powerful options for the light stop search [26]. We show that high energy longitudinal gauge boson scattering is another tool for the indirect search for the TeV-scale stop. We note that the present work focuses on a rather theoretical study of longitudinal W boson scattering. To discuss the discovery potential at collider experiments, one needs a full simulation of the process, for example, pp → W W jj, which is not covered in this paper. It is known that the observation of high energy (over TeV) longitudinal gauge boson scattering would be challenging even in Run 2 at the LHC. We will discuss the issues in the last section, along with future prospects. II. 
THE LIGHT SCALAR TOP SCENARIO In this paper, we discuss two types of scenarios regarding the origin of the Higgs mass in the supersymmetric model: (a) Higgs mass is explained in the MSSM particle contents (b) Other contributions besides the MSSM particles make the observed Higgs mass We assume that the other contributions to the Higgs mass are provided in higher scale than stop mass, e.g., heavy vector-like matters for scinario (b) (see, for example, Refs. [27][28][29][30][31][32][33][34][35][36][37]). To be concrete, we consider mass spectra mt m for both cases. Here mt andm are stop mass scale (defined later) and the mass scale of the rest of superparticles, respectively. It is similar to the so-called split supersymmetry model discussed in Ref. [38]. In split supersymmetry gauginos are O(1-10 TeV), and the other superparticles are much heavier. In the present discussion we consider that stops (and the left-handed sbottom) are also around TeV scale. Just to keep the GUT multiplet structure we assume that the right-handed stau has TeV mass, 1 which does not affect the following analysis. Namely, our discussion comprises the SM-like Higgs with the scalar top and it is independent of the details of the other sector. In Appendix A we also discuss mt ∼m case for a reference, which is also useful for an analytic check of the later calculation. In this paper we do not argue the naturalness in the Higgs sector but focus on 1 For example, gauge coupling unification is kept at the level of 0.7-1% form = 10 6-12 GeV and mt = 1 TeV in one-loop calculation. the consequence of a TeV-scale stop in the gauge boson scattering. To define the relevant parameters for the Higgs mass, we give the MSSM superpotential along with soft SUSY breaking terms, where T are chiral superfields of the third-generation left-handed quark doublet, right-handed quark singlet (tilded fields are their superpartners), up-type Higgs doublet, and down-type Higgs doublet, respectively, and A · B ≡ A T B ( = iσ 2 ). An ellipsis indicates irrelevant terms in our following discussion. We assume that all parameters are real for simplicity. In the supersymmetric model, the stop loop contribution has a significant impact on the SM Higgs mass. In our study, we adopt renormalization group (RG) method to determine the Higgs mass [8]. In the reference, the matching conditions at the scale µ µt ∼ mt are given by where λ SM H (y SM t ) and λ SM H (y SM t ) are the Higgs quartic coupling (top Yukawa coupling) in the energy regions µ < mt and mt ≤ µ (<m), respectively. λ SM H must coincide with the Higgs quartic coupling in the SM. In Eq. (3) the second term on the right-hand side is the threshold correction by integrating out stops. In the numerical analysis we solve RG equations for the gauge coupling constants, top Yukawa coupling, and Higgs quartic coupling. (In the numerical study we will use more accurate expression for the condition (3). See later discussion.) For scenario (a), we need to determine X t for a given m L,R to obtain the observed Higgs mass. Thus we solve the RG equations in the region m t ≤ µ where m t is the top mass. We refer to Refs. [39] and [40] for m t ≤ µ ≤ µt and µt ≤ µ ≤ µ SUSY (∼m), respectively. The RG equations for µ SUSY ≤ µ are well known, e.g., see Ref. [41]. Here matching conditions at where g and g are the gauge coupling constants of U (1) Y and SU (2) L , respectively) should be used. (The solutions in this region are unnecessary for the computation of the scattering amplitudes. 
We use them for a check of the GUT unification.) We have checked that the obtained Higgs mass is consistent with the results by using the FeynHiggs package [42]; i.e., it agrees within about 2 (6) GeV in X t < 0 (> 0) region. This accuracy suffices for leading order analysis of longitudinal gauge boson scattering discussed below. On the other hand, for scenario (b), assuming an additional contribution to the Higgs quartic coupling at high energy, such as by vectorlike matters, we only need to solve the RG equations in µ ≤ µt in the SM particle contents. In the later analysis, we will take µt = mt and µ SUSY =m. Note that Eq. (3) corresponds to leading order computation in the order counting method shown in Ref. [13]. In the literature an auxiliary expansion parameter ξ is introduced to define the leading order term for each physical quantity. Following their analysis, we assign In this assignment any physical quantities, e.g., P, can be given as P = ξ n ∞ i=0 p i ξ i in perturbative expansion. Then we define p 0 as the leading order. Getting back to Eq. (3), both first and second terms in the right-hand side are counted as ξ 2 , which means that not only the first term but also the second term is the leading order. Thus we regard it as the leading order matching condition. In Eq. (5), we have additionally assigned the ξ counting for g and g for consistency, which is discussed later (see Eqs. (10) and (11)). With this assignment, we have neglected terms such as g 2 λ SM H and g 4 in Eq. (3), which are ξ 4 . In the following discussion we use this method to compute the scattering amplitudes at the leading order. Before performing the actual calculation, let us estimate the scattering amplitude. As pointed out in Ref. [13], the deviation from the SM in the amplitude high energy gauge boson scattering is written in terms of the off-shell region of the Higgs propagator. Although the model is different, scalar tops are expected to play a role similar to that of the singlet scalars in the reference. Then, the deviation from the SM at the leading order calculation is roughly estimated as for and p is the typical momentum of the scattering process. The logarithmic term, which is from divergent stop loop diagrams, is the dominant part for |p 2 | m 2 t , X 2 t , and it can be understood in terms of the RG flow of the Higgs quartic coupling. However, as emphasized in Ref. [13], detailed kinematics, such as energy dependence or angular distribution, of the scattering process cannot be described merely in RG computation. In addition, the second term of the bracket, which cannot be taken into account by RG computation, may also be comparable to the logarithmic term when |p 2 | ∼ m 2 t , X 2 t . Our main goal is to quantitatively show the behavior of the gauge boson scattering amplitudes in the existence of scalar tops in the SUSY model. III. NAMBU-GOLDSTONE BOSON SCATTERING A. Equivalence theorem Since we are interested in high energy longitudinal gauge boson scattering, the equivalence theorem can be applied in our calculation. The equivalence theorem tells us that the high energy longitudinal gauge boson (W ± L , Z L ) corresponds to Nambu-Goldstone (NG) boson (G ± , G 0 ). First we will check the validity of the equivalence theorem quantitatively. To this end we compare the differential cross section in center-of-mass frame for the processes W + L W ± L → W + L W ± L and G + G ± → G + G ± in the SM. The results are summarized in Table I. Here we use the tree-level analytic formulas given in Ref. 
[13] and take the same input parameters, i.e., m W = 80.385 GeV (W boson mass), m Z = 91.1876 GeV, m h = 125.03 GeV [43,44], and g = 0.65178. θ is the scattering angle. The deviations between G + G + and W + L W + L (cos θ = 0) are 14%, 4.9%, 1.2%, 0.19% 0.047% for center-of-mass energy √ s = 0.6, 1, 2, 5, and 10 TeV, respectively. On the other hand, for W + W − (G + G − ) scattering, the deviations are 21%, 10%, 2.5%, 0.40%, and 0.10% in the same √ s but for cos θ = 0.5. It is seen that the deviation gets smaller for larger √ s as expected. In the backward region, on the other hand, the differential cross section is suppressed due to a cancellation in the tree-level amplitude. In such a region the other one-loop contributions besides (scalar) top and bottom, i.e., electroweak corrections, including the Sudakov logarithm [45,46], become numerically important [47]. It is discussed in Ref. [47] that the finite decay width of W bosons must be taken into account by using the complex mass scheme [48] or considering the actual decay chains of W bosons [49] for consistent calculation. Since those issues are beyond the scope of the present study, we dis-card backward region. 2 In the later numerical analysis, we discuss the differential cross section in the SM and the supersymmetric model at the level of O(1-10%). Thus, to substitute the NG boson scattering for longitudinal W boson scattering at less than about 0.1% we will mainly consider √ s ≥ 2 TeV. Note that the number of events where the W boson system has the invariant mass over 2 TeV is expected to be limited even in Run 2 at the LHC. As mentioned in the Introduction, we try to show a potential of W W scattering for the study of beyond the SM in a long-term period, considering in the future a high energy frontier experiment, such as the Future Circular Collider. B. Scattering amplitudes In this subsection we will calculate the G + G ± → G + G ± scattering amplitude. The interaction terms which are relevant for the scattering processes in our current setup are where the couplings are defined in Eqs. (3) and (4) and θ W is the Weinberg angle. In the following calculation, we take MS scheme in dimensional regularization and use LoopTools [50] for the numerical study. Let us discuss G + G + → G + G + scattering first. The scattering amplitudes in the supersymmetric model and the SM are given by the form where "tree", "t-b", and "t-b" indicate the tree-level amplitude, top-bottom loop amplitude, and stop-sbottom loop amplitude, respectively, which are given by 2 Here note that we do not insist that the forward region is effective for our study. As we will see later, it is dominated by γ and Z boson exchange diagrams and not so efficient for seeing the deviation from the SM. (Central or semicentral regions are more promising.) Stop-sbottom loop diagrams which induce G + G + scattering. Time flows upward, and "crossed" means crossed diagram for final-state bosons. Circle-type, triangle-type, and crossed box-type diagrams correspond to At -b,cir G + G + , At -b,tri G + G + , and At -b,box G + G + in Eq. (14), respectively. where t = (p 1 −k 1 ) 2 , u = (p 1 −k 2 ) 2 (p i and k i (i, j = 1, 2) are momenta of incident and scattered particles, respectively), and B 0 is the loop function defined in Eq. (B.5) in Ref. [13] without 1/¯ . The couplings are renormalized ones and their µ dependence is implicit. Here we have taken the leading terms in the |t|, |u| m 2 Z limit. 
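For orientation, the Mandelstam variables quoted above reduce, in the high-energy (effectively massless) limit used here, to the standard center-of-mass expressions below; the differential cross section normalization is the usual textbook one and is given as a reminder rather than as a formula taken from this paper.

```latex
% Center-of-mass kinematics for 2 -> 2 scattering of (effectively) massless
% Nambu-Goldstone bosons, with scattering angle \theta:
s = (p_1 + p_2)^2 , \qquad
t = (p_1 - k_1)^2 \simeq -\tfrac{s}{2}\,(1 - \cos\theta) , \qquad
u = (p_1 - k_2)^2 \simeq -\tfrac{s}{2}\,(1 + \cos\theta) ,
\qquad s + t + u \simeq 0 .
% Differential cross section in terms of the invariant amplitude:
\frac{d\sigma}{d\cos\theta} \;=\; \frac{|\mathcal{A}(s,t,u)|^2}{32\pi s} .
```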
At -b G + G + consists of three types of diagrams, circle, triangle and box types, which are shown in Fig. 1. We can derive them straightforwardly as with Loop functions C 0 and D 0 are those defined in Ref. [50]. mb L is the left-handed sbottom mass. Since we consider that the right-handed sbottom mass is much larger than the third-generation left-handed squark mass, the lighter sbottom is mostly composed of b L ; thus, mb L m L . θ t is the mixing angle in stop sector defined as (t 1 ,t 2 ) T = Z (t L ,t R ) T with orthogonal matrix Z 11 = cos θ t ≡ c θt , Z 12 = sin θ t ≡ s θt . To be consistent with ξ expansion analysis, we have omitted terms such as (y SM t ) 2 g 2 Z and g 4 Z in Eq. (15), which are ξ 3 and ξ 4 , respectively, in ξ expansion. We note that the explicit µ dependence coming from B 0 function is canceled at the leading order by the RG flow of λ SM H (λ SM H ). Since our goal is to compute the deviation at leading order in ξ expansion, we take µ = mt in the amplitude hereafter. Another scattering amplitude for the process G + G − → G + G − can be obtained by replacing the Mandelstam variable u by s. Before going to the numerical analysis, let us check low-and high-energy limits. In the low-energy limit, the amplitudes A G + G + and A G + G − should coincide with those in the SM. To see this we define ∆A G + G ± Then, using the matching conditions (3) and (4), they are simply given by In the low-energy limit, s, |t|, |u| m 2 t (but s, |t|, |u| m 2 Z ), and taking mt 1 mt 2 m L mt the stopsbottom loop contribution behaves as which leads to Thus ∆A G + G − → 0, which means that the amplitude asymptotically approaches the SM one in the low-energy limit as expected. In numerical calculation mt 1 mt 2 mt is not always satisfied. Therefore, in the later analysis, we use the following expressions instead of Eqs. (19) and (3); In the high-energy limit s, |t|, |u| mt, |X t |, on the other hand, The first line on the right-hand side comes from circle diagram, which agrees with the native estimation (6) and can be understood in terms of the RG flow of the Higgs quartic coupling. Meanwhile, the others are derived in the explicit calculation of Feynman diagrams, which cannot be described by the RG equations and are necessary ingredients for the numerical analysis of the scattering processes. IV. NUMERICAL RESULTS Now we are ready to give the numerical result. To this end, we use the quantity: which corresponds to the deviation from the SM for the differential cross section. Fig. 2 shows the result for the G + G + process as a function of √ s for the fixed scattering angle, cos θ = 0. We take m L = m R = 0.5, 1, and 2 TeV (left) and m L = 2m R = 1 and 2 TeV (right) with X t = 0.5m L at µ = mt, which corresponds to scenario (b) discussed in Sec. II. Roughly speaking, left (right) panel covers the situation of the degenerate (split) mass spectrum in the stop sector. For scenario (a), the results for m L = m R = 1 TeV with X t = X fit t = 1.82 TeV (left), m L = 2m R = 1 TeV with X t = X fit t = 1.45 TeV (right) are given. Here we omit another larger value of |X t | to give the observed Higgs mass since it would not be phenomenologically acceptable due to vacuum instability bound X t / m 2 t1 + m 2 t2 √ 3 [51] (see also the earlier analysis to give the bound X t / m 2 t1 + m 2 t2 √ 7 [52].) 3 It is seen that the deviation increases monotonically as √ s gets large for fixed m L,R and X t . It is attributed to the logarithmic term (first term of Eq. 
(26)), which originates in the stop-sbottom loop and can be understood by RG running of the Higgs quartic coupling. 4 A smaller m L,R gives a larger deviation. For example, ∆ G + G + =16 (28)%, 7 (15)%, and 2 (6)% for √ s = 5 (10) TeV for m L,R = 0.5, 1, and 2 TeV with X t = 0.5m L , respectively. This is because the log( (26)), contributes constructively in the total amplitude for √ s > mt. It is true for the split mass spectrum (right panel). When X t = X fit t , on the other hand, ∆ G + G + gets smaller compared to the result with the same m L,R but X t = 0.5m L . To understand the behavior, we plot ∆ G + G + as a function of X t for various √ s in Fig. 3 for m L = m R = 1 TeV (left) and m L = 2m R = 1 TeV (right). It is found that ∆ G + G + decreases as X t increases for X t X fit t , which can be understood from Eqs. (24) (or (19)) and (26). The second term on the right-hand side of Eq. (24) is positive and destructively interferes with At -b G + G + for X t X fit t (∼ mt). For larger | cos θ| the deviation gets smaller since Z and γ exchange terms which are proportional to 1/u or 1/t dominate the scattering amplitude. For example, when cos θ = 0.5, ∆ G + G + = 11 (18)%, 4.8 (10)%, and 2 (4)% for √ s = 5 (10) TeV m L = m R = 0.5, 1, and 2 TeV with X t = 0.5m L , respectively. G + G − scattering has similar behavior except for a bump, which corresponds to a resonance at √ s 2mt in Fig. 4. This bump is due to discontinuity of the first derivative of B 0 (q 2 , m 2 1 , m 2 2 ) with respect to q 2 at (27) is negative, which is destructive in the total amplitude at the high energy range. For example, ∆ G + G − = −29 (−42)%, −11 (−25)%, and 6 (−9)% for √ s = 5 (10) TeV and cos θ = 0.5 for m L = m R = 0.5, 1, and 2 TeV with X t = 0.5m L , respectively (left panel). Qualitatively the same behavior is seen for the split mass case (right panel). Regarding X t dependence, it is seen that |∆ G + G − | gets smaller for X t = X fit t similarly to the G + G + case. Fig. 5 clarifies the behavior. It is found that |∆ G + G − | decreases in the X t mt region, which to attributed to the second term on the right-hand side of Eq. (24) as explained in the G + G + case. Thus, in both the G + G + and G + G − scattering processes, it would be difficult to observe the deviation from the SM in the parameter space |X t | ∼ mt, especially X t X fit t , since ∆ G + G ± is a few percent. In other words, scenario (a) is like a "blind spot" for the TeV-scale stop FIG. 4. Same as Fig. 2 but for G + G − scattering process taking cos θ = 0.5. search in the longitudinal W boson scattering processes. In scenario (b), on the other hand, O(1-10%) deviation is expected for √ s = 1-10 TeV. V. CONCLUDING REMARKS In this paper we have studied high energy longitudinal W boson scattering with a light scalar top of which the mass is a few hundred GeV to a few TeV. They affect the SM Higgs potential at quantum level, and consequently the deviation from the standard model in longitudinal gauge boson scattering is expected from the equivalence theorem. Applying the equivalence theorem, we have computed charged Nambu-Goldstone boson scattering processes and substituted them as high energy W + L W ± L scattering processes. In the study, we consider two scenarios: (a) Higgs mass is explained in the MSSM particle contents, and (b) other contributions besides the MSSM particles make the observed Higgs mass. It has been found that O(1-10%) deviation in the differential cross section is predicted depending on stop mass and kinematics. 
As an example of scenario (b), for √ s = 5 (10) TeV and cos θ = 0, the deviation in the W + L W + L process is 16 (28)% and 7 (15)% when both left-and right-handed stop masses (m L and m R ) are 0.5 and 1 TeV with the mixing parameter X t = 0.5m L , respectively. Similarly, in W + L W − L , it is −29 (−42)%, and −11 (−25)% but for cos θ = 0.5. For scenario (a), on the contrary, it has been discovered that the deviation gets smaller, e.g., 2 (4)% and 4 (−4)% for m L = m R = 1 TeV with the appropriate X t for √ s = 5 (10) in W + L W + L and W + L W − L , respectively. The same behavior is seen for the m L = m R case. Thus in such a case it would be challenging to see the existence of stop in W L W L scattering. High energy longitudinal gauge boson scattering has started to be measured at the LHC [60,61]. However, the observation of O(10%) deviation would be difficult even in Run 2 at the LHC. This is because the number of events which has over a few TeV invariant mass of W boson system is suppressed due to gauge cancellation [62]. (We have checked this by using the MadGraph package [63]. 5 ) Thus at least an upgraded program, such as the High Luminosity LHC, would be necessary. Or the Future Circular Collider, which is planed to operate at 100 TeV center-of-mass energy, would be more promising for the study of the gauge boson scattering. In such a high energy experiment, the observation of stop or sbottom pair production might be more direct and easier way to observe a clue of the supersymmetry. As mentioned in the Introduction, however, there are model dependence in the data analysis, e.g., details of the decay modes, or violation of R-parity. High energy longitudinal gauge boson scattering would be complementary to the direct searches. We have provided the theoretical ingredients for the numerical study and discuss feasibility for the discovery of scalar tops in the longitudinal gauge boson scattering. The next step will be to perform full simulation for hadron or lepton collider experiments with various energies, for which Refs. [47,49,[64][65][66][67][68][69][70][71][72][73] are useful. We leave it to future work.
2017-07-18T04:28:22.000Z
2017-03-21T00:00:00.000
{ "year": 2017, "sha1": "0816a3b4090363e1021f1fe549a2d803c40f240c", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1703.07112", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0816a3b4090363e1021f1fe549a2d803c40f240c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
255865270
pes2o/s2orc
v3-fos-license
Quantification of confocal fluorescence microscopy for the detection of cervical intraepithelial neoplasia Cervical cancer remains a major health problem, especially in developing countries. Colposcopic examination is used to detect high-grade lesions in patients with a history of abnormal Pap smears. New technologies are needed to improve the sensitivity and specificity of this technique. We propose to test the potential of fluorescence confocal microscopy to identify high-grade lesions. We examined the quantification of ex vivo confocal fluorescence microscopy to differentiate among normal cervical tissue, low-grade Cervical Intraepithelial Neoplasia (CIN), and high-grade CIN. We sought to (1) quantify nuclear morphology and tissue architecture features by analyzing images of cervical biopsies; and (2) determine the accuracy of high-grade CIN detection via confocal microscopy relative to the accuracy of detection by colposcopic impression. Forty-six biopsies obtained from colposcopically normal and abnormal cervical sites were evaluated. Confocal images were acquired at different depths from the epithelial surface, and both confocal and histological images were analyzed using in-house software. The features calculated from the confocal images compared well with the features obtained from the histological images and with the histopathological reviews of the specimens (obtained by a gynecologic pathologist). The correlations between two of these features (the nuclear-cytoplasmic ratio and the average of the three nearest Delaunay-neighbors distance) and the grade of dysplasia were higher than the correlation for colposcopic impression. The sensitivity of detecting high-grade dysplasia by analyzing images collected at the surface of the epithelium, and at 15 and 30 μm below the epithelial surface, was respectively 100, 100, and 92 %. Quantitative analysis of confocal fluorescence images demonstrated the capacity to discriminate high-grade CIN lesions from low-grade CIN lesions and normal tissues at different depths of imaging. This approach could be used to help clinicians identify high-grade CIN in clinical settings. Background Cervical cancer represents a significant global cancer threat, particularly in low- and middle-income countries where the disease incidence is highest and cervical malignancies are the third leading cause of cancer death amongst women [1]. In 2007, there were more than 500,000 new cervical cancer cases worldwide and the number of cervical cancer deaths was 310,000 [2]. Improved testing accuracy and reduced screening costs could have significant positive impacts in developed nations with established cervical screening infrastructure. Although the Pap smear has reduced the incidence of cervical cancer worldwide [3], its low specificity results in an excessive number of colposcopy procedures, which in turn, because colposcopy itself has low specificity, results in the acquisition of a large number of unnecessary biopsies. The accuracy of colposcopy is highly dependent on the physician's expertise. As shown in previous analyses, colposcopic examination performs well in a diagnostic setting but poorly in a screening setting [4]. In particular, colposcopy has been reported to have a high sensitivity (96 %) and a low specificity (48 %) when differentiating abnormal tissues (squamous intraepithelial lesions (SILs)) from normal tissues (normal squamous epithelia and inflammation) [5]. 
There is room to improve the effectiveness of our current system that relies on histopathological review for colposcopically-guided biopsy specimens [6]. Optical measurements can be performed in a non-invasive manner to automatically identify CIN with high sensitivity and specificity, potentially reducing the frequency of unnecessary biopsies, and providing real time diagnosis with the possibility of immediate treatment by less experienced practitioners. In the last 10 years, new imaging and optical technologies have been developed to try to improve standard colposcopic examination of the uterus cervix, such as Optical Coherence Tomography [7] and Raman microscopy [8], Among these new technologies, confocal microcopy, either in reflectance or fluoresence mode, has been under development for almost two decades. In confocal reflectance microscopy, images represent the natural differences in refractive indices of cellular structures, whereas in confocal fluorescence microscopy, the tissue can be stained with a fluorescent contrast agent (fluorochrome) to improve the cellular contrast [9]. Precancerous lesions exhibits cell morphological and tissue architectural changes, including increased nuclear size, increased nuclear-cytoplasmic ratio, and decreased cell-to-cell distance [10]. Confocal microscopy is a non-invasive tool that can image epithelial tissues to provide information related to these epithelial changes. It does this by acquiring multiple images of cell nuclei at different focal depths. These images have sufficient contrast and resolution to allow the visualization of individual cells and nuclei, which; in turn, has driven the application of confocal microscopy to be useful in a variety of clinical contexts [9,[11][12][13][14]. Confocal microscopy has previously been used to study changes related to the grade of Cervical Intraepithelial Neoplasia (CIN) lesions. This work was pioneered by Dr Rebecca Kortum and her team at Rice University [15][16][17][18]. In these studies, features such as cell density, nuclear morphology (nuclear size, nuclear-cytoplasmic ratio), and tissue architecture (average distance between cells, etc.) were calculated from sample images to delineate disease states. With histopathology as the gold standard, these studies demonstrated the ability to discriminate high-grade dysplasia (CIN2 and CIN3) from low-grade dysplasia (HPV-associated changes and CIN1) at high sensitivity (86-100 %) and specificity (62-100 %). Later, another team confirmed these results using confocal endomicroscopy [19]. Four years ago, Kortum's group has validated their technology and shown the same performance in vivo in clinical settings [20]. In the recent years, two other teams [21,22] have also investigated and studied the feasibility of using endomicroscopy in clinical settings for screening or diagnostic purposes. Values of sensitivity and specificity of these studies to detect high-grade lesions, even in small cohorts of women, confirmed its potential for real time in vivo pathology [23]. Nevertheless, more work needs to be done to assess the true value of this technology in clinical settings. In the lats 10 years, our group has been facing numerous challenges, mostly related to instrumentation, implementation, quality control, and biological variability to implement reflectance and fluorescence spectroscopy in clinic [24]. We had underestimated the difficulty and complexity of moving optical technology from bench to clinic. 
We need to leverage this experience and better measure the capability of confocal microscopy. In this context, the objective of this work was, first, to confirm and validate findings from previous studies with independent, proprietary instrumentation, namely the ability of confocal microscopy to identify high-grade lesions, and second, to investigate the effect of varying imaging depth on the performance of our apparatus. Figure 1 outlines the design of the study described herein. Patient recruitment and specimen accrual All patients gave informed consent, and the study was approved by the UBC BCCA Research Ethics Board and the Vancouver Coastal Health Research Institute (Protocols H09-03303 and H03-61235). We collected cervical biopsy specimens over a period of 22 months (April 2013 to February 2015) from baseline patients who were scheduled to have a colposcopic procedure performed at the Women's Clinic at Vancouver General Hospital. (Fig. 1 Study design. Based on the colposcopic impression, one biopsy is taken from an abnormal area and one biopsy from a normal area. Quantitative tissue phenotypic analysis of confocal fluorescence microscopy images and Feulgen-stained histological images is performed. Correlation between QTP features, colposcopic appearance, and histopathological grades is investigated.) Patients were referred to the clinic based on a prior abnormal Pap smear result. All cases were diagnosed as normal, low-grade CIN, or high-grade CIN [25]. Over a period of 62 clinic days, 143 patients were scheduled, and 50 patients were eligible and accepted to participate in this study (rate of recruitment = 34 %). Three of these patients were excluded from the study due to unsuccessful image acquisition. Forty-seven patients were successfully recruited, and normal and abnormal biopsies were collected from them. Participating patients were 20 to 51 years old with an average age of 31.7 years (SD = 8). None of these women were pregnant. In each case, following topical application of 5 % acetic acid to the cervix, normal and abnormal uterine cervix areas were identified by colposcopic examination by a trained gynecologist, and biopsies were collected from these clinically selected sites. Biopsy specimens were immediately placed in saline and the colposcopic impression (normal, low-grade CIN, or high-grade CIN) was noted for each. Each biopsy was ~4 mm in diameter and ~2 mm in depth. Confocal fluorescence microscopy Biopsies obtained in the clinic underwent confocal imaging at the British Columbia Cancer Research Centre (BCCRC) within 1 h of collection. The biopsies were stained with Acriflavine fluorescence stain for 2 min while shaking on a shaker at low speed, as previously described [7]. The stain was prepared by dissolving 0.05 % Acriflavine Hydrochloride (Fluka) in 10 % phosphate-buffered saline. Biopsies were washed with saline for 1 min following staining. A 5 % solution of acetic acid was then added to each sample. The biopsies were placed on a microscope slide and cover-slipped. Confocal fluorescence microscopy was performed using a bench-top Carl Zeiss Axio Imager Z1 equipped with a custom laser-scanning confocal attachment. The custom confocal attachment employed a resonance scanner and galvanometer (Cambridge Technology) for laser scanning; an avalanche photodiode (Hamamatsu) for detection; and a frame grabber (Matrox) to digitize the signal. Laser excitation was provided by a 488 nm laser (Coherent). 
All images were acquired using a 25X/0.80 water-immersion objective lens (lateral resolution of the system = 0.87 µm). The time frame from staining to completion of confocal imaging was approximately 10 min. Gray-scale confocal images were acquired and were 512 × 512 pixels in size. En face images were acquired every 5 µm. The first image of this image stack was acquired from just below the epithelial surface (z = 0 µm) while the depth of the last image (z = 80 µm) was limited by the working distance of the objective lens (250 µm) minus the thickness of the cover slip (170 µm). Confocal image analysis We developed an image processing algorithm in MATLAB (Release 2014b, The Mathworks Inc., Natick, MA, USA) to detect cell nuclei in confocal images (Fig. 2). This algorithm consisted of two steps: first, the derivative of a Gaussian filter was applied to compute the intensity gradient of the image and a Canny edge detector [26] was employed to detect edges of nuclei by finding local maxima of gradient (see Fig. 2b); second, thresholds were defined in Canny edge detector for detecting strong and weak edges and then a binary image of the edges, which mostly represent the nuclei boundaries, was obtained. An algorithm based on morphological reconstruction (imfill function in MATLAB) [27], and two sets of image erosion and image dilation were used to fill the nuclei boundaries and segment the nuclei (see Fig. 2c). The sequences of erosion and dilation were applied to remove the edges that do not represent nuclei boundaries. Then a Region of Interest (ROI) was selected on the confocal image to exclude unwanted noise that occurred at the time of imaging (e.g. bright, dark, defocused, or saturated regions on the image) (Fig. 2d). By reviewing the segmented objects (nuclei) in the ROI marked on the confocal image, we removed any objects that did not fulfill minimum quality requirements such as out of focus objects or objects with incorrect binary masks. Histopathological review After confocal imaging, biopsies were fixed in formalin and transferred to BC Cancer Centre Pathology labfor sectioning.. Each biopsy was embedded in paraffin and nine 4 μm transverse sections were cut. Slides 1, 5, and 9 were stained with Hematoxylin and Eosin (H&E); and slide 2 was stained with Thionin-Feulgen stain, a stoichiometric stain for DNA [28]. The H&E stained sections were reviewed by an expert pathologist to establish disease grade. Once complete, all slides and results (including normal, reactive atypia, CIN1, CIN2, and CIN3) were returned to the study. Imaging of Thionin-Feulgen stained sections The Thionin-Feulgen stained slides were scanned in absorbance mode using our inhouse, high-resolution imaging system, Getafics [29]. This system consists of a 12-bit, double-correlated sampling MicroImager 1400 digital camera (pixels 6.8 μm 2 ) placed in the primary image plane of the microscope with a 20X 0.75 NA Plan Apo objective lens (system resolution = 0.58 μm). For each Feulgen-stained slide, the worst histological diagnosis was found and imaged. Basal and superficial membranes were manually delineated, defining the ROI as described in another companion paper [30]. A semi-automatic, thresholding segmentation algorithm was used to detect cell nuclei located within the ROI. This thresholding algorithm separated objects (nuclei) from the background based on pixel intensity. 
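For readers who wish to reproduce the confocal nucleus segmentation described above (Gaussian-derivative/Canny edge detection, filling of nuclear boundaries, erosion/dilation clean-up), the sketch below re-implements those steps in Python with scikit-image; the original code was written in MATLAB, and the threshold, structuring-element, and size values here are illustrative assumptions rather than the study's actual parameters.

```python
# Minimal re-implementation sketch of the confocal nucleus segmentation
# described above (original in MATLAB). Parameter values are illustrative.
import numpy as np
from scipy import ndimage as ndi
from skimage import feature, morphology, measure

def segment_nuclei(image, sigma=2.0, low=0.05, high=0.15, radius=2, min_area=30):
    """Return a labeled mask of candidate nuclei in a gray-scale confocal image."""
    img = image.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)   # normalize to [0, 1]

    # Gradient-of-Gaussian edge detection with hysteresis (strong/weak) thresholds.
    edges = feature.canny(img, sigma=sigma, low_threshold=low, high_threshold=high)

    # Fill the (mostly closed) nuclear boundaries, then erode and dilate to
    # discard edge fragments that do not enclose a nucleus.
    closed = morphology.binary_closing(edges, morphology.disk(1))
    filled = ndi.binary_fill_holes(closed)
    selem = morphology.disk(radius)
    cleaned = morphology.binary_dilation(morphology.binary_erosion(filled, selem), selem)

    # Drop very small objects (noise, broken edges) and label the remaining nuclei.
    cleaned = morphology.remove_small_objects(cleaned, min_size=min_area)
    return measure.label(cleaned)
```

A real pipeline would additionally restrict the result to a manually chosen region of interest and allow manual rejection of poorly segmented objects, as the text describes.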
Then, auto-focusing and an edge-relocation algorithm [31] was applied to the nuclei to precisely and automatically place the edge of the object at the region of highest local gray-level gradient. Digital gray-level images of the nuclei were stored in a gallery. All objects were manually reviewed by a technician to remove objects that did not fulfill the minimum quality requirements related to masking, focus, etc. Quantitative tissue phenotype (QTP) analysis QTP analysis of histological and confocal images refers to the measurement of both the phenotype of the nuclei and the overall tissue architecture, as described below. For all digitized nuclear images and recorded nuclear centers of gravity, we evaluated ~200 features associated with tissue architecture, nuclear and cellular shape, size, DNA amount, and chromatin texture organization [30,32]. Based on preliminary work (unpublished data), we have restricted our analysis to four features (Table 1): (1) nuclear area; (2) cell density; (3) estimated nuclear-to-cytoplasmic (ENC) ratio; and (4) average distance between a nucleus and its three nearest Delaunay neighbors (3NDND). "Nuclear area" refers to the mean area of all segmented nuclei in µm 2 . "Cell density" refers to the number of nuclei per µm 2 . To calculate ENC ratio and 3NDND, we applied a Voronoi tessellation and Delaunay graphs, as follows. Given a set of points S (center of gravity of nuclei) in a plane, a Voronoi tessellation of the set S is the partition of the plane into polygons such that each polygon V(p) is associated with each point p of S. This is done in such a way that all locations inside V(p) are closer to p than to any other point in S (Fig. 3). The Voronoi polygon associated with a specific cell nucleus can be interpreted as an approximation of the cytoplasm of the cell [33]. The nuclear cytoplasmic ratio can then be approximated by the estimated nuclear-to-cytoplasmic (ENC) ratio, which is the ratio between the nuclear area and the Voronoi polygon area. The Delaunay diagram is the dual graph of the Voronoi diagram [34]. Two points of S are Delaunay neighbors if their Voronoi polygons share a common edge. A line segment joins each pair of Delaunay neighbors; the sum of these segments forms the Delaunay graph (see Fig. 3). Several features measuring the characteristics of the spatial 3NDND Average distance between a nucleus and its 3 nearest Delaunay neighbors Fig. 3 Voronoi tessellation of a set of points (a) and its dual Delaunay graph (b). Delaunay neighbors are connected if they share a common Voronoi edge distribution of the nuclei can be computed from the Delaunay graph. Based on previous studies (data not shown), only one of these features was selected for this analysis; this feature (3NDND) measures the average distance between a nucleus and its three nearest Delaunay neighbors (in µm). All statistical significance was assessed using ANOVA and Fisher's least significant difference (LSD) post hoc test was performed using STATISTICA software (StatSoft Inc., Tulsa, OK, USA). Results Analyses were restricted to cases in which confocal images were of sufficient quality and histopathological review was completed. We considered three tissue groups: normal, low-grade CIN (CIN1), and high-grade CIN (CIN2 and CIN3). A total of 46 biopsies collected from 33 patients were used; 13 were classified as negative (i.e. normal), 18 as low-grade lesions, and 15 as high-grade lesions (9 CIN2 and 6 CIN3). 
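The two architectural features defined above (ENC ratio and 3NDND) can be computed directly from nuclear centroids and areas with standard computational-geometry tools. The Python sketch below uses scipy.spatial and is a simplified illustration under stated assumptions: boundary cells with unbounded Voronoi polygons are simply skipped, whereas a real implementation must handle the ROI border explicitly.

```python
# Sketch: estimated nuclear-to-cytoplasmic (ENC) ratio and average distance to
# the three nearest Delaunay neighbors (3NDND) from nuclear centroids and areas.
import numpy as np
from scipy.spatial import Voronoi, Delaunay

def enc_and_3ndnd(centroids, nuclear_areas):
    """centroids: (N, 2) array of nuclear centers; nuclear_areas: (N,) array in um^2."""
    centroids = np.asarray(centroids, dtype=float)
    vor = Voronoi(centroids)
    tri = Delaunay(centroids)

    # ENC ratio: nuclear area divided by the area of the cell's Voronoi polygon
    # (computed only for bounded polygons; border cells are skipped here).
    enc = np.full(len(centroids), np.nan)
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if len(region) == 0 or -1 in region:
            continue                              # unbounded polygon at the ROI border
        poly = vor.vertices[region]
        x, y = poly[:, 0], poly[:, 1]
        # Shoelace formula for the polygon area.
        voronoi_area = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
        enc[i] = nuclear_areas[i] / voronoi_area

    # Delaunay neighbors: points sharing an edge in the triangulation.
    indptr, nbr_idx = tri.vertex_neighbor_vertices
    three_nn = np.full(len(centroids), np.nan)
    for i in range(len(centroids)):
        nbrs = nbr_idx[indptr[i]:indptr[i + 1]]
        if len(nbrs) == 0:
            continue
        d = np.sort(np.linalg.norm(centroids[nbrs] - centroids[i], axis=1))
        three_nn[i] = d[:3].mean()                # average of the 3 closest neighbors

    return np.nanmean(enc), np.nanmean(three_nn)
```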
Table 2 shows the confusion matrix between the colposcopic impression and the histopathological diagnosis. The results from two main analyses are presented in this section. First, we analyzed the confocal images imaged at a depth of 15 µm below the epithelial surface for all biopsies (as nuclei were clearly visible at this depth for all cases). Figure 4 shows the confocal and histological images of three cases. Second, to investigate the effect of depth of confocal imaging, we compared images of 21 specimens obtained at the surface of the epithelium, at 15 and at 30 µm below the epithelial surface. Figure 5 shows the confocal images of one specimen obtained at these three depths. Figure 6 shows the confocal and histological images with corresponding Voronoi tessellations for a CIN1 lesion. Quantitative tissue phenotype analysis of confocal images We studied the correlation between histology and QTP features of confocal images calculated from 46 cases at 15 µm below the epithelial surface. We observed that cell density was higher in high-grade CIN than in CIN1 or normal specimens (Fig. 4). We also observed that the penetration depth (maximum depth at which images can be obtained) was larger in normal biopsies than in abnormal biopsies (with an average of ~80 µm in normal cases vs. ~50 µm for abnormal cases-data not shown). Figure 7 shows the distribution of nuclear area, cell density, ENC ratio, and 3NDND values for the three histological groups. The nuclei of normal specimens were significantly smaller than the nuclei of CIN1, CIN2, and CIN3 lesions (respectively p = 0.006, p = 0.002, and p = 0018) (Fig. 7a); in contrast, there were no differences observed between the different grades of dysplasia. Cell density increased regularly from normal to high-grade lesions with the cell density of high-grade lesions being twice as large as the cell density of low-grade lesions (Fig. 7b). Similarly, the ENC ratio increased from normal to high-grade lesions, reflecting an increase in the nuclear area relative to the cytoplasmic area (Fig. 7c). The distance between nuclei, as measured by 3NDND, decreased from normal to high-grade specimens (Fig. 7d). For these three features, the difference between low-grade and high-grade lesions were statistically significant (p = 2 × 10 −7 , p = 8 × 10 −6 , and p = 1 × 10 −6 , respectively). Neither cell density nor ENC ratio showed a statistical difference between normal and low-grade lesions (p = 0.136 and p = 0.052, respectively). However, there was a statistically significant difference in 3NDND between normal and low-grade lesions (p = 0.005). Figure 8 shows the scatter plot of the ENC ratio vs. the nuclear area for the different histopathological groups. There was a better separation seen between high-grade and low-grade lesions than between low-grade lesions and normal specimens. Quantitative tissue phenotype analysis of histological images Out of 46 biopsies we were able to successfully image 40 (12 normal, 15 low-grade CIN, and 13 high-grade CIN lesions). The other specimens were discarded from the analysis due to insufficient quality. Figure 9 shows the distribution of mean nuclear area and 3NDND for the different histopathological groups. There was no statistically significant difference in the nuclear area between the three groups (p = 0.12). On the other hand, 3NDND decreased regularly as dysplasia grade worsened. 
The difference was statistically significant between low-grade and high-grade lesions (p = 0.02), but not between normal and low-grade lesions (p = 0.08). Comparison of QTP analysis of confocal images with colposcopic appearance We compared the QTP features with colposcopic impression, as identified by the clinicians. Figure 10 plots the relationship between colposcopic impression and ENC ratio calculated from confocal images (obtained at 15 µm) for different histopathological groups. We observe that 13 of the 31 colposcopically-defined high-grade CIN (41 %) were either low-grade CIN or normal cases, and 4 colposcopically-classified low-grade CIN were in fact high-grade lesions. The Spearman correlation coefficient between histopathological diagnosis and colposcopic impression was 0.43. The correlation between histopatholgical diagnosis and nuclear area, cell density, ENC ratio, and 3NDND were 0.56, 0.77, 0.81, and 0.77, respectively. To assess the potential of confocal fluorescence microscopy for detecting cervical dysplasia, we calculated the sensitivity and specificity of the ENC ratio and 3NDND values for detecting either high-grade lesions (i.e. CIN2 or CIN3) or all lesions (i.e. CIN1, CIN2, or CIN3) ( Table 3). Based on the feature distribution illustrated in Fig. 7, biopsies were classified into three groups: normal if the ENC ratio was lower than 0.08, low-grade if the ENC ratio was larger than 0.08 and smaller than 0.18, and high-grade if the ENC ratio was higher than 0.18. Similarly, biopsies with a 3NDND larger than 25 µm were classified as normal; biopsies with a 3NDND value between 21 and 25 µm were classified as low-grade; and biopsies with a 3DDND value smaller than 21 µm were classified as high-grade. The sensitivity and specificity of colposcopic impression for detecting any grade of CIN was respectively 75 and 69 % ( Table 3). The sensitivity and specificity of detecting any grade of CIN for both ENC ratio and 3NDND values were comparable to the values seen with colposcopic impression. The sensitivity and specificity of ENC ratio for detecting high-grade CIN alone were 93 and 96 %, respectively. Quantitative tissue phenotype analysis of confocal images at different depths To investigate the performance of our technology at different depths, we imaged 21 cervical biopsies at three different depths: at the epithelial surface, 15 µm beneath the surface, and 30 µm beneath the surface. Figure 11 shows the scatter plot of ENC ratios measured at these three depths vs. colposcopic impression. Table 4 shows the sensitivity and specificity of detecting dysplasia using either the ENC ratio or 3NDND measured on images, which were sampled at different depths from the epithelial surface. For the classification of biopsies based on their corresponding ENC ratio and 3NDND values, we applied the same criteria as described in "Comparison of QTP analysis of confocal images with colposcopic appearance" (e.g. biopsies with an ENC ratio <0.08, 0.08 < ENC ratio < 0.18, and ENC ratio >0.18 were respectively classified as normal, low-grade, and high-grade). The sensitivity and specificity of colposcopic impression for detecting highgrade CIN were respectively 57 and 64 %. The sensitivity and specificity of detecting any dysplasia using ENC ratio and 3NDND at any of the three depths were comparable to those of colposcopic appearance ( Table 4). 
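The three-bin classification and the sensitivity/specificity figures quoted above can be reproduced with a few lines of code once per-biopsy feature values and histopathological labels are available. In the sketch below, the ENC cut-offs are the ones stated in the text, while the data in the usage comment are made-up placeholders.

```python
# Sketch: classify biopsies from the ENC ratio using the cut-offs quoted above,
# then compute sensitivity/specificity for detecting high-grade CIN.
import numpy as np

def classify_enc(enc_ratio):
    """Map an ENC ratio to a predicted grade using the thresholds from the text."""
    if enc_ratio < 0.08:
        return "normal"
    elif enc_ratio < 0.18:
        return "low-grade"
    return "high-grade"

def sens_spec_high_grade(enc_values, histology_grades):
    """histology_grades: iterable of 'normal', 'low-grade', or 'high-grade' labels."""
    pred_pos = np.array([classify_enc(v) == "high-grade" for v in enc_values])
    true_pos = np.array([g == "high-grade" for g in histology_grades])
    tp = np.sum(pred_pos & true_pos)
    fn = np.sum(~pred_pos & true_pos)
    tn = np.sum(~pred_pos & ~true_pos)
    fp = np.sum(pred_pos & ~true_pos)
    return tp / (tp + fn), tn / (tn + fp)   # (sensitivity, specificity)

# Illustrative (made-up) values only:
# sens, spec = sens_spec_high_grade([0.05, 0.12, 0.22],
#                                   ["normal", "low-grade", "high-grade"])
```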
The sensitivity and specificity of the ENC ratio for detecting any dysplasia at 15 µm beneath epithelial surface was higher than the ones measured at the surface or at 30 µm. Interestingly, we observed that low-grade lesions that showed a high ENC ratio value (measured at 15 µm [see Fig. 11b] and comparable to high-grade lesions) have an ENC ratio value similar to other low-grade lesions when imaged from the surface or 30 µm deep. In addition, we noticed that a confocal image obtained at a 15 µm depth was saturated due to the maladjustment of the parameters of confocal microscopy at the time of imaging, explaining the overestimation of the nuclear size. Discussion Currently, colposcopic examination of the cervix is used to detect preneoplastic lesions after an abnormal Pap test and to guide biopsy selection for disease diagnosis and staging. Unfortunately, challenges exist with regards to the accuracy and reproducibility of this approach. Pre-neoplastic lesions are associated with a variety of morphologic and tissue architectural alterations; and confocal microscopy has previously been used to non-invasively detect changes in cell morphology and tissue architecture, which may be used to confirm histopathological diagnoses and assess progression likelihood [2,[11][12][13]. Ultimately, clinical application of confocal imaging will lead to a new breed of "clinical pathologists" who can detect and grade preneoplastic lesions in vivo using real-time optical imaging [35]. In this study, we collected confocal fluorescence images of fresh cervical biopsy specimens and calculated ~200 features (including morphologic and architectural) for 46 specimens using our automatic software (written in MATLAB). The results indicated that, two features, in particular, (ENC ratio and 3NDND values) could be used to classify dysplasia grade in confocal images. We demonstrated that these features were capable of differentiating high-grade CIN from normal and low-grade CIN with a high sensitivity and specificity as compared to results seem from colposcopic impression analyses (see Table 3 and previous studies [15,17,19]). We compared the scatter plot of nuclear area vs. ENC ratio presented in Fig. 8 with that of a previous study by Collier et al. [15], who studied the reflectance confocal microscopy images of normal and abnormal cervical biopsies obtained at a depth of 50 μm. These two scatter plots show a similar distribution of normal specimens, low-grade, and high-grade lesions. Our preliminary results suggest the potential for use of confocal fluorescence imaging as a tool in clinical settings for biopsy site selection. Currently, many unnecessary biopsy specimens are obtained in clinic due to the relatively low specificity of the colposcopy [5]. The use of confocal microscopy for guiding the biopsy excision process could lower the cost of unnecessary diagnostic procedures and, more importantly, improve the patients' experience. In addition, we have shown that quantitative features measured from confocal fluorescence images compare well with features calculated on histological images. These features are also consistent with classical descriptions of cervical dysplasia grading based on histopathological criterion of H&E-stained slides (e.g. increased nuclear-cytoplasmic ratio or decreased cell-cell distances [9]). 
Naturally, the apparent advantage of confocal images over histological images is that confocal images delineate cell structures in vivo, removing the need to acquire and process tissue specimens (which can be costly in terms of time and funds). Images obtained in this study were of a higher resolution than those of previous studies done using reflectance microscopy [15,17]. This may be due to the use of a contrast agent in confocal fluorescence microscopy, which improves the cellular contrast by staining the nuclei of the epithelium. In this study, Acriflavine dye was used to stain the nuclei for confocal fluorescence microscopy, a process that leads to clearly visible nuclei in resulting images [13]. However, the images obtained in this study were taken at a depth (from the epithelium surface) that was shallower than those of previous studies [15,17]. Although Acriflavine strongly stained the superficial epithelial cell nuclei, the dye penetration into deeper layers was limited, as already observed by Tan et al. [19]. Fortunately, unlike in reflectance microscopy, nucleo-architectural features can be robustly measured even at such limited penetration depths. This is due to the fact that high-grade lesions are characterized by the presence of dysplastic cells in the superficial layers of the epithelium. Our results show that the optimal depth of imaging to discriminate high-grade lesions from other lesions is 15 µm. It was challenging, however, to differentiate normal tissue from mild dysplasia based on nuclear and cellular features at 15 µm below the surface of the tissue. In the future, additional experiments based on a larger sample size will be necessary to statistically establish the optical imaging depth for in vivo real time confocal microscopy use. It should be noted that other studies that used deeper confocal imaging were also unable to accurately differentiate normal tissue from mild dysplasia [15,18]. In conclusion, we showed that first, the ability to use only a few morphological and architectural features to differentiate normal tissue and low-grade lesions from highgrade dysplasia; and second, a similar performance was achieved at different epithelial depths (see Table 4). This is a significant finding since it is likely that assessing the exact depth of imaging in vivo will be challenging. Furthermore, by imaging and comparing features at different epithelial depths, we can reach a more accurate and reproducible quantification of dysplastic changes. Indeed, we can foresee that a clinical could use different depth information to increase his confident in case the images are not good enough, or as it happens in the cervix in the presence of keratin at the surface of the epithelium, or when the contact is not adequate. By moving up and down with the probe, our data has shown that results will be affected. Our experimental design suffers from one limitation. In Feulgen-stained sections, an experienced cytotechnician carefully chooses the Region of Interest corresponding to the diagnostic area as defined by the study pathologist (Dr. van Niekerk) on the H&E section. This area corresponds to the worst dysplastic area present on the section. Toavoid any selection bias, the region of interest on confocal images were randomly selected without any input from the pathologist; this means it is possible that the confocal area does not exactly match the corresponding histopathological diagnosis and area selected by the pathologist. Unfortunately, this is a limitation of this study. 
Nevertheless, we believe that our approach is valid and justified, as we are trying to study the average correlation between this technology and clinical appearance. By choosing random areas, we are more likely to fail to spot dysplastic areas when the lesion is small (low-grade) than when the lesion is large (high-grade), resulting in an underestimation of the "true" sensitivity of this technology for detecting low-grade lesions. In the future, we believe that a new breed of pathologists/clinicians will become skilled in selecting and identifying precancerous lesions on confocal images. Our results may actually underestimate the power of confocal microscopy for detecting cervical dysplasia. Conclusion We have studied the potential of nuclear-architectural features measured from ex vivo fluorescence confocal images of fresh cervical biopsy specimens. By correlating these features with the histopathology diagnosis, we confirmed that confocal fluorescence microscopy can differentiate high-grade lesions from low-grade lesions and normal tissues. More importantly, we have shown that these results were comparable when images were collected at different depths. New experiments involving deeper tissue examination enabled by improved staining techniques will help to better differentiate normal tissues from low-grade lesions. This study serves as a critical step towards utilizing confocal approaches in an in vivo clinical application. This study was part of an ongoing project with the ultimate goal of integrating advances in cancer biology and optical technology to develop cost-effective tools to aid in the early detection of cervical cancer [36].
2023-01-17T14:49:52.480Z
2015-10-24T00:00:00.000
{ "year": 2015, "sha1": "4fbdc93da84aaa0dad9330b7978ff784fdfd1470", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s12938-015-0093-6", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "4fbdc93da84aaa0dad9330b7978ff784fdfd1470", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
270488258
pes2o/s2orc
v3-fos-license
A needs assessment for enhancing workplace-based assessment: a grounded theory study Objectives Workplace-based assessment (WBA) has been vigorously criticized by medical educators for not fulfilling its educational purpose. A comprehensive exploration of stakeholders' needs regarding WBA is essential to optimize its implementation in clinical practice. Method Three homogeneous focus groups were conducted with three groups of stakeholders: General Practitioner (GP) trainees, GP trainers, and GP tutors. Due to COVID-19 measures, we opted for an online asynchronous format to enable participation. A constructivist grounded theory approach was used to conduct this study and to identify stakeholders' needs for using WBA. Results Three core needs for WBA were identified in the analysis. Within GP Training, stakeholders found WBA essential primarily for establishing learning goals, secondarily for assessment purposes, and lastly for providing or receiving feedback. Conclusion All stakeholders perceive WBA as valuable when it fosters learning. The identified needs were notably influenced by agency, trust, availability, and mutual understanding, which acted as facilitating factors for WBA. Embracing these insights can significantly illuminate the landscape of workplace learning culture for clinical educators and guide a successful implementation of WBA. Introduction In medical education, the need for accountability and for reassurance of competence should make learning and assessment inextricably linked [1]. Good assessment of clinical competence affects and involves multiple stakeholders and should reflect tasks carried out independently in professional practice. In an educational context, WBA implies gathering data about trainees' performance in an authentic context and providing relevant feedback for further improvement. The literature illustrates a number of existing WBA instruments, which aim to assess various aspects of competence (mini-Clinical Evaluation Exercise, Case-based Discussion, Direct Observation of Procedural Skills, Multisource Feedback, etc.) [6]. These instruments aim, on the one hand, at evaluating trainees in authentic assessment environments, and, on the other hand, at providing feedback and fostering reflective practice. Although WBA has seen extensive implementation, its educational value has been questioned in postgraduate medical education [7]. The prevailing perception among stakeholders often leans towards negativity, labelling WBA as a bureaucratic burden. This adverse outlook is rooted in several key factors, including lack of time, poor trainer engagement, misunderstanding of WBA objectives, and inadequate feedback quality [8]. However, research has mainly focused on exploring stakeholders' perceptions about WBA, largely disregarding the vital aspect of stakeholders' educational needs in clinical practice [9]. By employing a needs assessment framework, we can identify implementation facilitators and barriers, and, subsequently, enhance implementation success. Therefore, the aim of this study was to explore what stakeholders need from WBA, and to identify factors that hinder or foster these needs in the context of a General Practitioner (GP) Training Program. 
Educational context This study focused on WBA activities embedded in the postgraduate GP Training in Flanders, Belgium. The GP Training is a 3-year postgraduate curriculum, where trainees must take a series of WBAs during their two GP internships, which last 12 and 18 months, respectively. All WBAs should be registered and documented in an electronic (e-)portfolio available for all the stakeholders involved. At the clinical workplace, trainees closely work with workplace-based trainers. These trainers are experienced GPs who mentor trainees in their own practice. To become trainers, they must have completed specific educational programs, such as courses on providing feedback, guiding daily practice, and assessing trainees. They are responsible for providing hands-on, practical training and sharing their day-to-day experiences with their trainees. Their duties include teaching and showcasing necessary knowledge, skills, and attitudes to their trainees. They offer direct observation and guidance, provide feedback, and facilitate the learning process at the clinical workplace. According to the official assessment regulations of the GP Training, trainers have to perform at least 5 mini-clinical evaluation exercises with their trainees within an academic year, either by directly observing them performing clinical work or through video-recorded clinical encounters [10]. Additionally, trainers should annually perform 3 evaluations of clinical events in which trainees receive feedback on different competency domains, specifically based on the Canadian Medical Education Directions for Specialists framework (CanMEDS) [6,11]. Also, trainees often have case-based discussions with their trainers on a daily and weekly basis. Trainees present a patient case to their trainers, who critically evaluate the decision-making procedure, communication with the patient, and consultation management. Besides the workplace-based trainers, university-based tutors, who are compensated for their roles and have received specialized training, also support trainees in groups. The aim of these groups is to provide peer feedback and support to the trainees biweekly. Tutors are also experienced GPs and affiliated with the collaborating universities. They are usually involved in overseeing and ensuring trainees' progress during clinical internships. Tutors are predominantly responsible for ensuring that the training objectives are met within each internship but also across the various internships. Although they may not be involved in direct day-to-day training, they play a crucial role in WBAs. Tutors must conduct case-based discussions with trainees, where trainees present patient cases illustrating clinical decision making, clinical reasoning, communication skills, and diagnostic skills. Also, tutors should enhance multi-source feedback by promoting peer feedback on the video-recorded encounters, and by compiling feedback information and sources, including trainees' e-portfolio, to identify areas for improvement. There is officially no stipulated minimum number of WBAs that tutors should perform with their trainees. Nevertheless, the compiled feedback should be discussed at least 3 times per academic year between tutor and trainee.
Participants Three different groups of stakeholders participated in this study: GP trainees, GP trainers, and GP tutors. All participants had at least one year of experience with WBA. Trainees were at the end of either the second or the third year of the GP Training. To explore the needs of the different groups, purposeful sampling was used to recruit participants [12]. First, the lead investigator invited tutors by email. Afterwards, trainees were recruited through an open call via social media groups. Lastly, trainers were invited by the principal investigator, a member of the research team. Study design We employed a qualitative study design following a constructivist grounded theory approach [13]. We chose this approach to delve into the subjective experiences of trainees, trainers, and tutors regarding WBA. Constructivist grounded theory recognizes that knowledge is co-constructed between interviewee and interviewer, allowing a deeper understanding of individual and contextual influences on participants' perspectives [13]. We conducted online asynchronous focus groups as they encourage interaction among the participants [14]. The online design was preferred because of the measures against the spread of COVID-19. To facilitate participation throughout the pandemic, we chose to administer the focus groups asynchronously. The asynchronous format provided the necessary flexibility to the participants without hindering their clinical practice [15]. The stakeholders were divided into different groups based on their role in the GP Training: trainees, trainers, and tutors. We chose homogeneous instead of heterogeneous focus groups to give participants the freedom to express their opinions freely, and to avoid potential power relationships influencing their views. To facilitate the focus groups and to guarantee data richness, we set a minimum of 6 and a maximum of 8 participants per group [16]. Data collection and analysis To achieve methodological rigour, we developed an interview guide with main and supplementary questions, ensuring a logical and thorough investigation of participants' needs about WBA. Initially, we engaged in discussions as a research team. We are a team of four researchers, consisting of two researchers with a background in education and two researchers with a medical background. After establishing a thorough understanding of the context, we created an analytical diagram, structuring the main and the supplementary questions to ensure that the data collection was systematic and logical. All questions were open-ended in order to capture the depth and breadth of participants' needs on WBA. This guide was iteratively refined after each focus group, based on ongoing analysis, aligning with the constructivist grounded theory approach [13]. The main questions were open-ended and focused on WBA: (a) What does WBA mean for you?, (b) How would you describe WBA?, (c) For which purposes would you use WBA?. To collect our data, a researcher guided the focus groups, while the principal investigator, a GP and clinical teacher, participated as an observer to monitor the process. To ensure consistency of data collection, three members of the research team discussed the different procedures before each focus group.
The data were collected using online software called FocusGroupIt [17]. This tool provided the opportunity for the participants to participate anonymously. Anonymity made it easier to discuss recurring problems with WBA [15]. Before starting, participants received an e-mail invitation to register on the platform. They could choose their own pseudonym that was seen by other participants. The participants' real names were only visible to the moderator. The focus groups lasted from 2 to 3 weeks each. Questions were posted online by the moderator, while enough time was given to the participants (approximately 3 days per set of questions) to respond to the questions and interact with each other. Reminders were sent if the moderator thought it was necessary. When more clarification was required, sub-questions were posted to probe more deeply and elucidate participants' reactions. Data collection and data analysis took place between June 2020 and October 2020. To analyse the data, we used the Qualitative Analysis Guide of Leuven (QUAGOL) [18]. The coding process was done by two researchers separately [19]. Discrepancies in coding were discussed until consensus was reached, and a third researcher was consulted when necessary. For data analysis, we used NVivo QSR International (Release 1.0). Following constructivist grounded theory, memos were written before the coding started [13]. The coding process happened in three phases. During initial coding, we focused on small units of analysis, coding line-by-line. During focused coding, we focused on frequent earlier codes to navigate through the data, and we discerned initial codes with the most analytical strength [13]. During axial coding, we focused on relations between categories and subcategories of codes [13]. Results Three online asynchronous focus groups (N = 3) were conducted, one with trainees (n = 6), one with trainers (n = 7), and one with tutors (n = 8). The results showed that trainees, trainers, and tutors need WBA to fulfil three different educational needs: (1) to establish learning goals, (2) to construct an idea of trainees' clinical competence, and (3) to give or receive feedback. There was no hierarchical order among these three needs; rather, they displayed a continuous circular relationship. That implies that they alternated depending on the context and educational purpose. Also, these needs seemed to be influenced by four different factors, namely availability of trainers and tutors, mutual understanding, trainees' agency, and trust between trainer and trainee. These factors seemed to facilitate and augment the educational value of WBA. Figure 1 illustrates the three needs and the important factors influencing them across the three stakeholder groups. In the following section, these different needs and the different factors are presented with examples of verbatim quotes. 1. Need for establishing learning goals.
Consensus among all participants affirmed the necessity for WBA to effectively support and enable trainees in establishing their learning goals. Trainees emphasized the importance of WBAs enabling them to proactively participate in defining their personal learning goals, as they felt primarily responsible for shaping their learning process. Through WBA, trainees expressed their need to feel in control of their learning. For trainees, WBA should provide enough performance evidence in order to define their further learning agenda. This sentiment was echoed by both trainers and tutors, who underlined that responsibility for learning and for delineating the learning trajectory rested squarely on the trainees themselves. Of course I expect a good trainee to engage in self-reflection after a (workplace-based) evaluation. A trainee has to formulate learning goals based on this feedback and actively act further on it. My job (as tutor) is to follow up these goals in the long run, compare them with other trainees, and help with the overall functioning of the trainee as a doctor in clinical practice (GP tutor 2). I formulated my learning goals for this internship at the beginning, after a couple of (workplace) evaluations in the (GP) practice where I am now, and I have adjusted them along the way. I formulate them based on difficulties I experience myself at the workplace, feedback from my trainer or my tutor after evaluations (GP trainee 6). However, some trainees found this task troublesome. They expressed the need for support after WBAs in order to be able to be in charge of their learning development. Specifically, they admitted that formulating efficient learning goals was a skill that needs to be trained and exercised. My learning goals (for my traineeship) in the beginning were pretty vague and not concrete enough, and were much less helpful in directing my learning. Only in my second year -at the request of my trainer -I started to formulate my learning objectives SMART (Specific, Measurable, Attainable, Relevant, and Time-bound) and became more successful in planning my learning process (GP trainee 2). The frequency of WBA posed significant obstacles to this need, as reported by trainees. Several trainees expressed their concern that WBA activities lacked a systematic and regular schedule, making it difficult for them to maintain a consistent pace in their learning. trainer, is, in my opinion, a must in an educational setting. Whether this actually happens now still depends on who your trainer is and whether they consider this important, which all too often is not the case (GP trainee 3). Moreover, trainees faced an additional challenge due to the absence of a well-defined learning plan for their training. This lack of guidance led to frustration as trainees remained uncertain about the adequacy of their learning and professional development. There is no listing of specific learning objectives to be achieved or concrete topics within the different CanMEDS roles (GP trainee 6). 2. Need for assessing or being assessed.
Most importantly, WBA was regarded by all the stakeholders as a necessary assessment to gain insight into trainees' clinical competence. Applied consistently, it could yield a holistic representation of trainees' performance. Nevertheless, trainees' perspectives about the essence of clinical competence diverged from those of trainers and tutors. For trainees, clinical competence encompassed the practical aspects of functioning as a doctor on a day-to-day basis. In this context, WBA played a crucial role in identifying their specific learning needs. Trainees used phrases such as "to help you further" and "daily evaluation moment to support the learning process" when articulating their requirements for WBAs. For trainers and tutors, WBA was necessary to foster continuous learning and developmental progression. This continuity of evaluation would provide a better idea of how trainees function as doctors and was aimed at enhancing learning growth by giving direct feedback linked to actual performance at the workplace. By using WBA instruments frequently, trainers believed that they would detect gaps in trainees' clinical performance: In my opinion, this means a more continuous evaluation… where the daily function in the clinical practice is closely observed (GP trainer 1). 3. Need for giving or receiving feedback. WBA was also considered necessary for streamlining the feedback process. All participants valued the feedback opportunities that it provided. Each group put emphasis on different facets of the feedback process. In particular, trainees highlighted the comprehensive nature of feedback following WBAs, which could contribute to their development by fostering self-reflection on their performance. Trainees explained that feedback should contain concrete steps on how to improve their performance and, ideally, be linked to patient cases. Most feedback should not be offered during official feedback moments, but in-between, during daily meetings about specific (patient) cases (GP trainee 5). Also, creating a conducive environment for feedback after WBAs was crucial for the trainees, with a key prerequisite being that they felt safe within their relationship with their trainer. A foundation of trust between trainee and trainer could elevate trainees' engagement, supporting a safe environment where they felt comfortable seeking clarification and engaging in discussions. Additionally, the opportunity for active participation in shaping their own feedback was underscored as highly significant. Trainees agreed that being able to play a proactive and constructive role in the feedback process could enhance their engagement in the assessment process. That (being able to participate in the feedback process) is highly dependent on the GP practice. In the practice where I did my first-year traineeship, there was a very hierarchical relationship (between me and my trainer) and during (our) discussions we mainly discussed what my trainer wanted to, and most often that was only one thing, which he did not like, so all the rest was not mentioned. In the practice where I am now, we are seen as equals. Everything is open for discussion, everyone is flexible. The trainer gives (me) feedback without imposing things on me (GP trainee 3).
Trainers and tutors also elaborated on the condition of trust in their relationship with their trainees. They perceived it as their role to establish this feeling of security in the relationship. Most trainers and tutors mentioned that this trust relationship was a necessary component of a learning culture within their clinical practice. A relationship based on mutual trust also facilitated giving negative feedback. The more open this relationship was, the easier it was for trainers and tutors to discuss mistakes detected in clinical practice during a feedback moment. Discussion This study aimed at exploring stakeholders' needs regarding WBA in a postgraduate GP Training Program. The findings unveiled a triad of educational needs that GP trainees, GP trainers, and GP tutors wished to address through WBA. Notably, these needs do not solely relate to assessment of clinical competence, but encompass a broader scope that includes workplace learning and culture. Which need predominates in WBA hinges on the specific learning requirements of the trainees at that moment. In this study, WBA needs were discussed from different standpoints to explore underlying foundations. As assessment involves different stakeholders, incorporating different perspectives allows a deeper understanding of the common ground. Overarching requirements could help refine WBA practices and shed light on the interplay between learning culture and assessment activities. A shared need across all three focus groups was that WBA should support the establishment of learning goals. Particularly among trainees, the value of WBA was emphasized as a potent tool for shaping and refining learning goals tailored to their unique clinical contexts. Nevertheless, trainees experienced the lack of a well-defined learning plan for their clinical practice as an obstacle. This hurdle underscores the impetus towards embracing competency-based medical education [20]. Shifting towards a competency-based assessment framework within the workplace would empower trainees to progress through the program by progressively showcasing specific competencies. Defined outcomes would provide a structured framework for WBA, outlining essential competencies trainees must attain at different levels of their training [21]. Additionally, availability, trust, and mutual understanding among the stakeholders were also highlighted in this study. Within the trainer-trainee dynamic, these three factors emerged as catalysts for the organic integration of WBA in clinical practice. In particular, trust seemed to be a pivotal factor within the trainer-trainee relationship. Our findings align with other qualitative studies about how perceived trust influences WBA [22]. This personal trust made it easier, on the one hand, for trainees to request more assessment from their trainers, and, on the other hand, for trainers to provide more meaningful and, when necessary, negative feedback.
This study also found that student agency influenced how trainees viewed WBA. Trainees were keener on using WBA when they felt that they could constructively contribute to the assessment and feedback process. The importance of agentic engagement in the learning process has recently been demonstrated in the literature as well [23,24]. Trainees overwhelmingly valued opportunities to be involved in their own evaluation, either by initiating an assessment moment and asking for specific feedback, or by engaging in broader conversations about medical guidelines and evidence-based medicine. Student agency in the assessment process was also acknowledged by trainers [25]. Through supportive mentorship and open guidance, trainees were encouraged to seek feedback and learn from their mistakes. Limitations We acknowledge that some limitations should be considered when interpreting our data. First, while we used a constructivist grounded theory approach, we limited the number of our focus groups to three. This limitation was a consequence of the COVID-19 pandemic. Our study population consisted of medical professionals who experienced an enormous workload depending on the varying imposed countermeasures for COVID-19. Also, because of newly imposed measures against COVID-19, five trainers decided to drop out of the study. Subsequently, we invited five new trainers to participate in the trainers' focus group. Second, we collected a limited range of demographic data from our participants to respect the GDPR policy. This might impede a potential generalization of our study. However, the purpose of grounded theory in qualitative research is to explore, understand, and review the concepts that emerged in the data, rather than to make inferences about the population [13]. Third, we used only one method to gather data, while conceptions are complex, multi-dimensional, and have to be elicited [26]. Nevertheless, by applying data and investigator triangulation and including multiple perspectives, we tried to produce a more comprehensive view of the needs WBA should address [27]. Coding the data by two different researchers allowed the emergence of different observations and interpretations [19]. Moreover, the asynchronous format of the focus groups could hamper a more fluid discussion and interaction between participants. However, we opted for this format, since participation at a moment of convenience was more important for our study population during COVID-19 lockdowns. A last limitation of this study is that recruitment took place on a voluntary basis. Consequently, stakeholders who did not participate might hold different conceptions of WBA. To mitigate this selection bias, we purposefully announced the study through different communication channels for GPs, including social media, inviting various stakeholders to spontaneously contribute to the findings.
Conclusion The critique of WBA makes a strong case for exploring what stakeholders need for using WBA in clinical practice. The needs that emerged significantly enrich the qualitative research on WBA. Stakeholders need WBA for supporting learning development, for assessment, and for feedback purposes. These requirements do not solely relate to assessment of clinical competence, but also exhibit the influence of workplace culture on assessment. Nevertheless, further research should focus on a deeper understanding of how trust embedded in learning culture can configure and shape WBA during clinical training. Clinical rotations and dynamic clinical environments might impair WBA by inhibiting the development of trust relationships between the stakeholders. Further qualitative research is needed to explore how agentic engagement influences WBA, and how it affects trainees' assessment practices.
2024-02-09T14:05:50.421Z
2024-06-13T00:00:00.000
{ "year": 2024, "sha1": "5745bec762f4e0d144f7461c3e9fea81d5c130ab", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "8d6afb6bfad07111408f9d33020245b84b3eda12", "s2fieldsofstudy": [ "Medicine", "Education" ], "extfieldsofstudy": [ "Medicine" ] }
260363511
pes2o/s2orc
v3-fos-license
Conditional Generative Models for Dynamic Trajectory Generation and Urban Driving This work explores methodologies for dynamic trajectory generation in urban driving environments by utilizing coarse global plan representations. In contrast to state-of-the-art architectures for autonomous driving that often leverage lane-level high-definition (HD) maps, we focus on minimizing the map priors required to navigate in dynamic environments that may change over time. To incorporate high-level instructions (i.e., turn right vs. turn left at intersections), we compare various representations provided by lightweight and open-source OpenStreetMaps (OSM) and formulate a conditional generative model strategy to explicitly capture the multimodal characteristics of urban driving. To evaluate the performance of the models introduced, a data collection phase is performed using multiple full-scale vehicles with ground truth labels. Our results show potential use cases in dynamic urban driving scenarios with real-time constraints. The dataset is released publicly as part of this work in combination with code and benchmarks. Introduction Autonomous vehicle architectures today depend heavily on high-definition (HD) maps, especially on the planning aspect. Figure 1 shows an example architecture where a lane-level HD map is used in both global planning and motion planning. For global planning, the shortest path can be estimated by running A* [1] on the lane graph. The path, which consists of centimeter-level accurate waypoints and speed limits, along with the objects derived from perception and traffic control information from HD maps, is then utilized in the motion planning stage. The motion planning stage determines the vehicle's driving state (e.g., forward, following, and stop) and constrains the vehicle speed. However, the architectures that rely on HD maps are not scalable, as HD maps are costly to create and require constant maintenance. These maps, with accurate lane-level geometry, speed limit, and traffic control information, often require human labeling or inspection, and incur additional overhead when expanding the maps to new areas. Additionally, road closures and construction sites can render maps obsolete, and thus previously mapped areas need to be maintained constantly. Existing research attempts to find solutions to this problem from two directions: offline automatic HD map generation and online HD map generation. Nevertheless, the works that focus on automatic HD map generation using aerial imagery [2,3] or ground vehicle data [4] are offline or are limited by coverage and still require constant updates. On the other hand, building HD maps online [5-8] reduces the priors needed. Recent works explore online vectorized HD maps [6-8], but the number of map elements generated is still too limited for them to be integrated with traditional HD-map-based architectures. Moreover, the performance of these methods on real-time autonomous driving frameworks currently remains an open research question. To tackle these problems, we propose an alternative strategy and architecture to leverage coarse maps and local semantic representations for planning, as shown in Figure 1. Coarse maps, such as the open-source OpenStreetMaps (OSM) and proprietary maps like Google Maps, are lightweight. These maps generally provide road segments and intersection labels but lack lane and trajectory information with centimeter-level geometry. They offer scalable solutions for global planning.
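To make the graph-search step mentioned above concrete, the following sketch runs A* over a toy lane graph with a straight-line-distance heuristic. The node names, coordinates, and edge lengths are purely illustrative assumptions and are not taken from any particular HD-map format or from the released code.

import heapq
import math

# Toy lane graph: node -> list of (neighbor, edge_length_m).
# Node coordinates are only used by the straight-line heuristic.
coords = {"a": (0, 0), "b": (50, 0), "c": (50, 40), "d": (120, 40)}
graph = {"a": [("b", 50), ("c", 95)], "b": [("c", 40)], "c": [("d", 70)], "d": []}

def a_star(start, goal):
    """Shortest path by A* with an admissible Euclidean-distance heuristic."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x2 - x1, y2 - y1)

    open_set = [(h(start), 0.0, start, [start])]
    best_cost = {start: 0.0}
    while open_set:
        _, cost, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, cost
        for nbr, length in graph[node]:
            new_cost = cost + length
            if new_cost < best_cost.get(nbr, float("inf")):
                best_cost[nbr] = new_cost
                heapq.heappush(open_set, (new_cost + h(nbr), new_cost, nbr, path + [nbr]))
    return None, float("inf")

print(a_star("a", "d"))  # (['a', 'b', 'c', 'd'], 160.0)

On a real lane-level HD map the nodes would be densely sampled lane waypoints and the edge weights lane-segment lengths, but the search itself is unchanged.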
The generated global plan encoded in the graph captures information such as "turn right at the next intersection". This high-level global plan requires context to guide the vehicle driving. We use a local semantic map to provide such context. With the global plan and local context, we focus on generating a local trajectory dynamically that can be referenced by the downstream motion planning task. This work explores methodologies for the dynamic trajectory estimation task by utilizing coarse global plan representations. To include high-level instructions in the planning process, such as turning left or turning right at intersections, we evaluate various rasterized and vectorized representations that are generated by using the lightweight and open-source OSM. We then utilize these representations and propose a conditional generative model that can explicitly capture the multimodal features of urban driving. A benefit of our approach is that it can be utilized with existing autonomy stacks. In Figure 1, we can observe the key differences between an HD-map-based planner and a coarse map based planner. Although the input representations and priors vary, the reference paths generated from the planners can serve downstream application tasks, such as motion planning. To assess the effectiveness of the proposed models, we conduct multiple data collection phases that involve multiple full-scale vehicles with ground truth labels. This paper unifies two of our previous contributions [9,10] by providing additional details on experiments, strategy, and data collection. We additionally make our complete datasets, code repositories, and benchmarks open and publicly available. In Section 2, we discuss related work in the field and the relevance of recent real-time strategies for urban driving navigation. The strategies for dynamic trajectory generation and planning are then introduced in Section 3. We specifically discuss the technical aspects of the method, implementations and code released. We then provide an in-depth description of the data collection process, metrics, and ablation studies in Section 4. Lastly, we discuss results and make note of potential future work directions in Sections 4.4 and 5, respectively. In summary, our contributions include: • A formulation for dynamic trajectory generation utilizing nominal global plan representations and local semantic scene models. • We provide an in-depth analysis on the performance benefits of using graphical methods to represent coarse plans. • We release two datasets with over 13,000 synchronized global plan and semantic scene representations. In contrast to existing datasets, our data can enable research directions for path planning with fewer priors. Related Work In this section, we review related research on HD map generation. We then look at research related to two key components of our approach: lightweight maps and scene representations. HD/Vector Maps Over the years, many different methodologies have been developed for autonomous navigation in urban environments. Many of these methodologies use HD maps to facilitate the process of path planning. Examples of classical motion planning and control architectures that utilize HD maps include Autoware [11] and Apollo [12]. These types of architectures have shown promising results for micro-mobility applications over relatively extended periods of time [13]. Similarly, end-to-end strategies [14][15][16][17][18][19][20][21] have increasingly gained popularity in the research community. 
These methods generally rely on large training datasets [22] or simulation environments [23,24] for development and validation. Recent work additionally explores building HD maps automatically to remove the limit on scalability imposed by human labeling. The data representations are often captured from aerial imagery or ground vehicles. Works from aerial imagery [3] can be limited by the availability of images and cannot capture traffic signs; however, vehicles can naturally provide more effective coverage. Homayounfar et al. [5] and Zhou et al. [4] build lane-level HD maps in highway and urban areas, respectively. VectorMapNet [6] proposes to predict vectorized representations directly, and MapTR [7] improves prediction performance by using permutation-invariant representations and losses. TopoNet [8] further considers the association between lane-to-lane and lane-to-traffic elements (signs and traffic lights), which are essential to downstream tasks. However, in contrast to these efforts, our work focuses on the direct trajectory prediction task that is amenable to downstream applications, rather than on the mapping process itself. Lightweight Maps Methods that reduce the dependencies and priors on maps have also been an active area of research in recent years. The underlying representations that encode turn-by-turn directions for the global planning task are comparable to Google's proprietary maps or the publicly available OSM. For instance, generative models have been developed to encode the multimodal characteristics of driving implicitly [25] and explicitly [26] by utilizing coarse maps, such as OSM [27]. An advantage of these representations is that the overhead associated with curating and updating maps can be reduced by relying more on raw sensor data, such as camera image streams. Other related work involves utilizing a discrete action space, such as "turn left", "turn right", or "go straight", to shift towards "mapless" strategies. This idea also focuses on minimizing the priors and reliance on maps; examples include Light Detection and Ranging (LiDAR)-based methods [28] and camera-based works [29] that use one-hot encoding to represent the desired action, specifically at intersections. Scene Representations Recent developments in the semantic segmentation and sensor fusion literature have enabled methods for scene understanding that seek to build scene representations with highly detailed and accurate localization of road features without manual input. For example, [30,31] focuses on generating 2D bird's eye view semantic scene representations from monocular camera streams. While these offer advantages for real-time scene understanding, they still present considerable limitations in terms of occlusions and localization errors of road features. Alternative offline methods [32] can address some of these limitations through a spatiotemporal fusion process that leverages camera and LiDAR fusion. Nevertheless, these efforts still require additional post-processing to extract lane-level trajectories that can be ingested by downstream modules. To address this challenge, our work seeks to align coarse representations (similar to Section 2.2) with 2D bird's eye view semantic maps to estimate traversable lane-level trajectories.
The motivation for utilizing coarse maps is to provide high-level global planning information with minimal priors, while the use of semantic maps is to provide scene representations that are accurate, up to scale, and can be generated automatically. Methods In this section, we introduce the approach for aligning nominal global plans with semantic scene representations to predict egocentric trajectories that align with both the global plan and the lane-level features provided by the semantic map. We explore various representations for encoding the global plans, including rasterized and vectorized representations. The semantic scene representations are generated by utilizing a probabilistic semantic process with accurate localization to provide the context in an egocentric frame. Finally, the alignment is performed by utilizing a Conditional Variational Autoencoder (CVAE) to model the distribution of trajectories that can be executed given the global plan and semantic scene representation. We introduce the global planner with its rasterized and graph representations in Section 3.1. Then in Section 3.2 the local scene representation is described. Finally, we introduce the conditional generative models and their loss functions used to generate the nominal trajectory in Section 3.3. Global Planning A global planner is implemented based on traditional graph search algorithms. This is performed by utilizing Global Navigation Satellite System (GNSS) traces to fetch and download open-source OSM data (https://www.openstreetmap.org/, accessed on 26 July 2023). Each OSM extract is saved in an Extensible Markup Language (XML) format, which encodes map connectivity information. This format is parsed and post-processed to populate a graph with full road connectivity, where distance and road element information is preserved. In practice, we perform a projection from a geographic coordinate system into a Cartesian space by using the Haversine distance formula to approximate the relative distance on a sphere (earth). This approximation allows us to characterize a graph search strategy that is performed on a plane and assign weights in units of meters, and also to perform an egocentric transformation based on the position and orientation of the agent on the map. An example of this transformation can be seen in Figure 2, where we visualize the vehicle-centric rasterized (left) and vectorized (right) representations that are utilized in our work. The rasterized representation is generated by defining a blank image canvas and drawing straight line segments that represent the road. In this representation, the road segments are represented by white lines and the planned road segments are denoted in green. The size of the image is 200 px × 200 px with a 0.5 m/px resolution. Given that this is a raster, the information provided in Cartesian space is discretized to convert the map information into an image and encoded using a Convolutional Neural Network (CNN)-based encoder. In contrast, the graphical representation of the road network elements is directly extracted from the original graph by performing a local search around the vehicle location and state. The nearby elements are additionally decorated with various attributes from nearby features on a per-node basis, such as stop signs, crosswalks, and traffic lights.
This representation considers every node in the graph as a 3D vector, where the first two dimensions represent the 2D coordinates of the node element in an egocentric frame and the last dimension denotes if the node corresponds to a traffic signal, pedestrian crosswalk, or a stop sign. This compact global plan is represented by a 2D tensor of shape D × 3, where D = 40 waypoints are utilized to represent the global plan. The first D/2 waypoints represent the trajectory directly behind the vehicle and the last D/2 waypoints represent the relative plan ahead of the vehicle. A self-attention mechanism [33] is then applied on this input representation, as shown in Equation (1), where C = 3 and Q, K, V are linear projections of the path. In Section 4.4.2, we perform an ablation study on the most relevant global planner features. An important note in the design of our approach for the global planner is that the egocentric transformation is performed after a global plan is determined with respect to the map frame. This permits the egocentric representation to encode relative target information from a local perspective up to a fixed horizon, which applies to both representations: the rasterized and vectorized representations. Our implementation for the planner is open-source and implemented using the Robot Operating System (ROS) framework (https://github.com/AutonomousVehicleLaboratory/gps_navigation, accessed on 26 July 2023). Semantic Scene Representation Although the data provided by the OSM-based planner can provide certain information about the underlying road geometry for a particular scene, various features and important semantic cues, such as lane-level connectivity information and boundaries, are not available. To incorporate this information, an additional representation that can provide context is necessary. Hence, we leverage a spatio-temporal camera-LiDAR fusion method to generate local semantic scene representations in an egocentric perspective. This process consists of a three-stage pipeline which is used to project semantics from image-based semantic segmentation onto a 2D bird's-eye view (BEV) as outlined in Figure 3. To represent the 2D BEV semantic map, we use an occupancy grid-based representation with three dimensions: (H × W × C). The height H and width W of the grid represent spatial dimensions, and the three channels C represent the color of different semantic classes, such as roads, sidewalks, and buildings. When a semantic point cloud is created, it is projected onto the semantic occupancy grid. Each point in the point cloud is then assigned to the corresponding grid cell and the semantic label of each point is then determined by utilizing a probabilistic approach based on the known image semantic segmentation confusion matrix. Additional details can be found in [32]. In practice, these maps are generated automatically offline and utilized during training with the known vehicle poses (estimated from localization). The position and orientation of the agent is used to extract local regions with respect to the rear-axle of the vehicle and perform a rotation such that the semantic features are aligned with the longitudinal axis of the vehicle, i.e., the front of the vehicle points up in the 2D BEV local semantic scene representation and the center of the image represents the rear-axle of the robot. Finally, we utilize a sequence of CNN layers to process the local semantic scene representation, as shown in Figure 4.
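As a rough illustration of the global-plan processing described in this section, the sketch below shows three pieces: the Haversine distance used to approximate metric edge weights from latitude/longitude, the egocentric transformation of map-frame waypoints given the vehicle pose, and a single-head self-attention encoder over the D × 3 vectorized plan. The Earth-radius constant, layer sizes, single-head choice, and scaling factor are assumptions for illustration, not the released architecture.

import math
import torch
import torch.nn as nn

EARTH_RADIUS_M = 6371000.0  # mean Earth radius; an approximation

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points (degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def to_egocentric(points_xy, ego_xy, ego_yaw):
    """Rotate/translate map-frame (N, 2) waypoints into the vehicle frame (yaw in radians)."""
    c, s = math.cos(-ego_yaw), math.sin(-ego_yaw)
    rot = torch.tensor([[c, -s], [s, c]], dtype=points_xy.dtype)
    return (points_xy - torch.as_tensor(ego_xy, dtype=points_xy.dtype)) @ rot.T

class PlanAttentionEncoder(nn.Module):
    """Single-head self-attention over the D x 3 vectorized global plan."""

    def __init__(self, in_dim=3, embed_dim=32):
        super().__init__()
        self.q = nn.Linear(in_dim, embed_dim)
        self.k = nn.Linear(in_dim, embed_dim)
        self.v = nn.Linear(in_dim, embed_dim)
        self.scale = embed_dim ** 0.5

    def forward(self, plan):                      # plan: (B, D, 3)
        q, k, v = self.q(plan), self.k(plan), self.v(plan)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.scale, dim=-1)
        return attn @ v                           # (B, D, embed_dim)

# D = 40 waypoints: 20 behind and 20 ahead of the vehicle, each (x, y, traffic flag).
plan = torch.randn(2, 40, 3)
print(PlanAttentionEncoder()(plan).shape)         # torch.Size([2, 40, 32])
print(to_egocentric(torch.tensor([[10.0, 0.0]]), [0.0, 0.0], 0.0))  # stays 10 m ahead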
Conditional Generative Models Motivated by the multimodal characteristics of urban driving navigation, we formulate the dynamic trajectory generation process by utilizing a Conditional Variational Autoencoder. A CVAE can be considered a conditional generative model that is capable of capturing and approximating a conditional distribution explicitly, namely p(y | s, g). In this work, we let y = {(x_1, y_1), ..., (x_H, y_H)} represent the lane-level trajectory of interest, while s and g are the local semantic scene representation and the local global plan, respectively. To simplify notation and the derivation for a CVAE, we let m = {s, g} jointly represent the semantic and local plan information and rewrite the probability distribution as p(y | m). An important step in the derivation for CVAEs as described in [34] involves approximating the distribution by introducing a latent variable z that is drawn from p(z | m). If z is assumed to be discrete, p(y | m) can be rewritten by marginalizing over z, as shown in Equation (2). To characterize the objective function desired as part of the optimization process, the CVAE approach also introduces a recognition model denoted by q_ψ(z | m, y). This distribution is utilized only during the training process of the model and is not used in testing. The intuition behind this distribution is that during training, it has access to ground truth trajectory information (y) and can be jointly optimized with p_θ(z | m) by utilizing the Kullback-Leibler (KL) divergence. This process guides p_θ(z | m) in the training process and can be used at test time without ground truth. The pipeline for training and testing is shown in Figure 4, where we base our architecture on an approach similar to [35]. The overall objective function used in practice is derived as a combination of the standard CVAE formulation with an added Mean-Squared-Error (MSE) term to minimize displacement error, as shown in Equation (3). At test time, we decode a trajectory by using the mode z* that maximizes p_θ(z | m). In other words, we utilize Equation (4) to decide which of the |Z| modes to utilize to make a prediction. The trajectories are regressed by a Gated Recurrent Unit (GRU) [36] module, which parameterizes H bivariate normal distributions. Experiments and Data Our approach utilizes synchronized global plan and local semantic scene representations to condition the CVAE. The synchronized data samples are generated as part of a data collection process and can be visualized in Figure A1. First, we set a destination using the OSM planner described in Section 3.1. This planner utilizes GNSS and inertial measurement unit (IMU) data to estimate and correct the position of the ego vehicle over time and perform the necessary egocentric transformations. The output generated from the planner consists of a 2D raster and the graph representation (Figure 2). Similarly, a local semantic scene representation is generated in an egocentric frame by utilizing a LiDAR-based localization method. These representations are extracted from full semantic maps. Both representations are synchronized based on the nearest timestamps. The intrinsic and extrinsic parameters that are used for estimating egocentric transformations are estimated offline utilizing standard calibration methods such as checkerboards and plane fitting methods with feature matching strategies. Additional details on the process for calibration and localization can be found in [13].
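A schematic PyTorch sketch of the discrete-latent CVAE objective outlined in the previous subsection: a KL term between the recognition model q_ψ(z | m, y) and the conditional prior p_θ(z | m), a displacement (MSE) term averaged over latent modes, and mode selection via z* at test time. The actual model decodes bivariate normal parameters with a GRU; this sketch regresses raw waypoints, and all tensor shapes and weights are assumptions for illustration.

import torch
import torch.nn.functional as F

def cvae_loss(prior_logits, post_logits, traj_modes, traj_gt, mse_weight=1.0):
    """Discrete-latent CVAE objective: KL(q || p) plus expected reconstruction MSE.

    prior_logits: (B, Z)       logits of p_theta(z | m)
    post_logits:  (B, Z)       logits of q_psi(z | m, y), training only
    traj_modes:   (B, Z, H, 2) one decoded trajectory per latent mode
    traj_gt:      (B, H, 2)    ground-truth trajectory
    """
    q = F.softmax(post_logits, dim=-1)
    log_q = F.log_softmax(post_logits, dim=-1)
    log_p = F.log_softmax(prior_logits, dim=-1)
    kl = (q * (log_q - log_p)).sum(-1).mean()

    # Per-mode squared displacement, averaged under the posterior q(z | m, y).
    err = ((traj_modes - traj_gt.unsqueeze(1)) ** 2).mean(dim=(-1, -2))  # (B, Z)
    recon = (q * err).sum(-1).mean()
    return kl + mse_weight * recon

def predict(prior_logits, traj_modes):
    """At test time, decode with the most likely latent mode z*."""
    z_star = prior_logits.argmax(dim=-1)                                  # (B,)
    idx = z_star.view(-1, 1, 1, 1).expand(-1, 1, traj_modes.size(2), traj_modes.size(3))
    return traj_modes.gather(1, idx).squeeze(1)                           # (B, H, 2)

# Toy shapes: batch of 4, |Z| = 12 modes, H = 10 waypoints.
B, Z, H = 4, 12, 10
loss = cvae_loss(torch.randn(B, Z), torch.randn(B, Z),
                 torch.randn(B, Z, H, 2), torch.randn(B, H, 2))
print(loss.item(), predict(torch.randn(B, Z), torch.randn(B, Z, H, 2)).shape)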
We use the reported ego-vehicle poses from a LiDAR-based registration method to characterize ground-truth trajectories. Multi-sensor fusion estimates are generally also accurate enough to provide pose estimates, as additionally noted in [37]. However, since the position and the orientation of the vehicle are reported at 10 Hz, the data points recorded during data collection can be inconsistently spaced apart. Hence, we interpolate as noted in Figure A1 and sample new trajectories during training and evaluation. This allows us to sample trajectories with varying waypoint density and analyze the limitations of each. Datasets With the data collection process described, various datasets are curated at the UC San Diego campus. The NominalScenes dataset [9] includes urban driving data with global plans and semantic maps in various driving scenarios such as curved roads, loops, and intersections. On the other hand, the IntersectionScenes dataset [10] includes global plans and semantics for intersection-specific scenarios such as three-way and four-way intersections. We use these datasets to train and test the performance of our methods. We additionally make our implementation and benchmark publicly available (https://github.com/AutonomousVehicleLaboratory/coarse_av_cgm.git, accessed on 26 July 2023). The visualizations of the point cloud maps used in the semantic mapping process are shown in Figure 5 alongside the output semantic maps. Platform and Hardware Requirements The data collection process is completed using two identical vehicle platforms with similar sensor arrangements. A high-level overview of the sensor arrangement and data collection process is shown in Figure A1. Each of the platforms comprises six Gigabit Ethernet Mako G-319 cameras, a Velodyne VLP-16 LiDAR, a Garmin GNSS system, and an IMU. The computer platform on board includes an Intel Xeon E3-1275 CPU, an NVIDIA GTX 1080Ti GPU, and 32 GB RAM. This system is used to collect the sensor data using the ROS framework's bag file format. The LiDAR and the front two cameras are utilized in the semantic mapping framework. This mapping process is performed offline using an Intel i9-7900 CPU, an NVIDIA Titan Xp, and 128 GB RAM. Once the semantic map is generated for the regions of interest, the map can be reused for training or online inference. The GNSS and IMU are used as part of the OSM planner; the plans are automatically synchronized with the local semantic maps based on nearest Unix Epoch timestamps. The training process for the CVAE is additionally performed offline using the same system. The hardware requirements for the training process are relatively low and can be met with less than 12 GB GPU VRAM. Metrics We measure the quality of each trajectory produced based on two criteria: Driveable Area Compliance (DAC) and Displacement Error (DE). DAC evaluates the model's capacity to generate trajectories that stay within drivable regions. This measurement is derived by averaging the trajectories that coincide with drivable areas, including crosswalks, lane markings, and road surfaces as defined in the local semantic map. If any waypoint of a trajectory overlaps with a sidewalk or vegetation area, it is deemed non-compliant. We present the error associated with half of the trajectory (DAC_HALF), as well as the entire predicted trajectory (DAC_FULL), as shown in Equations (A1)-(A7).
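A compact sketch of the compliance check just described, with the standard displacement errors (defined in the next paragraphs) included for completeness. The grid origin, resolution, and row/column convention of the semantic mask are assumed placeholders and do not correspond to the released evaluation code.

import numpy as np

def dac(pred_xy, drivable_mask, origin_xy=(-50.0, -50.0), res_m=0.5, half=False):
    """Per-trajectory compliance: 1.0 iff every checked waypoint falls on a drivable
    cell of the local semantic mask; averaging over the dataset gives DAC_HALF / DAC_FULL."""
    pts = pred_xy[: len(pred_xy) // 2] if half else pred_xy
    cols = ((pts[:, 0] - origin_xy[0]) / res_m).astype(int)
    rows = ((pts[:, 1] - origin_xy[1]) / res_m).astype(int)
    h, w = drivable_mask.shape
    inside = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    return float(inside.all() and drivable_mask[rows, cols].all())

def displacement_errors(pred_xy, gt_xy):
    """Standard ADE / FDE / MDE in meters for (H, 2) trajectories."""
    d = np.linalg.norm(pred_xy - gt_xy, axis=1)
    return {"ADE": d.mean(), "ADE_HALF": d[: len(d) // 2].mean(), "FDE": d[-1], "MDE": d.max()}

mask = np.ones((200, 200), dtype=bool)                      # toy mask: everything drivable
pred = np.array([[3.0 * i, 0.0] for i in range(1, 11)])     # 10 waypoints, 3 m apart
print(dac(pred, mask), displacement_errors(pred, pred + 0.5))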
Both metrics characterize compliance in the range [0, 1] by utilizing an indicator function to evaluate if a waypoint prediction ŷ_i^h lies within the semantic set of drivable regions C_d. We additionally employ metrics commonly used in the field of road user prediction research to characterize the performance of trajectories: Average Displacement Error (ADE) and Final Displacement Error (FDE) [38,39]. These metrics allow us to assess the average error of each trajectory across all H waypoints (ADE) and the average error specifically related to the last predicted waypoint (FDE). To provide a more comprehensive analysis, we extend the evaluation of ADE by measuring the error associated with half of the trajectory (ADE_HALF), considering that the waypoints closest to the autonomous agent are executed first during navigation. The worst-case errors are also captured by calculating the average maximum displacement error (MDE) along each predicted trajectory. Results For all the experiments performed, we set the number of discrete states within the CVAE to 12, i.e., |Z| = 12. The rationale for choosing 12 as the number of distribution modes is not only motivated by performance results but also by the various navigation behaviors, including (i) making left turns, (ii) making right turns, (iii) lane following, (iv) driving straight across intersections, (v) driving along curved roads, and (vi) making u-turns. This equates to utilizing two modes to model each driving behavior explicitly. Ablation Study: Waypoint Density In our initial experiments, we compare trajectory predictions with varying waypoint densities up to a 30 m horizon. The first type, H10, is defined by 10 waypoints spaced 3 m apart, and the second, H15, defines 15 waypoints spaced 2 m apart. The performance of these two characterizations of the model is shown in Table 1. From the results, we observe that increasing the number of waypoints regressed also increases the likelihood of encountering compound errors. We hypothesize that this is due to the dependency of the next waypoint prediction on the previous cell state within the GRU decoder of the network. Therefore, for the remainder of the experiments, we utilize H10 as the underlying trajectory representation. Ablation Study: Rasterized and Graph Representations To further evaluate the performance implications of the underlying global plan representations, we compare the rasterized and the graph-based global plan representations discussed in Section 3.1. In Table 2, we evaluate four different models using the NominalScenes dataset and benchmark. The first method evaluated utilizes the rasterized global plan, and the last three leverage the graphs directly while aggregating various node features. For graphical models, we analyze the value of incorporating information about the road segments traversed (P), the planned road segments to be traversed (F), traffic signals and stop signs (S), and crosswalks (C). In these experiments, we observe that the attention mechanism that makes use of all node features except crosswalks outperforms the rasterized method and the methods that only make partial use of the node features. However, we note that not all the node features necessarily boost performance. For example, crosswalk features slightly deteriorate performance when we compare the two models that make use of the full history, planned trajectory, and traffic signals and stop sign features.
While this may be unexpected, we find that OSM crosswalk information is not unique to intersections and as a result may present difficulties distinguishing between intersections with crosswalks and straight road segments with crosswalk features. On the other hand, this is not the case when we incorporate stop signs and traffic signals, since they are consistently defined at intersections. In a subsequent set of experiments, we evaluate the performance of our methods for intersection-specific scenarios using the IntersectionScenes dataset; these scenarios include left turns, right turns, and driving straight across intersections for three-way and four-way intersections. The results consistently show that explicitly utilizing the graphical representation can boost performance. On average, we observe a 1.5 m error across the full trajectory generated, and an average error of 3.1 m associated with the last waypoint of every trajectory predicted. We additionally find that the worst-case predictions average 3.2 m. Similarly, DAC indicates that 90% of the trajectories estimated overlap with a drivable region, including pavement, crosswalks, and lane markings. For additional details, please see Table 3. Table 3. The graph-based and rasterized plan representations are evaluated on an intersection-specific dataset, IntersectionScenes. This includes three-way and four-way intersections for urban driving scenarios specifically. Error is given in meters; see Equations (A1)-(A7) for definitions and details on the indicator function used to estimate drivable area compliance (DAC). Bolded results indicate improvement. In contrast, the errors associated with the first half of each trajectory predicted are considerably lower. These results are particularly relevant given that a motion planner will utilize the nearby waypoints first. For instance, in an urban scenario with an 11 m/s (25 mph) speed limit, the ego-vehicle can utilize the first H/2 waypoints to formulate a trajectory that spans 15 m without needing to replan or generate a new estimate before the 1 s mark. Even though a motion planner would still execute collision checking as part of a downstream process, this illustrates the potential in real-time applications for urban driving. In fact, we find that our approach can keep up with real-time compute requirements by achieving an inference time of approximately 6.18 ms using the attention-based encoder and 6.22 ms using the CNN-based encoder. As an added benefit, the self-attention mechanism uses 31% fewer parameters than the CNN-based encoder, which comprises 16.8 M learnable parameters. Discussion A number of visualizations from various intersection scenarios are shown in Figure 6. These visualizations are generated using the SPF graph-based model and represent prediction outputs from three-way and four-way intersections that capture the multimodal properties of the CVAE. An important benefit of this approach is that the trajectories generated from the formulation can be quickly evaluated based on their semantic map projections to determine if the trajectories are adequate candidates for downstream motion planning. Nevertheless, the benchmarks indicate that potential improvements can be made for longer-horizon predictions, as shown in Figure 7. The performance gap observed in this scenario can be the result of the complex intersection configuration that falls outside of the training dataset, as the road geometry is unique. These performance gaps require further investigation for future work.
Nevertheless, given that fully dynamic methods are inherently challenging due to the drastic variations in road topologies and occlusion, another research direction can entail unifying traditional planning stacks with fully dynamic path generation methods. This research direction can leverage map change detection similar to [40] to determine if a conventional stack should shift from using mapped features to dynamic path generation methods similar to ours. This work additionally enables automatic updates to offline maps without hindering the performance of an autonomy stack in the presence of outdated mapped features. The mainstream strategies rely on HD maps for planning [11,12] or explore methods to automate map generation [4,6-8] and maintain the HD map by change detection [40]. We argue that a more scalable substitution for HD maps is a combination of coarse maps and local environment models. Our work explores leveraging a coarse map and local semantic map for planning. We break down the planning system into global planning, local reference path generation, and dynamic motion planning, and focus on the reference path generation aspect. While our approach is a step towards planning free of HD maps, further investigation is needed in various aspects. First, our work does not consider the dynamic motion planning aspect, which currently also relies on HD maps that provide the lane boundary positions/types, speed limit, and traffic control. Second, the local semantic map is built from fusing point clouds and images. In the case of using a sparse LiDAR to provide the point cloud (e.g., VLP-16), pre-mapping the environment to ensure sufficient point density is necessary [32]. Last but not least, an important aspect of planning is its interpretability, which relates to whether the method can be certified. Further investigation on the interpretation and certification [41] of machine-learning-based planning is needed for large-scale adoption. Conclusions In this work, a method for aligning coarse global plan representations with semantic scene models was explored. Various datasets and benchmarks with open implementations for the planner and CVAE formulation were made available. The contributions indicate potential use cases for urban driving navigation with fewer map priors, such as the widely used HD maps. This additionally presents directions for unifying dynamic path generation strategies with existing frameworks that leverage offline maps in combination with map change detection methods. For future work, we plan to further reduce the complexity of the semantic scene model and evaluate the performance of such strategies in real-time autonomous driving software architectures. Funding: This work was funded by Qualcomm, the National Science Foundation, and Nissan. We sincerely appreciate their support. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Conflicts of Interest: The authors declare no conflicts of interest. Abbreviations The following abbreviations are used in this manuscript: Appendix A. Data Collection Process Figure A1. The figure outlines the data generation process for the global plan and semantic scene representations. The OSM planner generates egocentric global plans using a combination of GNSS and IMU. On the other hand, LiDAR-based scan matching is utilized to localize precisely and extract 2D egocentric semantic maps.
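The timestamp-based pairing of planner outputs and semantic maps described in the data collection process can be sketched as a nearest-neighbour match in time; the tolerance value below is an assumption, not a documented setting.

import numpy as np

def sync_nearest(plan_stamps, map_stamps, max_dt=0.1):
    """Pair each global-plan sample with the nearest semantic-map sample in time.

    plan_stamps, map_stamps: 1-D arrays of Unix timestamps in seconds, sorted.
    Returns (plan_index, map_index) pairs whose time difference is within max_dt.
    """
    pairs = []
    for i, t in enumerate(plan_stamps):
        j = int(np.argmin(np.abs(map_stamps - t)))
        if abs(map_stamps[j] - t) <= max_dt:
            pairs.append((i, j))
    return pairs

plans = np.array([0.00, 0.11, 0.21])
maps = np.array([0.02, 0.12, 0.19, 0.30])
print(sync_nearest(plans, maps))  # [(0, 0), (1, 1), (2, 2)]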
Re‐evaluation of locust bean gum (E 410) as a food additive Abstract Following a request from European Commission, the EFSA Panel on Food Additives and Nutrient Sources added to Food (ANS) provides a scientific opinion re‐evaluating the safety of locust bean gum (E 410) as a food additive. Locust bean gum (E 410) is an authorised food additive in the EU. Locust bean gum (E 410) as specified in the Commission Regulation (EU) No 231/2012 is derived from the ground endosperm of the seeds of the strains of carob tree, Ceratonia siliqua (L.) Taub. (Family Leguminosae). An acceptable daily intake (ADI) ‘not specified’ was allocated by the Joint Food and Agriculture Organization/World Health Organization Expert Committee on Food Additives (JECFA) in 1981. Although not evaluated by the Scientific Committee for Food (SCF), it was accepted by the SCF in 1991 for use in weaning food, and in 1994, in infant formulae for special medical purposes. Locust bean gum is practically undigested, not absorbed intact, but significantly fermented by enteric bacteria in humans. No adverse effects were reported in 90‐day toxicity and carcinogenicity studies in rodents at the highest doses tested and there was no concern with respect to the genotoxicity and to reproductive and developmental toxicity of locust bean gum (E 410). The Panel concluded that there is no need for a numerical ADI for locust bean gum (E 410), and that there is no safety concern for the general population at the refined exposure assessment for its reported uses as a food additive. However, infants and young children consuming foods for special medical purposes may show a higher susceptibility to gastrointestinal effects of locust bean gum due to their underlying medical condition. The Panel concluded that the available data do not allow an adequate assessment of the safety of locust bean gum (E 410) in these foods for infants and young children. locust bean gum; however, it considered that the specification and origin of the gum are lacking in these studies. Case reports of hypersensitivity reactions associated with locust bean gum included the case of a 5-month-old infant; the Panel considered that this hypersensitivity might be due to the locust bean gum proteins and therefore their content should be reduced as much as possible. The Panel further noted that in a group of 28 hypercholesterolaemic or normal adolescents and adults treated with locust bean gum for 8 weeks, doses up to 500 mg/kg bw per day were well tolerated without side effects. The present re-evaluation includes the use of locust bean gum (E 410) in foods for infants from 12 weeks of age onwards and for young children. Concerning uses of locust bean gum in food for infants and young children, the Panel concurs with the SCF (SCF 2003) '. . . the SCF reaffirmed its earlier view that it is not persuaded that it is necessary to give thickened infant formulae to infants in good health, and that the information available on the potential effects on the bioavailability of dietary nutrients and growth in young infants is not conclusive (SCF, 1999). It is therefore recommended that the use of locust bean gums should not be acceptable for use in infant formulae' and 'The SCF recommended maintaining the current maximum level of the use of locust bean gums in follow-on formulae of 1 g/L. 
The Committee further recommended maintaining the concept that if more than one of the three substances locust bean gum, guar gum or carrageenan are added to a follow-on formula, the maximum level established for each of those substances is lowered with that relative part as is present of the other substances together' and 'The SCF accepted that there is a case of need for use of locust bean gums in dietary foods for special medical purposes for therapeutic use in a small number of infants with gastro-oesophageal reflux disease under medical supervision, and the Committee considered its use in these products up to a maximum level of 10 g/L acceptable'. The Panel endorsed the conclusions of the SCF which are reflected in the current regulation for food category 13.1.2. The Panel acknowledged that consumption of the concerned food categories would be short and noted that it is prudent to keep the number of additives used in foods for infants and young children to the minimum necessary and that there should be strong evidence of need as well as safety before additives can be regarded as acceptable for use in infant formulae and foods for infants and young children. The Panel noted reports suggesting a putative effect of locust bean gum to decrease the bioavailability of certain nutrients; however, a human study did not confirm this effect. For the specific group of infants of more than 12 weeks of age, 1 the Panel considered a case of sensitivity and reports on undesirable gastrointestinal effects, such as diarrhoea, frequent loose stools and flatulence, associated with the use of locust bean gum in products for reduction in gastro-oesophageal reflux (GOR). Furthermore, the Panel noted that no specific clinical data addressing the safety of use of locust bean gum (E 410) in 'dietary foods for infants for special medical purposes and special formulae for infants' (food category 13.1.5.1) and in 'dietary foods for baby and young children for special medical purposes as defined in Directive 1999/21/EC' (food category 13.1.5.2) considering the defined maximum use levels were available to the Panel. The Panel also noted that infants and young children consuming these foods may be exposed to a greater extent to locust bean gum (E 410) than their healthy counterparts because the permitted levels of locust bean gum (E 410) in products for special medical purposes to reduce GOR are 10-fold higher than in follow-on formulae for healthy individuals. The Panel further noted that, given their medical condition, infants and young children consuming foods belonging to these food categories may show a higher susceptibility to the gastrointestinal effects of locust bean gum than their healthy counterparts. Thus, monitoring of any adverse effects including those in the gastrointestinal system in infants and young children consuming these foods under medical supervision could be helpful to reduce this uncertainty. From the refined estimated exposure scenario, considering only food categories for which direct addition of locust bean gum (E 410) to food is authorised, in the brand-loyal scenario, mean exposure to locust bean gum (E 410) ranged from 35.7 mg/kg bw per day in adults to 368.9 mg/kg bw per day in infants. The 95th percentile of exposure ranged from 74 mg/kg bw per day for adults to 765.2 mg/kg bw per day in infants. In the non-brand-loyal scenario, mean exposure to locust bean gum (E 410) from its use as a food additive ranged from 20.7 mg/kg bw per day for adults to 204.3 mg/kg bw per day in infants. 
The 95th percentile of exposure ranged from 38.8 mg/kg bw per day for the elderly to 415.4 mg/kg bw per day in infants.

Introduction

The present opinion document deals with the re-evaluation of the safety of locust bean gum (E 410) when used as a food additive.

1.1. Background and Terms of Reference as provided by the European Commission

1.1.1. Background as provided by the European Commission

Regulation (EC) No 1333/2008 of the European Parliament and of the Council on food additives requires that food additives are subject to a safety evaluation by the European Food Safety Authority (EFSA) before they are permitted for use in the European Union (EU). In addition, it is foreseen that food additives must be kept under continuous observation and must be re-evaluated by EFSA. For this purpose, a programme for the re-evaluation of food additives that were already permitted in the EU before 20 January 2009 has been set up under Regulation (EU) No 257/2010. This Regulation also foresees that food additives are re-evaluated whenever necessary in the light of changing conditions of use and new scientific information. For efficiency and practical purposes, the re-evaluation should, as far as possible, be conducted by groups of food additives according to the main functional class to which they belong. The order of priorities for the re-evaluation of the currently approved food additives should be set on the basis of the following criteria: the time since the last evaluation of a food additive by the Scientific Committee on Food (SCF) or by EFSA, the availability of new scientific evidence, the extent of use of a food additive in food and the human exposure to the food additive, also taking into account the outcome of the Report from the Commission on Dietary Food Additive Intake in the EU of 2001. The report 'Food additives in Europe 2000', submitted by the Nordic Council of Ministers to the Commission, provides additional information for the prioritisation of additives for re-evaluation. As colours were among the first additives to be evaluated, these food additives should be re-evaluated with the highest priority. In 2003, the Commission already requested EFSA to start a systematic re-evaluation of authorised food additives. However, as a result of the adoption of Regulation (EU) 257/2010, the 2003 Terms of Reference are replaced by those below.

Terms of Reference as provided by the European Commission

The Commission asked EFSA to re-evaluate the safety of food additives already permitted in the Union before 2009 and to issue scientific opinions on these additives, taking especially into account the priorities, procedures and deadlines that are enshrined in Regulation (EU) No 257/2010 of 25 March 2010 setting up a programme for the re-evaluation of approved food additives in accordance with Regulation (EC) No 1333/2008 of the European Parliament and of the Council on food additives.

Interpretation of Terms of Reference

The ANS Panel described its risk assessment paradigm in its Guidance for submission for food additive evaluations in 2012 (EFSA ANS Panel, 2012). This Guidance states that, in carrying out its risk assessments, the Panel sought to define a health-based guidance value (e.g. an acceptable daily intake; ADI) (IPCS, 2004) applicable to the general population. According to the definition above, the ADI as established for the general population does not apply to infants below 12 weeks of age (JECFA, 1978; SCF, 1998).
In this context, the re-evaluation of the use of food additives, such as thickening agents and certain emulsifiers, in food for infants below 12 weeks represents a special case for which specific recommendations were given by the Joint Food and Agriculture Organization (FAO)/World Health Organization (WHO) Expert Committee on Food Additives (JECFA) (JECFA, 1972, 1978) and by the SCF (SCF, 1996, 1998). The Panel endorsed these recommendations. In the current EU legislation (Annex II to Regulation (EC) No 1333/2008), use levels of additives authorised in food for infants under the age of 12 weeks in categories 13.1.1 and 13.1.5 (Annex II) and uses of food additives in nutrient preparations for use in food for infants under the age of 12 weeks and maximum levels for the carry-over from these uses (Annex III, Part 5, Section B) are included. The Panel considers that these uses would require a specific risk assessment in line with the recommendations given by JECFA and the SCF and endorsed by the Panel in its current Guidance for submission for food additive evaluations (EFSA ANS Panel, 2012). Therefore, a risk assessment as for the general population is not considered to be applicable to infants under the age of 12 weeks and will be performed separately. This re-evaluation refers exclusively to the uses of locust bean gum (E 410) as a food additive in food, including food supplements, and does not include a safety assessment of other uses of locust bean gum.

Information on existing evaluations and authorisations

Locust bean gum (E 410) is authorised as a food additive in the EU according to Annexes II and III to Regulation (EC) No 1333/2008, and specific purity criteria on locust bean gum (E 410) have been defined in Commission Regulation (EU) No 231/2012. In the EU, locust bean gum has not been formally evaluated by the SCF. However, it was accepted for use in weaning food (SCF, 1991) and in infant formulae for special medical purposes (SCF, 1994), but not for infant food in general (SCF, 1994, 1999). In 2003, the SCF re-evaluated locust bean gum in the revision of the essential requirements of infant formulae and follow-on formulae intended for the feeding of infants and young children (SCF, 2003).

• 'The Committee reaffirmed its earlier view that it is not persuaded that it is necessary to give thickened infant formulae to infants in good health, and that the information available on the potential effects on the bioavailability of dietary nutrients and growth in young infants is not conclusive (SCF, 1999). It is therefore recommended that the use of locust bean gums should not be acceptable for use in infant formulae.
• The Committee recommends maintaining the current maximum level of the use of locust bean gums in follow-on formulae of 1 g/L. The Committee further recommends maintaining the concept that if more than one of the three substances locust bean gum, guar gum or carrageenan are added to a follow-on formula, the maximum level established for each of those substances is lowered with that relative part as is present of the other substances together.
• The Committee accepts that there is a case of need for use of locust bean gums in dietary foods for special medical purposes for therapeutic use in a small number of infants with gastro-oesophageal reflux disease under medical supervision, and the Committee considers its use in these products up to a maximum level of 10 g/L acceptable'.
Locust bean gum was evaluated by JECFA in 1975 and 1981 (JECFA, 1975a,b, 1980a,b, 1981a). Based on the lack of adverse effects in the available toxicity studies, an ADI 'not specified' was allocated. In 2008, JECFA updated the specifications of carob (locust) bean gum (JECFA, 2008a,b). In 2016, the Committee discussed the safety of use of locust bean gum in infant formula and concluded that the available studies are not sufficient for the evaluation of locust bean gum for use in infant formula at the proposed use level (10,000 mg/L). The Committee requested toxicological data from studies in neonatal animals, adequate to evaluate the safety for use in infant formula, to complete the evaluation (summary of JECFA, 2016). In 2004, the EFSA Panel on Food Additives, Flavourings, Processing Aids and Materials in Contact with Food (EFSA AFC Panel, 2004) prepared a scientific opinion on a request from the European Commission related to the use of certain food additives derived from seaweed or non-seaweed origin, including locust bean gum (E 410), in jelly minicups. The AFC Panel concluded that any of these gel-forming additives, or of any other type that gave rise to a confectionery product of a similar size, with similar physical and/or physicochemical properties, and that could be ingested in the same way as the jelly minicups, would give rise to a risk of choking (EFSA, 2004). The use of these additives in jelly minicups is not authorised in the EU. Furthermore, locust bean gum (E 410) belongs to those food additives of Group I for which the EU Regulation in general indicates that they may not be used to produce dehydrated foods intended to rehydrate on ingestion.

2. Data and methodologies

Data

The Panel on Food Additives and Nutrient Sources added to Food (ANS) was not provided with a newly submitted dossier. EFSA launched public calls for data and, if relevant, contacted risk assessment bodies to collect information from interested parties. The Panel based its assessment on information submitted to EFSA following the public calls for data, information from previous evaluations and additional available literature up to the date of the last Working Group (WG) meeting before the adoption of the opinion on 12 October 2016. Attempts were made at retrieving relevant original study reports on which previous evaluations or reviews were based; however, these were not always available to the Panel. The EFSA Comprehensive European Food Consumption Database (Comprehensive Database) was used to estimate the dietary exposure. Mintel's Global New Products Database (GNPD) is an online resource listing food products and the compulsory ingredient information that should be included in labelling. This database was used to verify the use of locust bean gum (E 410) in food products.

Methodologies

This opinion was formulated following the principles described in the EFSA Guidance on transparency with regard to scientific aspects of risk assessment (EFSA Scientific Committee, 2009) and following the relevant existing guidance documents from the EFSA Scientific Committee. The ANS Panel assessed the safety of locust bean gum (E 410) as a food additive in line with the principles laid down in Regulation (EU) 257/2010 and in the relevant guidance documents: Guidance on submission for food additive evaluations by the Scientific Committee on Food (SCF, 2001), also taking into consideration the Guidance for submission for food additive evaluations in 2012 (EFSA ANS Panel, 2012).
When the test substance was administered in the feed or in the drinking water, but doses were not explicitly reported by the authors as mg/kg body weight (bw) per day based on actual feed or water consumption, the daily intake was calculated by the Panel using the relevant default values as indicated in the EFSA Scientific Committee Guidance document (EFSA Scientific Committee, 2012) for studies in rodents or, in the case of other animal species, by JECFA (2000). In these cases, the daily intake is expressed as an equivalent dose. When, in human studies in adults (aged above 18 years), the dose of the test substance administered was reported in mg/person per day, the dose in mg/kg bw per day was calculated by the Panel using a body weight of 70 kg as default for the adult population, as described in the EFSA Scientific Committee Guidance document (EFSA Scientific Committee, 2012). Dietary exposure to locust bean gum (E 410) from its use as a food additive was estimated by combining food consumption data available within the EFSA Comprehensive European Food Consumption Database with the maximum levels according to Annex II to Regulation (EC) No 1333/2008 and/or the reported use levels and analytical data submitted to EFSA following a call for data. Different scenarios were used to calculate exposure (see Section 3.3.1). Uncertainties in the exposure assessment were identified and discussed. In the context of this re-evaluation, the Panel followed the conceptual framework for the risk assessment of certain food additives re-evaluated under Commission Regulation (EU) No 257/2010 (EFSA ANS Panel, 2014).

Locust bean gum (E 410) is the ground endosperm of the seeds of the strains of carob tree, Ceratonia siliqua (L.) Taub. (Family Leguminosae). The substance has the CAS Registry number 9000-40-2 and the EINECS number 232-541-5. The molecular weight of locust bean gum varies over a wide range and is reported to be approximately 50,000-3,000,000 g/mol (Commission Regulation (EU) No 231/2012). According to the information provided (Documentation provided to EFSA, document 5), locust bean gum consists of 87-88% (on a dry basis) high molecular weight polysaccharides composed of mannose and galactose units (galactomannans) in an approximate ratio of 4:1. The remaining 12-13% consists mainly of protein and fibrous matter such as residues from the germ, seed coat and cell wall material. Locust bean gum galactomannans are composed of linear chains of (1→4)-linked β-D-mannopyranosyl units with varying amounts of single (1→6)-linked α-D-galactopyranosyl residues as side chains (Wielinga, 2009; Documentation provided to EFSA, document 5). According to industry, the content of protein (N × 6.25) in the food additive is less than 7% (Documentation provided to EFSA, document 1; Documentation provided to EFSA, document 5). Locust bean gum is also produced in purified form as 'Carob Bean Gum (Clarified)', INS No 410. According to the JECFA specifications, clarified locust bean gum does not contain cell wall materials and the content of protein is less than 1% (JECFA, 2008a). An interested party (Documentation provided to EFSA, document 4) confirmed that, according to the results of purity testing, refined locust bean gum complies with the requirements for clarified locust bean gum from the JECFA specifications. The structural formula of the polysaccharide units of locust bean gum is presented in Figure 1. Locust bean gum is a white to yellowish-white, nearly odourless powder. It is soluble in hot water and insoluble in ethanol (Commission Regulation (EU) No 231/2012).
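The unit conversions described at the start of this section (feed or drinking-water concentrations converted to doses with default factors, and adult doses reported per person converted with the 70 kg default body weight) are simple arithmetic; the sketch below illustrates them. The function names are invented for this example, and the default conversion factor is deliberately left as a parameter because its value must be taken from the EFSA Scientific Committee (2012) guidance or from JECFA (2000) for the relevant species and study duration; no specific value is assumed here.

```python
def feed_concentration_to_dose(concentration_mg_per_kg_feed, default_factor):
    """Convert a dietary concentration (mg/kg feed, i.e. ppm) into mg/kg bw per day.

    `default_factor` is the species- and study-duration-specific default feed
    intake (kg feed per kg bw per day) from the EFSA (2012) or JECFA (2000)
    guidance; it is not hard-coded here.
    """
    return concentration_mg_per_kg_feed * default_factor


def person_dose_to_bw_dose(dose_mg_per_person_per_day, body_weight_kg=70.0):
    """Convert an adult dose reported as mg/person per day into mg/kg bw per day,
    using the 70 kg default adult body weight."""
    return dose_mg_per_person_per_day / body_weight_kg


# Example: a human study reporting 3,500 mg/person per day corresponds to
# 3500 / 70 = 50 mg/kg bw per day.
print(person_dose_to_bw_dose(3500))  # 50.0
```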
The substance is insoluble in organic solvents (Lewis, 2007). When the degree of galactose substitution on the mannose chain increases (e.g. in tara gum and guar gum), the galactomannans become more soluble in water (Barak and Mudgil, 2014). Locust bean gum swells in cold water, but heating is necessary for maximum solubility. Solutions are cloudy with a white opacity due to the presence of insoluble impurities such as proteins and cellulose. According to industry (Documentation provided to EFSA, document 1), the solubility varies with the molecular weight: solubility declines with increasing molecular weight and increasing concentration. Solutions of 1% w/w are highly viscous (2,000-6,000 mPa s); 2-5% w/w swollen dispersions show gel-like behaviour (Ullmann, 2007). A slight increase in viscosity on ageing is exhibited. Viscosity is little affected over the pH range 3-11 (Klose and Glicksman, 1990). When heated to decomposition, locust bean gum emits acrid smoke and irritating fumes (Lewis, 2004). The viscosity profiles are similar to those of guar gum solutions: a drastic increase in viscosity is seen for fully hydrated gums above a concentration of 1% (at 25°C, the viscosity of a 1% solution amounts to 150 mPa s and that of a 2% solution to over 2,000 mPa s). A 3% solution of locust bean gum has sufficient viscosity that the solution has the appearance of gelling. Generally, solutions of locust bean gum have no gelling properties under normal conditions, although upon freeze-thaw treatment or in the presence of borate (pH ≥ 7) or of a large amount of saccharose, gels are obtained (Meister, 2000; Barak and Mudgil, 2014). Locust bean gum shows physicochemical synergistic interaction with other hydrocolloids, such as carrageenan, xanthan gum and agar, resulting in a marked increase in viscosity or gel strength and leading to the ability to impart a desirable elastic character and to retard syneresis in these gels (Klose and Glicksman, 1990; Copetti et al., 1997; Meister, 2000; Barak and Mudgil, 2014). The synergistic effects occurring when combining locust bean gum with carrageenan, agar or xanthan gum are technologically utilised in food production (Wielinga, 2010). The infrared (IR) spectrum of locust bean gum has been provided by Guiliano et al. (2002). The Panel noted that ultraviolet-visible, nuclear magnetic resonance or mass spectra were not available in the literature. Upon request of the Panel for information on the particle size distribution, data were provided by industry (Documentation provided to EFSA, document 7) regarding locust bean gum when used as a food additive. According to the submitted results of batch analysis by laser diffraction in dry dispersion, the average size of the particles was 66.4 µm. According to Documentation provided to EFSA, document 7, 'although the method used does not detect finer particles than 0.1 µm, the data shows that the majority of particles are of sizes larger than 1 µm and not in the nanoscale'.

Specifications

The specifications for locust bean gum (E 410) as defined in Commission Regulation (EU) No 231/2012 and for carob (locust) bean gum and carob (locust) bean gum (clarified) by JECFA (2008a,b) are listed in Table 1. Lead Not more than 2 mg/kg Not more than 2 mg/kg Determine using an AAS/ICP-AES technique appropriate to the specified level.
The selection of sample size and method of sample preparation may be based on the principles of the methods described in Volume 4 (under 'General Methods, Metallic Impurities') (b) Not more than 2 mg/kg Determine using an AAS/ICP-AES technique appropriate to the specified level. The selection of sample size and method of sample preparation may be based on the principles of the methods described in Volume 4 (under 'General Methods, Metallic Impurities') (b) Mercury Not more than 1 mg/kg --Cadmium Not more than 1 mg/kg -- Ethanol and Propane-2-ol Not more than 1%, single or in combination --Industry provided screening data for contamination of locust bean gum by microorganisms and derived toxins (aflatoxins B1, B2, G1, G2, deoxynivalenol, HT-2 Toxin, T-2 Toxin, ochratoxin A and patuline) (Documentation provided to EFSA, document 1). New results provided by the industry on analysis of aflatoxins B1, B2, G1, G2 in two samples of locust bean gum (non-refined and refined) were all below the limit of quantification (LOQ) of 0.1 lg/kg, (Documentation provided to EFSA, document 4). In contrast to the JECFA specifications, the EU specifications do not define limits for microbiological contaminations of locust bean gum (E 410). Because of both the botanical origin and the polysaccharidic nature of gums, they can be a substrate of microbiological contamination and of field and storage fungal development. The latter has been recently demonstrated by the mycotoxin contaminations of gums (Zhang et al., 2014). The Panel noted that, different from other gums, no microbiological criteria were defined for locust bean gum by the EU Regulation. The Panel also noted that the microbiological specifications for polysaccharidic thickening agents, such as gums, should be harmonised and that for locust bean gum criteria for the absence of Salmonella spp. and Escherichia coli, for total aerobic microbial count (TAMC) and for total combined yeasts moulds count (TYMC) should be included into the EU specifications, as it is the case for other polysaccharidic thickening agents (e.g. alginic acids and its salts (E 400-E 404), agar (E 406), carrageenan (E 407), processed eucheuma seaweed (E 407a), xanthan gum (E 415), gellan gum (E 418)). The proteins in the final product originate from the germ which is removed during the processing of carob kernels (Documentation provided to EFSA, document 5). Using a modified Osborne extraction procedure, carob germ flour proteins were found to contain $ 32% albumin and globulin and $ 68% glutelin with no prolamins detected (Smith et al., 2010). During the processing of carob kernels traces of seed coat, germ and cell wall material end up in the final product and are determined as 'Acid insoluble matter' (Documentation provided to EFSA, document 5). An interested party (Documentation provided to EFSA, document 4) provided the results for purity testing of three randomly selected lots of refined locust bean gum which demonstrated that tested samples comply with the requirements for clarified locust bean gum from the JECFA specifications. According to the industry, depending on the manufacturing process (sulphuric acid or thermal treatment), there might be residual enzymatic activity (amylase, galactosidase, mannosidase). Probably, the main part is inactivated but it is not excluded that there is some residual activity in locust bean product. 
Their assumption is that it will be inactivated during dissolution and heating before its use, as locust bean gum is cold water-insoluble (Documentation provided to EFSA, document 5). The Panel noted some case reports of hypersensitivity reactions including the case of a 5-month-old infant associated with locust bean gum (Section 3.5.7.2). The Panel considered that these hypersensitivity reactions might be due to the locust bean gum proteins. The Panel noted that locust bean gum (E 410) is also produced in a purified form as clarified locust bean gum. According to industry the 'Carob Bean Gum is also produced in purified form as "Carob Bean Gum (Clarified)" Initially prepare a 10-1 dilution by adding a 50 g sample to 450 mL of Butterfield's phosphate-buffered dilution water and homogenising the mixture in a high-speed blender. Total (aerobic) plate count: Not more than 5,000 CFU/g E. coli: Negative in 1 g Salmonella: Negative in 25 g yeasts and moulds: Not more than 500 CFU/g Initially prepare a 10-1 dilution by adding a 50 g sample to 450 mL of Butterfield's phosphate-buffered dilution water and homogenising the mixture in a highspeed blender. Total (aerobic) plate count: Not more than 5,000 CFU/g E. coli: Negative in 1 g Salmonella: Negative in 25 g yeasts and moulds: Not more than 500 CFU/g AAS: atomic absorption spectrophotometry; ICP-AES: inductively coupled plasma atomic emission spectroscopy; CFU: colony-forming units. (a): According to the recent JECFA evaluation, there were insufficient data to set a limit for arsenic for locust bean gum and locust bean gum (clarified) (summary of JECFA, 2016). (b): According to the recent JECFA evaluation, a limit for lead of 0.5 mg/kg for use in infant formula was introduced for locust bean gum and locust bean gum (clarified). The lead limits for general use (2 mg/kg) were maintained. The method descriptions for the determination of lead and sample preparation for residual solvents were updated (summary of JECFA, 2016). INS No. E-410. It is however not being used as replacement of "Carob Bean Gum"' (Documentation provided to EFSA, number 5). In view of the botanical origin of locust bean gum furthermore limitations of possible contamination with pesticides should be considered. According to an interested party (Documentation provided to EFSA, document 1), no pesticides are present. Pesticides analysis for different locust bean products produced in different countries were provided showing that the level of pesticides (27 analysed) were below the limit of detection. Documentation provided to EFSA (document 5), informed that industry is annually analysing pesticides residues in locust bean gum from kernels of different origin. The result is regularly in compliance with the regulation for pesticides in food (no analytical data provided). New results provided by the industry (Documentation provided to EFSA, document 4) on analysis of pesticides in two samples of locust bean gum (clarified and non-clarified) failed to detect organochlorine, organophosphorus and organonitrogen pesticides and pyrethroids (limit of detection (LOD) not indicated). However, in view of the use of locust bean gum in food for infant and young children, the Panel considered it particularly necessary to pay attention on the compliance of locust bean gum raw material to the existing EU regulation on pesticides. Information on the levels of heavy metals, including arsenic, lead, mercury, cadmium, as requested in the EFSA call for data 7 was not provided by interested parties. 
But JECFA recently reported, that based on the data submitted, the Committee was reassured that for locust bean gum the lead level of 0.5 mg/kg for use in infant formula was achievable (summary of JECFA, 2016). The Panel noted that, according to the European Commission specifications for locust bean gum (E 410), impurities of the toxic elements arsenic, cadmium, lead and mercury are accepted up concentrations of 3, 1, 2 and 1 mg/kg, respectively. Contamination at those levels could have a significant impact on the exposure to these metals, for which the intake is already close to the health-based guidance values established by EFSA (EFSA CONTAM Panel, 2009a,b, 2010. The Panel noted that since a roasting process is used as one of the manufacturing processes, information on the possible presence of polycyclic aromatic hydrocarbons may be relevant for the specifications. According to an interested party (Documentation provided to EFSA, document 5), there are no special requirements for specifications of locust bean gum (E 410) to be used in formulae or food for infants, toddlers and other young children. Manufacturing process According to an interested party (Documentation provided to EFSA, document 1), the endosperm of the carob fruit seeds is ground to a fine powder and is commercially available in this form as locust bean gum. The carob seeds are difficult to process, since the seed coat is very tough and hard. By special processes the seeds are pealed without damaging the endosperm and the germ. The following procedures are applied: • In the acid process, the seeds are heated with sulfuric acid to carbonise the seed coat. The remaining fragments of the seed coat are removed from the clean endosperm 'sandwich' in an efficient washing and brushing process. The sandwiches are dried and cracked and the more friable germs get crushed. The germ parts can be sifted off from the unbroken endosperm halves. • In the roasting process, the seeds are roasted in a rotating furnace where the seed coat drops off the rest. The germ and the endosperm halves are recovered as mentioned above. This process yields a product of slightly darker colour. The advantage is that no sulfuric acid as processing aid is necessary, and therefore, no effluent originates from the production process (Ullmann, 2007; Documentation provided to EFSA, document 1). The clarified gum is obtained by dissolution in hot water and then recovery by precipitation in ethanol or isopropanol (JECFA, 2008b, Barak andMudgil, 2014). Locust bean gum processing flowchart is presented in Figure 2. Methods of analysis in food Methods identified in the literature for the quantitative chemical analysis of locust bean gum in foods are based on the determination of the degradation products after hydrolysis. Koswig et al. (1997) reported a high-performance anion-exchange-pulsed amperometric detection method for determination of hydrolytic degradation monosaccharides of seven thickening agents, including locust bean gum, in fruit preparations. Eberendu et al. (2005) described quantitative determination of saccharides from plant-derived hydrocolloids, including locust bean gum, in food supplements by anion-exchange liquid chromatography with integrated pulsed amperometric detection. For galactose, the limits of detection and limit of quantification were 1.67 and 5.57 lg/kg, respectively, and for mannose, they were 4.64 and 15.48 lg/kg, respectively. Analytical methods for commercial locust bean preparations are reported. 
Commercial galactomannan preparations still may contain remnants of germs and hull that are rich in protein and fibre, and reduce the galactomannan content. The galactomannan content is therefore an important factor in quality. The presence of galactomannans can be demonstrated by various precipitation reactions (e.g. gelation with borax). For determination of the mannose: galactose ratio, the preparation is hydrolysed and the sugar moieties are analysed using gas chromatography (Ullmann, 2007). Stability of the substance, and reaction and fate in food According to industry, stability is tested by the increase in viscosity because this is directly related to the amount of galactomannan. Shelf life tests performed by an interested party in a ring-test with six samples from different manufacturers at 20°and 40°C for 12 months resulted in the following decrease in viscosity: < 5% (storage at 20°C) and < 10% (storage at 40°C) (Documentation provided to EFSA, document 1). No information on degradation products was provided. Technological function Locust bean gum is generally used as a food additive due to its thickening and stabilising properties. In the food industry, it is used as thickener and stabiliser in ice cream, sauce, beverages, (Kawamura and Carob, 2004) Re-evaluation of locust bean gum (E 410) as a food additive and in the bakery and meat industries (Barak and Mudgil, 2014). Its technological functions in food include improving shelf life by binding water, controlling the texture, influencing crystallisation, preventing creaming or settling, improving the freeze-thaw behaviour, preventing syneresis, preventing the retrogradation of starch products, maintaining turbidity in soft drinks and juices, and stabilising foam. The rate of addition in food products varies from below 0.1-2.0%, depending on the application. It is used as clouding agents at less than 0.1% and in gum drops and jelly candies up to 2.0%. The most common rate of addition lies between 0.2% and 0.5%. The functional properties of locust bean gum, blended with carrageenan and/or other biopolymers, are extensively utilised in dairy products, including ice cream. Synergistic interactions with xanthan gum or agar on viscosity are used in a variety of different foods, for example to improve the texture of sauces and dressings (Wielinga, 2010). Apart from its technological function it is used as an active ingredient as a source of soluble fibre in food supplements and dietetic products, e.g. in dietary fibre enriched products (Barak and Mudgil, 2014). Authorised uses and use levels Maximum levels of locust bean gum (E 410) have been defined in Annex II to Regulation (EC) No 1333/2008 9 on food additives, as amended. In this document, these levels are named maximum permitted levels (MPLs). Currently, locust bean gum (E 410) is an authorised food additive in the EU at quantum satis (QS) in most foods apart from jam, jellies and similar fruit or vegetables and foods for infants and young children. Locust bean gum (E 410) is included in the Group I of food additives authorised at QS. E 410 may not be used to produce dehydrated foods intended to rehydrate on ingestion. 410) is also authorised as a food additive other than carriers in all foods additives at QS. In addition, according to Annex III, Part 3, to Regulation (EC) No 1333/2008, locust bean gum (E 410) is authorised as a food additive including as a carrier, in food enzymes with a maximum level in enzyme preparation and in final food (beverages or not) at QS. 
According to Annex III, Part 4, to Regulation (EC) No 1333/2008, locust bean gum (E 410) is also authorised as a food additive including carriers in all foods flavourings at QS. Finally, according to Annex III, Part 5, Section A, to Regulation (EC) No 1333/2008, locust bean gum (E 410) is authorised as a food additive in all nutrients (except nutrients intended to be used in foodstuffs for infants and young children listed in point 13.1 of Part E to Annex II) at QS. Regulation (EC) No 1333/2008 stipulates that locust bean gum (E 410), as a food additive, belonging to group I, is not authorised for the use in jelly minicups and may not be used to produce dehydrated foods intended to rehydrate on ingestion. The Panel noted that these restrictions have to be seen against the background of human cases on severe adverse effects, such as oesophageal obstruction or asphyxiation, after oral intake of other gums/hydrocolloids with similar physicochemical properties as locust bean gum in the form of granules or pills without enough liquid (e.g. guar gum: Ranft and Imhof, 1983;Morse and Malloy, 1990;Opper et al., 1990;Seidner et al., 1990;Lewis, 1992;Halama and Mauldin, 1992;Taylor et al., 1998;FDA, 2015) or in the form of jelly minicups (konjac gum/glucomannan: EFSA, 2004). The Panel noted further that apart from the food additive uses in the food categories mentioned in Regulation (EC) No 1333/2008 (see Table 2), there is direct use of locust bean gum (E 410) as a food additive by the consumer in form of special powders to thicken foods (soup, sauce, cream, dessert, pap for children). Inappropriate preparation of the thickened food could lead to insufficient hydration of the locust bean gum particles associated with the possible risks of oesophageal obstruction. Thus, labelling including instructions for adequate preparation of the thickened food with sufficient liquid is deemed to be necessary. Furthermore, Regulation (EC) No 1333/2008 defines that for combined uses of locust bean gum with certain other gums/hydrocolloids, the individual maximum levels are reduced in certain food categories. However, the Panel noted that for food category 13. Most food additives in the EU are authorised at a specific MPL. However, a food additive may be used at a lower level than the MPL. Therefore, information on actual use levels is required for performing a more realistic exposure assessment, especially for those food additives for which no MPL is set and which are authorised according to QS. In the framework of Regulation (EC) No 1333/2008 on food additives and of Commission Regulation (EU) No 257/2010 regarding the re-evaluation of approved food additives, EFSA issued public calls, 10,11 for occurrence data (usage level and/or concentration data) on locust bean gum (E 410). In response to these public calls, updated information on the actual use levels of locust bean gum (E 410) in foods was made available to EFSA by the food industry and also by gums producers. No analytical data on the concentration of locust bean gum (E 410) in foods were made available by the Member States. Summarised data on reported use levels in foods provided by industry Industry provided EFSA with data on use levels (n = 375) of locust bean gum (E 410) in foods for 64 out of the 84 food categories in which locust bean gum (E 410) is authorised. 
Updated information on the actual use levels of locust bean gum (E 410) in foods was made available to EFSA by BABBI Confectionary Industry, Biovegan GmbH, Danisco, EMCESA, Eurogums A/S, FoodDrinkEurope (FDE), Mars, Specialised Nutrition Europe (SNE) and Rudolf Wild GmbH & Co. The Panel noted that some data providers (e.g. Danisco, Eurogums A/S, Rudolf Wild GmbH & Co) are not food industry using gums in their food products but food additive producers. Usage levels reported by food additive producers should not be considered at the same level as those provided by food industry. Food additive producers might recommend usage levels to the food industry but the final levels might, ultimately, be different, unless food additive producers confirm that these levels are used by food industry. In all other cases, data from food additive producers will only be used in the MPL scenario in case of QS authorisation when no data are available from food industry in order to have the most complete exposure estimates. For instance, for Eurogum A/S, all the submitted data are theoretical amounts suggested or recommended; they are based on their own technical know-how regarding adequate/recommended levels of use in different food applications. Eurogums A/S provided three identical levels on meat products not in line with levels from other data providers. These levels were not considered in the current estimates. Data made available by Danisco and Mars were provided in 2010. More recent data were received for the same food categories and these data were not used in the present exposure estimates. Appendix A provides data on the use levels of locust bean gum (E 410) in foods as reported by industry (food industry and gum producers). Summarised data extracted from the Mintel GNPD database The Mintel's Global New Products Database (GNPD) is an online database which monitors product introductions in consumer packaged goods markets worldwide. It contains information of over 2 million food and beverage products of which more than 900,000 are or have been available on the European food market. Mintel started covering the European Union's food markets in 1996, currently having 20 out of its 28 member countries and Norway presented in the GNPD. 12 For the purpose of this Scientific Opinion, the GNPD 13 was used for checking the labelling of products containing locust bean gum (E 410) within the EU's food products as the GNPD shows the compulsory ingredient information presented in the labelling of products. According to the Mintel GNPD, locust bean gum (E 410) was labelled on more than 24,000 food, drink and supplement products with over 15,500 of them published between 2011 and 2016. The Panel noted that information from Mintel's GNPD (Appendix B) indicated that approximately 95 out of the 118 food subcategories, categorised according to the Mintel nomenclature, in which locust bean gum (E 410) was labelled, were included by the Panel in the current exposure estimates. These 95 food subcategories represented approximately 95% of the food products items labelled with locust bean gum (E 410) in the database. In the remaining 23 food subcategories, in which locust bean gum (E 410) was labelled but which were not included in the exposure assessment, locust bean gum (E 410) was authorised in approximately 20 food subcategories. 
Appendix B presents the percentage of the food products labelled with locust bean gum (E 410) between 2011 and 2016, out of the total number of food products per food subcategories according to the Mintel food classification. 3.3.3. Food consumption data used for the exposure assessment EFSA Comprehensive European Food Consumption Database Since 2010, the EFSA Comprehensive European Food Consumption Database (Comprehensive Database) has been populated with national data on food consumption at a detailed level. The competent authorities in the European countries provide EFSA with data on the level of food consumption by the individual consumer from the most recent national dietary survey in their country (cf. Guidance of EFSA on the 'Use of the EFSA Comprehensive European Food Consumption Database in Exposure Assessment' (EFSA, 2011b). New consumption surveys recently 14 added in the Comprehensive Database were also taken into account in this assessment. 15 The food consumption data gathered by EFSA were collected by different methodologies and thus direct country-to-country comparisons should be interpreted with caution. Depending on the food category and the level of detail used for exposure calculations, uncertainties could be introduced owing to possible subjects' underreporting and/or misreporting of the consumption amounts. Nevertheless, the EFSA Comprehensive Database represents the best available source of food consumption data across Europe at present. Food consumption data from the following population groups: infants, toddlers, children, adolescents, adults and the elderly were used for the exposure assessment. For the present assessment, food consumption data were available from 33 different dietary surveys carried out in 19 European countries (Table 3). Consumption records were codified according to the FoodEx classification system (EFSA, 2011b). The nomenclature from the FoodEx classification system has been linked to the food categorisation system (FCS) as presented in Annex II of Regulation (EC) No 1333/2008, part D, to perform exposure estimates. In practice, FoodEx food codes were matched to the FCS food categories. Food categories considered for the exposure assessment of locust bean gum (E 410) The food categories in which the use of locust bean gum (E 410) is authorised were selected from the nomenclature of the EFSA Comprehensive Database (FoodEx classification system), at the most detailed level possible (up to FoodEx Level 4) (EFSA, 2011b). Some food categories or their restrictions/exceptions are not referenced in the EFSA Comprehensive Database and could therefore not be taken into account in the present estimate. This was the case for 13 food categories and may have resulted in an underestimation of the exposure. The food categories which were not taken into account are described below (in ascending order of the FCS codes): • 01.7.6 Cheese products (excluding products falling in category 16); • 02. For the following food categories, the restrictions/exceptions which apply to the use of locust bean gum (E 410) could not be taken into account, and therefore, the whole food category was considered in the exposure assessment. 
This applies to six food categories and may have resulted in an overestimation of the exposure: • 05.1 Cocoa and chocolate products as covered by Directive 2000/36/EC, only energy-reduced or with no added sugar; • 05.2 Other confectionery, including breath refreshening microsweets, may not be used in jelly minicups and to produce dehydrated foodstuffs intended to rehydrate on ingestion; • 07.1 Bread and rolls, except products in 7.1.1 and 7.1.2; • 08.3.2 Heat-treated meat products, except foie gras, foie gras entier, blocs de foie gras, Additionally, food category 4.2 (Processed fruits and vegetables) was further refined to take into calculation only the FoodEx levels which were presented in data provided for EFSA by the industry. This left the subcategory 4.2.5.3 (Other similar fruit or vegetable spreads) to be ignored during the exposure assessment. For the following food categories, the differences between subgroups could not be taken into account, and therefore, the whole category was considered in the exposure assessment. This may result in an overestimation of the exposure: • 01.6 Cream -01.6.2 Unflavoured live fermented cream products and substitute products with a fat content of less than 20%; • 08.3 Meat products -08.3.1 Non-heat-treated processed meat; -08.3.2 Heat-treated processed meat. • 11.4 Table top Considering that the food category 18 (Processed foods not covered by categories 1-17, excluding foods for infants and young children) being by definition unspecific (e.g. composite foods), processed foods, prepared or composite dishes belonging to the food category 18 were reclassified under their main component food categories. Therefore, no food products remain under the food category 18 and the food category as such will not appear as a food contributor of the total exposure estimates. Otherwise, for 16 food categories, no concentration data were provided to EFSA. For the remaining food categories, the refinements considering the restrictions/exceptions as set in Annex II to Regulation No 1333/2008 were applied. Overall, 59 food categories out of 84 were included in the present exposure assessment to locust bean gum (E 410) (Appendix C). 3.4. Exposure estimates 3.4.1. Exposure to locust bean gum (E 410) from its use as a food additive The Panel estimated chronic exposure to locust bean gum (E 410) for the following population groups: infants, toddlers, children, adolescents, adults and the elderly. Dietary exposure to locust bean gum (E 410) was calculated by multiplying locust bean gum (E 410) concentrations for each food category (Appendix C) with their respective consumption amount per kilogram of body weight for each individual in the Comprehensive Database. The exposure per food category was subsequently added to derive an individual total exposure per day. These exposure estimates were averaged over the number of survey days, resulting in an individual average exposure per day for the survey period. Dietary surveys with only 1 day per subject were excluded as they are considered as not adequate to assess repeated exposure. This was carried out for all individuals per survey and per population group, resulting in distributions of individual exposure per survey and population group (Table 3). On the basis of these distributions, the mean and 95th percentile of exposure were calculated per survey and per population group. 
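As a concrete illustration of the calculation just described (use level multiplied by individual consumption per kg body weight, summed over food categories, averaged over survey days, and then summarised per survey and population group), a minimal sketch is given below. The column names and data layout are assumptions for this example and do not reflect the actual schema of the EFSA Comprehensive Database; the exclusion of dietary surveys with only one day per subject, described above, is assumed to have been applied already.

```python
import pandas as pd

def chronic_exposure(consumption, concentrations):
    """Per-individual chronic exposure estimate, summarised per survey and group.

    consumption    : DataFrame with columns [survey, population_group, subject_id,
                     day, food_category, grams_per_kg_bw], i.e. consumed amounts
                     already normalised to each subject's body weight.
    concentrations : dict mapping food_category -> use level in mg/kg food.
    Returns mean and 95th-percentile exposure (mg/kg bw per day).
    """
    df = consumption.copy()
    # g food per kg bw * mg additive per kg food / 1000 g per kg = mg additive per kg bw
    df["exposure"] = df["grams_per_kg_bw"] / 1000.0 * \
        df["food_category"].map(concentrations).fillna(0.0)

    # Sum over food categories to obtain each subject's total exposure per day,
    # then average over that subject's survey days.
    per_day = df.groupby(["survey", "population_group", "subject_id", "day"])["exposure"].sum()
    per_subject = per_day.groupby(["survey", "population_group", "subject_id"]).mean()

    # Distribution statistics per survey and population group.
    return per_subject.groupby(["survey", "population_group"]).agg(
        mean="mean", p95=lambda x: x.quantile(0.95)
    )
```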
The 95th percentile of exposure was only calculated for those population groups where the sample size was sufficiently large to allow this calculation (EFSA, 2011a). Therefore, in the present assessment, the 95th percentile of exposure for infants from Italy and for toddlers from Belgium, Italy and Spain were not included. It should be noted that, in two dietary surveys from Finland, namely DIPP_2001_2009 and NWSSP07-08 (EFSA, 2011a), the consumption of grain-based products including bread and fine bakery products was coded at the level of their ingredients (flour), which resulted in a very low exposure to locust bean gum in all Finnish populations compared with the other studies. Therefore, these two studies were excluded from the assessment. The exposure assessment to locust bean gum (E 410) was carried out by the ANS Panel based on (1) maximum levels of data provided to EFSA (defined as the maximum level exposure assessment scenario) and (2) reported use levels (defined as the refined exposure assessment scenario) as provided by industry. These two scenarios are discussed in detail below. As locust bean gum (E 410) is also authorised in the food categories 13.1.5.1 and 13.1.5.2, a refined estimated exposure assessment scenario taking into account these two food categories was performed to estimate the exposure of infants and toddlers who may eat and drink these foods for special medical purposes (FSMP). Considering that these specific foods are not reported in the EFSA Comprehensive database, but that foods for infants and young children in good health are, the Panel assumed that the amount consumed of FSMP in infants and toddlers is similar to the consumption of comparable foods in infants and toddlers from the general population. Thus, the consumption of FSMP under the food category 13.1.5 was assumed to be the same amount as the formulae and food products of food categories 13.1.1, 13.1.2, 13.1.3 and 13.1.4 (e.g. the consumption of 'special' infant formulae for medical purposes was assumed to be the same amount than the infant formulae of the FC 13.1.1). Concerning the uses of locust bean gum (E 410) as carriers, there might be food categories where locust bean gum is used according to Annex III and not to Annex II. These food categories can only be addressed by analytical data or limits set in the Regulation No 1333/2008 that were not available to the Panel. Therefore, a possible additional exposure from the use of locust bean gum (E 410) as a food additive in Annex III to Regulation (EC) No 1333/2008 was not considered in any of the exposure assessment scenarios. Another dietary exposure to locust bean gum (E 410) as a food additive could also be from its use directly by the consumer in adding special powder to thicken foods (soup, sauce, cream, dessert, pap for children). This was not considered in any of the exposure assessment scenarios. Maximum level exposure assessment scenario The regulatory maximum level exposure assessment scenario is based on the MPLs as set in Annex II to Regulation (EC) No 1333/2008. As locust bean gum (E 410) is authorised according to QS in almost all food categories, a 'maximum level exposure assessment' scenario was estimated based on the maximum reported use levels provided by industry, as described in the EFSA Conceptual framework (EFSA ANS Panel, 2014) (Appendix C). 
The Panel considers the exposure estimates derived following this scenario as the most conservative as it is assumed that the population group will be exposed to locust bean gum (E 410) present in food at maximum reported use levels over a longer period of time. Refined exposure assessment scenario The refined exposure assessment scenario is based on use levels reported by industry. This exposure scenario can consider only food categories for which the above data were available to the Panel. Appendix C summarises the concentration levels of locust bean gum (E 410) used in the refined exposure assessment scenario. Based on the available data set, the Panel calculated two refined exposure estimates based on different model populations: • The brand-loyal consumer scenario: It was assumed that a consumer is exposed long-term to locust bean gum (E 410) present at the maximum reported use for one food category. This exposure estimate is calculated as follows: combining food consumption with the maximum of the reported use levels for the main contributing food category at the individual level; using the mean of the typical reported use levels for the remaining food categories. • The non-brand-loyal consumer scenario: It was assumed that a consumer is exposed long term to locust bean gum (E 410) present at the mean reported use levels in food. This exposure estimate is calculated using the mean of the typical reported use levels for all food categories. For the scenario taking into account the FSMP, considering that it is very specific diet it is assumed that consumers are brand-loyal and only the results of the brand-loyal scenario were presented. Table 4 summarises the estimated exposure to locust bean gum (E 410) from its use as a food additive in six population groups (Table 3) Considering the general population, from the maximum level exposure assessment scenario, the mean exposure to locust bean gum (E 410) from its use as a food additive ranged from 51.6 mg/kg bw per day for adults to 436.7 mg/kg bw per day in infants. The 95th percentile of exposure to locust bean gum (E 410) ranged from 100.8 mg/kg bw per day for adults to 886.4 mg/kg bw per day in infants. From the refined estimated exposure scenario, in the brand-loyal scenario, the mean exposure to locust bean gum (E 410) from its use as a food additive ranged from 35.7 mg/kg bw per day in adults to 368.9 mg/kg bw per day in infants. The 95th percentile of exposure to locust bean gum (E 410) ranged from 74 mg/kg bw per day for adults to 765.2 mg/kg bw per day in infants. In the non-brand-loyal scenario, the mean exposure to locust bean gum (E 410) from its use as a food additive ranged from 20.7 mg/kg bw per day for adults to 204.3 mg/kg bw per day in infants. The 95th percentile of exposure locust bean gum (E 410) ranged from 38.8 mg/kg bw per day for the elderly to 415.4 mg/kg bw per day in infants. Uncertainty analysis Uncertainties in the exposure assessment of locust bean gum (E 410) have been discussed above. In accordance with the guidance provided in the EFSA opinion related to uncertainties in dietary exposure assessment (EFSA, 2007), the following sources of uncertainties have been considered and summarised in Table 9. 
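Before turning to the uncertainty analysis, the snippet below illustrates the only difference between the two refined scenarios described above: which reported use level is assigned to each food category before the per-individual aggregation shown earlier. The dictionary layout and function name are assumptions for illustration.

```python
def scenario_concentrations(use_levels, main_contributing_category, scenario):
    """Select the concentration per food category for the refined scenarios.

    use_levels : dict mapping food_category -> {"max": maximum reported use level,
                 "mean": mean of typical reported use levels}, in mg/kg food.
    main_contributing_category : food category contributing most to the
                 individual's exposure (identified at the individual level).
    scenario   : "brand_loyal" or "non_brand_loyal".
    """
    if scenario == "non_brand_loyal":
        # Mean of the typical reported use levels for all food categories.
        return {cat: levels["mean"] for cat, levels in use_levels.items()}
    # Brand-loyal: maximum reported level for the main contributing category,
    # mean of the typical reported levels for the remaining categories.
    return {
        cat: levels["max"] if cat == main_contributing_category else levels["mean"]
        for cat, levels in use_levels.items()
    }
```

In the maximum level exposure assessment scenario described earlier, the maximum reported use level would instead be applied to every authorised food category, since locust bean gum (E 410) is authorised at quantum satis in most foods.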
[Table residue: main food categories contributing to the exposure to locust bean gum (E 410), given as ranges of % contribution to the total exposure with the number of surveys in brackets. Only fragments are recoverable from the extracted text: FCS 12.7 Salads and savoury-based sandwich spreads (e.g. 5.5-8.0 (2), 8.1 (1), 6.2-13.7 (3), 7.9-11.6 (2)); FCS 12.9 Protein products, excluding products covered in category 1.8 (10.6 (1), 11.3 (1)); FCS 13.1 Foods for infants and young children (e.g. 37.2-89.2 (5), 5.0-34.6 (6), 56.3-97.2 (5), 7.6-55.3 (7)); and FCS 16 Desserts, excluding products covered in categories 1, 3 and 4 (5.8 (1)). The assignment of the individual ranges to population groups cannot be reconstructed here. (a): The total number of surveys may be greater than the total number of countries as listed in Table 3 because some countries submitted more than one survey for a specific population.]
Overall, the Panel considered that the uncertainties identified would, in general, result in an overestimation of the exposure to locust bean gum (E 410) as a food additive according to Annex II in European countries for the maximum level exposure scenario and for the refined scenario, if it is considered that the food additive may not be used in food categories for which no usage data have been provided. The Panel noted that food categories which may contain locust bean gum (E 410) due to carry-over (Annex III, Parts 1, 2, 3, 4, 5, Section A) were not considered in the current exposure assessment. Considering the exposure to locust bean gum (E 410) for infants and young children eating foods for special medical purposes, the Panel considered that the uncertainties identified would, in general, result in an overestimation of the exposure in European countries for the brand-loyal refined scenario.
Exposure via other uses
The exposure to locust bean gum due to the following uses was not considered in this opinion.
Locust bean gum as ingredient in slimming products and other foods
According to the literature, locust bean gum is used as a bulk-forming agent or 'soluble fibre' in energy-reduced slimming products and other foods taken under conditions of nutritional disorders (Teuscher et al., 2004; Gruenwald et al., 2007; Haensel and Sticher, 2007). Preparations containing locust bean gum are also offered for all age groups including infants to be used as a food for special medical purposes in the management of vomiting and rumination (Teuscher et al., 2004; Gruenwald et al., 2007).
Pharmaceutical use
Locust bean gum can be used as a thickening agent and stabiliser in medicinal products (Martindale, 2014).
Table 9: Sources of uncertainties in the exposure assessment and their direction (a)
• Correspondence of reported use levels to the food items in the EFSA Comprehensive Food Consumption Database: uncertainties as to which types of food the levels refer to; levels considered applicable for all foods within the entire food category — +/−
• Food categories excluded from the exposure assessment due to missing FoodEx linkage (n = 13/84 food categories) — −
• Food categories selected for the exposure assessment: inclusion of food categories without considering the restriction/exception (n = 6/79 food categories) — +
• Food categories included in the exposure assessment: data not available for certain food categories which were excluded from the exposure estimates (n = 16/84 food categories) — −
• Maximum level exposure assessment scenario: food categories which may contain locust bean gum (E 410) due to carry-over not considered (−); exposure calculations based on the maximum reported use levels (reported use from industries) (+)
• Refined exposure assessment scenarios: food categories which may contain locust bean gum (E 410) due to carry-over not considered (−); exposure calculations based on the maximum or mean levels (reported use from industries) (+/−)
• Uncertainty in possible national differences in use levels of food categories — +/−
(a): +, uncertainty with potential to cause overestimation of exposure; −, uncertainty with potential to cause underestimation of exposure.
3.5. Biological and toxicological data
3.5.1. Absorption, distribution, metabolism and excretion
There is evidence that certain high molecular weight dietary polysaccharides, such as gums, could be partially broken down in the large intestine of man. In addition to intermediate metabolites, such as lactate, acrylate or fumarate, the main end products of this colonic anaerobic digestive process are short-chain fatty acids (SCFAs), such as acetic, propionic and butyric acids, which are absorbed from the colon (Cummings and Englyst, 1987).
In vitro studies
The following in vitro data on microbial fermentation of locust bean gum were available: Solutions or suspensions of locust bean gum were incubated with human gastric juice, duodenal juice plus bile, pancreatic juice and succus entericus with or without added rabbit small gut membrane enzymes. Hydrolysis of locust bean gum was not observed in any of the test systems (Semenza, 1975; as reported in JECFA, 1981b). A total of 188 strains from 11 species of Bacteroides found in the human colon were surveyed for their ability to ferment mucins and plant polysaccharides, including gums (Salyers et al., 1977a). Many of the Bacteroides strains tested were able to ferment a variety of plant polysaccharides, including amylose, dextran, pectin and gums. The ability to utilise mucins and plant polysaccharides varied considerably among the Bacteroides species tested. Locust bean gum (origin, Meer Co) was shown to be mainly fermented by Bacteroides 0061-9 strains. A total of 154 strains from 22 species of Bifidobacterium, Peptostreptococcus, Lactobacillus, Ruminococcus, Coprococcus, Eubacterium and Fusobacterium, which are present in high concentrations in the human colon, were surveyed for their ability to ferment 21 different complex carbohydrates, including gums. Among them, locust bean gum (origin, Meer Co) was fermented by some strains of Bifidobacterium and Ruminococcus species (Salyers et al., 1977b).
A total of 290 strains of 29 species of bifidobacteria from human and animal origin (mainly from faecal origin) were surveyed for their ability to ferment complex carbohydrates (Crociani et al., 1994). The substrates fermented by the largest number of species were D-galactosamine, D-glucosamine, amylose and amylopectin. Locust bean gum (origin, Sigma Co.) was shown to be mainly fermented by Bifidobacterium dentium strains. In another in vitro study, locust bean gum (90.9% dry matter, 5.9% crude protein) was fermented using dog faeces as the source of inoculum (Sunvold et al., 1995a). Organic matter disappearance and SCFAs production were measured after 6, 12 or 24 h of incubation. Regardless of the duration of incubation, the disappearance of organic matter, and the production of acetate, propionate and butyrate, were higher in the case of locust bean gum than for other gums such as acacia gum and particularly karaya or xanthan gum. Identical conclusions were drawn from a similar study using the same substrates fermented by cat faecal microflora (Sunvold et al., 1995b). In vivo studies Rats were fed with locust bean gum for 21 days. Thereafter, the large gut microflora of these conditioned rats partially hydrolysed locust bean gum in vitro (Towle and Schranz, 1975;as reported in JECFA, 1981b). A digestibility study in groups of five male and five female rats (Purdue strain) on a mannose-free diet showed that 85-100% of mannose fed as 1% locust bean gum (unspecified origin) in the diet for 18 h was excreted in the faeces over a total of 30 h (Tsay andWhistler, 1975 as reported in JECFA, 1981b). Some decrease in chain length of galactomannan may have occurred, probably through the action of the microflora. Liberation of galactose units was not determined. The Panel noted that no further information was available. The effect of locust bean gum (unspecified origin) on protein digestibility and nitrogen balance was tested in male Sprague-Dawley rats (Harmuth-Hoene and Schwerdtfeger, 1979 as reported in JECFA, 1981b). A total of 12 rats received a basal diet plus 10% locust bean gum (equivalent to 12 g of locust bean gum/kg bw per day). A total of 12 control rats were fed only basal diet. Following a 3-day adaption period, feed remnants, urine and faeces were collected during an 8-day balance period. In the treated group, dry matter digestibility, apparent protein digestibility and urinary nitrogen excretion were significantly decreased. The latter compensated for increased faecal nitrogen loss by these rats. Although the faecal dry matter excretion was significantly increased in treated animals, it was considerably lower than could be expected if the ingested polysaccharide would be excreted unchanged. From these data, the authors assumed that 70-80% of locust bean gum was degraded by the intestinal microflora and made available to the host organism. Human study The behaviour of locust bean gum (unspecified origin) in the gastrointestinal tract of man was described by Holbrook (1951). According to the authors, both X-ray-studies and stool examination showed that the colloidal gel resulting from the disintegration of the locust bean gum preparation permeated the faecal mass in the colon and mixed thoroughly with it. The greatest effect on the faeces is the alteration in consistency (described as soft, gelatinous and homogeneous); the stool weight is only slightly increased. 
Locust bean gum did not disintegrate until reaching the large bowel and there is no interference with normal digestion (Holbrook, 1951). Overall, data on in vitro degradation by human gastrointestinal fluids and on in vivo digestibility of locust bean gum in animals demonstrated that this compound would not be absorbed intact or hydrolysed by digestive enzymes. However, locust bean gum would be fermented with production of SCFAs, such as acetic, propionic and butyric acids, during its passage through the large intestine by strains of bacteria found in the human colon. Based on the available knowledge on the role of SCFA as end products of the fermentation of dietary fibres by the anaerobic intestinal microflora (Topping and Clifton, 2001;den Besten et al., 2013), the Panel considered that their potential formation as fermentation products from locust bean gum does not raise any concern. Despite the absence of convincing in vivo study in humans, the Panel considered that these data indicate that locust bean gum would most probably not be absorbed intact but significantly fermented by enteric bacteria in humans. Acute toxicity The reported oral lethal dose (LD 50 ) values in all tested species exceed 5,000 mg of locust bean gum/kg bw (Table 10). The Panel noted that information about strain, sex, weights and effects was, in most cases, not available. In addition, Covington and Burling (1993) reported the oral LD 50 values for locust bean gum as 'more than 8,000 mg of locust bean gum/kg in rats, mice, rabbits and hamsters'. van Nevel et al. (2005) investigated the influence of various galactomannans on bacteriological and morphological aspects of the gastrointestinal tract in weanling pigs. Four groups of five newly weaned piglets received one of the following diets for 11 or 12 days: control feed (control group), control feed supplemented with guar gum (1%), locust bean gum (1%) or 10% of carob tree seeds meal as a source of locust bean gum (corresponding to about 250 mg gum/kg bw per day or 2,500 mg seeds/kg bw per day). The animals were euthanised after 11 or 12 days and digesta were sampled in the stomach, jejunum and caecum. On these samples, bacteriological, biochemical and morphological determinations were carried out. Only with the carob tree seeds diet the viscosity of jejunal contents was increased. According to the authors, the effects of the addition of 1% of pure guar gum or locust bean gum (250 mg/kg bw per day) to the diet were inconsistent, whereas only supplementation with 10% of carob tree seeds to the diet influenced the intestinal characteristics at the bacteriological and morphological levels. The Panel noted that this study was inconclusive when considered in the assessment of locust bean gum used as a food additive in humans. Short-term and subchronic toxicity Holtzman rats (eight males/group, age not specified) received basic diet (control), basic diet plus 1% cholesterol, basic diet plus 1% cholesterol and 10% locust bean gum (unspecified origin, equivalent to 12,000 mg of locust bean gum/kg bw per day) for 28 days (Ershoff andWells, 1962 as reported in JECFA, 1981b). There were no statistically significant differences in weight gain among the groups. No adverse effects were reported. Sprague-Dawley rats (six males/group, 3-weeks old) were fed a diet containing 50,000 mg of locust bean gum/kg diet (equivalent to 6,000 mg of locust bean gum/kg bw per day) for 4 weeks (Mallett et al., 1984). 
Treatment had no effect on body weight, but the weight of the caecal wall and of the caecal contents was significantly increased. There was also a significant increase in the activity of bacterial enzymes azo reductase, glucosidase, nitroreductase, nitrate reductase and urease compared to controls. The effect of 2% locust bean gum in the diet (unspecified origin, equivalent to 2,400 mg of locust bean gum/kg bw per day) on digestibility and growth was tested in newly weaned Sprague-Dawley rats (10 males/group) for 36 days (Vohra et al., 1979;JECFA, 1981b). The dietary intake was measured every day during the last week of the treatment period. The digestibility of the diet was calculated from dry weights of the feed and excreta (digestibility % = diet intake/excreta). The digestibility of the diet and the growth rate were not influenced by locust bean gum. Rats (10 males and 10 females Wistar/group) were given doses of 0%, 1%, 2% and 5% locust bean gum (origin Cesalpinia, Bergamo) in the diet for 90 days (Til et al., 1974). The dietary doses were equivalent to 0, 900, 1,800 or 4,500 mg of locust bean gum/kg bw per day. The animals were observed daily and weighed on the first day of treatment, weekly thereafter. Urine samples collected at weeks 5 and 13, were tested for pH, protein, glucose, ketones, blood and microscopic examination. Blood samples were obtained at weeks 4 and 12 of treatment and were examined for haemoglobin concentration, red and white blood cell counts, and haematocrit. Serum concentrations of glucose, blood urea nitrogen and activities of alanine transaminase (ALAT), aspartate transaminase (ASAT) and alkaline phosphatase (ALP) were determined. At autopsy, following visual examination of all organs, the following organs were removed and weighed: the heart, kidneys, liver, spleen, brain, testicles or ovaries, thymus, thyroid, adrenals and caecum (filled and empty). Detailed histopathological examination was performed on 10 males and 10 females of the highest dose group and of the control group. Examinations were performed in the weighed organs and also in the lung, prostate, epididymis, urinary bladder, salivary glands, axillary and mesenteric lymph nodes, gastrointestinal tract (six levels) and pancreas. No treatment-related differences between the control and treated groups were observed in general condition, behaviour, survival, growth, feed intake, haematology, blood biochemistry and urinalysis. The serum glucose level determined in the highest dose group was slightly increased. Since this increase was within the normal range for the strain of rats used, this change was not ascribed by the authors to the feeding of the gum. At any dietary level of locust bean gum, gross and microscopic examinations did not reveal any pathological changes attributable to the test substance. The increase in the relative weight of the caecum observed only in groups receiving the highest dose was considered to be of no toxicological relevance by the authors. The authors concluded that the feeding of the locust bean gum at levels up to 5% in the diet of rats for 3 months did not cause any distinct adverse effects. The Panel agreed with this conclusion and identified a NOAEL of 4,500 mg of locust bean gum/kg bw per day, the highest dose tested. The Panel noted that an increased caecum weight in animals fed high amounts of carbohydrates is considered a physiological response to an increased fermentation due to a carbohydrate-induced modification on the composition of the intestinal microbiota. 
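As an aside on the dose figures quoted in these feeding studies, the translation of a dietary concentration (per cent of the feed) into a dose in mg/kg bw per day appears to follow standard default conversion factors. The short sketch below reproduces two of the figures quoted above under that assumption; the factors themselves are not stated in the text and are taken here from the usual EFSA defaults for rodent feeding studies, so they should be read as an assumption rather than as part of the studies' reporting.

```python
# Hedged sketch: converting a dietary concentration to a dose, assuming the EFSA default
# conversion factors (mg/kg bw per day per mg/kg feed) for rodent feeding studies.
DEFAULT_FACTORS = {
    ("rat", "subacute"): 0.12,
    ("rat", "subchronic"): 0.09,
    ("rat", "chronic"): 0.05,
    ("mouse", "subchronic"): 0.20,
    ("mouse", "chronic"): 0.15,
}

def dose_mg_per_kg_bw(percent_in_diet, species, duration):
    mg_per_kg_feed = percent_in_diet * 10_000  # 1% of the diet = 10,000 mg/kg feed
    return mg_per_kg_feed * DEFAULT_FACTORS[(species, duration)]

# Reproduces figures quoted in the text, e.g. 5% in a 90-day rat study and 10% in a 90-day mouse study:
assert round(dose_mg_per_kg_bw(5, "rat", "subchronic")) == 4_500      # Til et al. (1974)
assert round(dose_mg_per_kg_bw(10, "mouse", "subchronic")) == 20_000  # NTP (1982)
```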
Increased caecum weight has been observed in rats fed carbohydrates other that guar gum (Leegwater et al., 1974;Licht et al., 2006). Animals fed diets containing potato starch, inulin or oligofructose had significantly higher caecum weights and lower pH values than the reference animal group (Licht et al., 2006). Different groups of animals fed modified diets containing increased concentration of potato starch, hydroxypropyl starch and hydroxypropyl distarch glycerol showed increases in the relative caecal weights, filled and emptied, with increasing concentrations of the various hydroxypropyl starches. These increases were accompanied by increased severities of diarrhoea that was related to an increased osmotic activity of the caecal fluid in the animals (Leegwater et al., 1974). The authors hypothesised that dietary components not completely digested and/or absorbed in the small intestine, and further fermented by the gut microflora, enhance the amounts of osmotically active material resulting in an increase in water retention and the animals drinking more water leading to the caecum distention to a size larger than normal. A 90-day study was performed in F344 rats (initial weight 69-89 g) and B6C3F1 mice (initial weight 16-21 g) fed diets containing 0%, 0.63%, 1.25%, 2.5%, 5%, and 10% locust bean gum (food grade material, 77-88% galactomannans) (NTP, 1982). The dietary doses were equivalent to 0, 560, 1,120, 2,250, 4,500 and 9,000 mg of locust bean gum/kg bw per day in rats and 1,250, 2,500, 5,000, 10,000 and 20,000 mg of locust bean gum/kg bw per day in mice. Ten animals of each sex and each species per dose were used and separate control groups of 10 animals for each sex and species were included. The investigations included clinical signs, body weights, feed consumption and histopathology of all major organs. Haematology, clinical chemistry and urine were not investigated. One female rat of the 2,250 mg of locust bean gum/kg bw per day group died. The weight gain depression in rats was ≤ 10% in all dosed groups and no compound-related toxic effects were observed. Two male mice (one from the 5,000 mg of locust bean gum/kg bw per day group and one from the control group) and two female mice (one from the 2,500 mg of locust bean gum/kg bw per day and one from the 5,000 mg of locust bean gum/kg bw per day group) died from accidental causes. No weight gain changes were reported in mice (NTP, 1982). The Panel noted that no adverse effects were reported in this study. Overall, the subacute or subchronic administration of oral doses as high as 9,000 mg of locust bean gum/kg bw per day to rats did not induce any adverse effect. The only side effect observed was the caecal enlargement. The Panel considered that an increased caecum weight in animals fed high amounts of carbohydrates is considered as a physiological response to an increased fermentation due to a carbohydrate-induced modification on the composition of the intestinal microbiota. In vitro Negative results for locust bean gum (unspecified origin) were reported when assayed for gene mutation in the Ames test with Salmonella Typhimurium, tester strains TA1535, TA1537, and TA1538 and for gene conversion with Saccharomyces cerevisiae, tester strain D4, both in the absence and presence of exogenous mouse, rat and monkey liver S9 metabolism. 
Dose levels of 0.45%, 0.90% and 1.80% (equivalent to 4.5, 9.0 and 18.0 mg/mL) for the Ames test and dose levels of 0.55%, 1.1% and 2.2% (equivalent to 5.5, 11 and 22 mg/mL) for the gene conversion assay were used (Litton Bionetics, 1975). However, the Panel noted that the results from both studies are limited due to the inadequate number of tester strains used in the bacterial gene mutation assay and that the gene conversion assay with S. cerevisiae has not been validated and is no longer employed for risk assessment. In the spore rec-assay employing the Bacillus subtilis strain M45 rec−, unable to repair DNA damage, and the wild-type strain H17 rec+ as a control, locust bean gum (unspecified origin) was assessed for its potential DNA-modifying effects at a single dose level of 2.5 mg/plate, both in the absence and presence of S9 metabolism. Negative results were obtained. The Panel noted that this mutagenicity assay is not frequently used and has not been validated (Ishizaki and Ueno, 1987). In the studies by Ashby and Tennant (1988) and Zeiger et al. (1992), the mutagenicity of locust bean gum (unspecified origin) was evaluated in the reverse mutation assay using the S. Typhimurium strains TA1535, TA1537, TA97, TA98 and TA100 according to the method of Ames by the preincubation protocol, both in the absence and presence of Aroclor 1254-induced rat and hamster S9 fractions at 10% and 30%, up to a dose level of 10 mg/plate. The outcome of the study clearly indicated that locust bean gum was devoid of mutagenic activity under the reported experimental conditions. The Panel noted that the study complies with the current OECD Guideline 471, with the exception that tester strains TA102 or WP2uvrA bearing an AT mutation were not used. Locust bean gum (unspecified origin) was assessed for its capability to induce chromosomal aberrations in anaphase in human embryonic lung cells (WI-38) at dose levels up to 1,000 µg/mL without S9 mix. No cytogenetic effects were reported by the authors. However, the Panel noted that this assay has not been validated and is no longer employed for risk assessment (Newell, 1972, 1974).
In vivo
In the studies by Newell (1972, 1974), locust bean gum (unspecified origin) was assessed for its genotoxicity in the following in vivo assays:
1) Host-mediated assay in Swiss Webster male mice administered once by oral gavage at 30, 2,500 or 5,000 mg/kg bw, or for five consecutive days at the same dose levels employed in the single administration regime, using the S. Typhimurium tester strains TA1530 and G-46 for mutagenicity and S. cerevisiae (strain D3) for mitotic recombination.
2) Chromosomal aberrations in bone marrow cells of male albino rats administered by oral gavage once at 30, 2,500 or 5,000 mg/kg bw, or for five consecutive days at the same dose levels employed in the single administration regime. In the acute treatment, sampling of bone marrow cells was performed at 6, 24 and 48 h from the administration of the test compound, whereas, in the multiple administration study, sampling of bone marrow cells was only performed at 6 h after the last administration.
3) Dominant lethal assay in Sprague-Dawley rats following administration of the test compound by oral gavage once at 30, 2,500 or 5,000 mg/kg bw, or for five consecutive days at the same dose levels employed in the single administration regime. Negative and positive control animal groups were also included.
Following treatment, the males were sequentially mated to two females per week for 8 weeks (7 weeks in the subacute study) and housed separately until sacrifice. Total implants (live fetuses plus early and late fetal deaths), total dead (early and late fetal deaths), dead implants per total implants and pre-implantation loss (calculated as the difference between the total corpora lutea and total implant counts) were evaluated. No genotoxic effects were observed in any of the assays performed. The Panel noted that the host-mediated assay system has not received further validation and is presently considered to be obsolete. For chromosomal aberration, no significant reduction in mitotic indices in the locust bean-treated groups compared with the concurrent vehicle control groups was observed, indicating no target tissue exposure. However, this is in line with the evidence that locust bean gum is not absorbed as such but appears to be slightly fermented in the intestine to SCFAs. Overall, although the available in vitro and in vivo studies are generally limited or not relevant for the reasons listed above, no genotoxic activity has been observed for locust bean gum. On this basis, and considering the chemical structure of locust bean gum and its negligible absorption, the Panel concluded that there is no concern with respect to the genotoxicity of locust bean gum (E 410). JECFA (1981b) reported an unpublished study from Carlson and Domanski (1980). In this study, groups of 50 male and 50 female Charles River albino rats were given diets containing 2% and 5% locust bean gum (unspecified origin, equivalent to 1,000 and 2,500 mg of locust bean gum/kg bw per day) for 24 months. Rats (10/sex per dose) were sacrificed at 12 months. Significantly greater body weights were observed in the females of the 2% group at week 11 as well as at weeks 94-100, and in the females of the 5% group at week 13. The following changes in haematological parameters were noted: decrease in reticulocyte count in females of the 5% group at 6 months; decrease in haemoglobin concentration in females of the 2% group at 6 months; and increase in segmented neutrophils and decrease in lymphocytes in males of the 2% group at 6 months. At the interim sacrifice, a statistically significant reduction in the absolute thyroid weight was observed in males of both treated groups and, at final sacrifice, a significant reduction in the absolute brain weight in females of the 5% group was reported. Significant treatment-related effects on gross or microscopic pathology were not seen. The study report was not available to the Panel and, therefore, the relevance of the reported effects cannot be fully evaluated. However, the Panel considered that the effects reported in the JECFA evaluation did not show any consistency (no dose-effect relationship, no consistency in time, no histopathological findings in these organs) and therefore their adversity does not seem to be demonstrated. In addition, JECFA did not consider these effects in its conclusion on this study. The authors of the study concluded that under the conditions of this bioassay, locust bean gum was not carcinogenic for male or female F344 rats. The Panel agreed with this conclusion.
Chronic toxicity and carcinogenicity
A carcinogenicity study was performed in F344 rats and B6C3F1 mice fed diets containing 2.5% and 5% locust bean gum (food grade material, 77-88% galactomannans) for 103 weeks (NTP, 1982; Melnick et al., 1983).
The investigations included clinical signs, body weights, feed consumption and histopathology of all major organs. Haematology, clinical chemistry and urine analysis were not performed. In mice, the dietary doses were equivalent to 3,750 and 7,500 mg of locust bean gum/kg bw per day. A total of 50 animals of each sex per dose were used and separate control groups of each sex and species were included. Although alveolar/bronchiolar adenomas occurred in low-dose male mice at a significantly (p = 0.017) higher incidence than that in the controls (7/50, 17/50 and 11/50), no statistically significant results were obtained when the combined incidence of animals with either alveolar/bronchiolar adenomas or carcinomas was analysed (14/50, 21/50 and 14/50). In rats, the dietary doses were equivalent to 1,250 and 2,500 mg of locust bean gum/kg bw per day. A total of 50 animals of each sex per dose were used and separate control groups of each sex and species were included. Cortical adenomas in the adrenal gland of female rats occurred with a statistically significant (p < 0.05) positive trend (1/50, 4/50 and 6/50), but comparisons between test groups and the control group were not statistically different. From these data in mice and rats, the authors concluded that under the conditions of this bioassay, locust bean gum was not carcinogenic for male or female F344 rats or B6C3F1 mice. Considering the absence of a dose-related effect of these findings, the Panel agreed with the conclusion of the authors and noted that no adverse effects were reported in rats at the highest dose tested of 2,500 mg locust bean gum/kg bw per day. Overall, the Panel considered that locust bean gum is not of concern with respect to carcinogenicity.
3.5.6. Reproductive and developmental toxicity
3.5.6.1. Reproductive toxicity studies
As reported by JECFA (1981b): 'A three-generation reproduction study was carried out in CD strain Charles River albino rats. Groups of 10 male and 20 female animals were fed a rat chow diet containing 2% or 5% locust bean gum (equivalent to 1,800 and 4,500 mg LBG/kg bw/day) or 5% alpha cellulose (control). The same doses and animal numbers were used throughout the study. In each generation the parental animals received the test diet for 11 weeks prior to mating and then through mating, gestation and weaning. Two or three litters were raised per generation and the second litter was used to produce the following generation. Ten males and 10 females from each treatment group of the F3b generation were selected for histopathological examination of 12 major organs and tissues and organ weight analysis. All other animals were subject to gross necropsy only. There were statistically significant decreases in premating body weight gain in the F0 females fed 2% LBG and in final body weight in the females fed 5% LBG. There were the following significant differences in organ weight ratios in the F3b 5% LBG group as compared to the controls: smaller spleen to brain weight, absolute liver weight, liver to brain weight and larger brain to body weight. These differences were ascribed to the highly variable values for these parameters in young rats and the fact that all the animals may not have been at the same age at sacrifice. This factor could have had an effect on organ weight ratios in young animals. There were no significant treatment-related effects on reproductive indices or gross or microscopic pathology'. The Panel noted that the original study report was not available.
According to JECFA, no reproductive (including fertility), maternal or developmental toxic effects were seen in a three-generation study with rats at the doses tested, up to 5% locust bean gum. The Panel considered that the NOAEL (according to JECFA, 1981b) for reproductive toxicity was 5% (equivalent to 4,500 mg of locust bean gum/kg bw per day), the highest dose tested.
Developmental toxicity studies
Several developmental toxicity studies of locust bean gum were conducted in CD-1 mice, Wistar rats, golden hamsters and Dutch-belted rabbits (FDRL, 1972). Animals were administered different doses of locust bean gum (unspecified origin) suspended in anhydrous corn oil by gavage (1.0 mL/kg bw); the control groups were vehicle-treated. Body weights were recorded at regular intervals during gestation and all animals were observed daily for appearance and behaviour. All dams were subjected to caesarean section, and the numbers of implantation sites, resorption sites, live and dead fetuses, and the body weight of live fetuses were recorded. All fetuses were examined grossly for external abnormalities; one-third underwent detailed visceral examinations and two-thirds were stained and examined for skeletal defects.
Mice
Pregnant CD-1 mice (20-21/group) were treated by oral gavage once daily from gestation day (GD) 6-15 with doses of 0, 13, 60, 280 or 1,300 mg/kg bw per day of locust bean in corn oil (20, 18, 19, 20 and 16 pregnant surviving females/group, respectively, and 0, 2, 2, 1 and 5 dead females, respectively) (Morgareidge, 1972). At necropsy on GD 17, the surviving dams appeared to be completely normal and the number of implantations and live fetuses was comparable to the control group. Doses up to 1,300 mg of locust bean/kg bw per day had no effects on implantation or on maternal and fetal survival (with the exception of the deaths of five of 21 females in the highest dose group). The numbers of live or dead fetuses, resorptions and average implant sites, and also fetal weights, did not differ among the groups. The sex distribution of fetuses was not affected by the treatment. The number of abnormalities seen in either soft tissues or skeletons at fetal pathological examination of the locust bean-treated groups did not differ from the number in vehicle-treated dams of the control group.
Rats
Pregnant Wistar rats (24/group) were treated by oral gavage once daily from GD 6-15 with doses of 0, 13, 60, 280 or 1,300 mg/kg bw per day of locust bean in corn oil (23, 23, 21, 24 and 22 pregnant surviving females/group, respectively) (Morgareidge, 1972). At necropsy on GD 20, the dams appeared to be completely normal, and doses up to 1,300 mg of locust bean/kg bw per day had no effects on implantation or on maternal and fetal survival. The numbers of live or dead fetuses, resorptions, average implantations and fetal weights did not differ among the groups. The sex distribution of fetuses was not affected by the treatment. The number of abnormalities seen in either soft tissues or skeletons at fetal pathological examination of the locust bean-treated groups did not differ from the number in vehicle-treated dams of the control group.
Hamsters
Pregnant golden hamsters (20 animals/group) were treated by oral gavage once daily from GD 6-10 of gestation with doses of 0, 10, 45, 220 or 1,000 mg/kg bw per day of locust bean in corn oil (20, 20, 19, 20 and 19 pregnant surviving females/group, respectively) (Morgareidge, 1972).
At necropsy on GD 14, the dams appeared to be completely normal, and doses up to 1,000 mg locust bean/kg bw per day showed no effects on implantation or on maternal and fetal survival. The numbers of live or dead fetuses, resorptions, average implant sites or fetal weights did not differ among the groups. The sex distribution of fetuses was not affected by the treatment. The number of abnormalities seen in either soft tissues or skeletons at fetal pathological examination of the locust bean-treated groups did not differ from the number in vehicle-treated dams of the control group.
Rabbits
Artificially inseminated Dutch-belted rabbits (15 animals/group) were treated by oral gavage once daily from GD 6-18 with doses of 0, 9, 42, 196 or 910 mg/kg bw per day of locust bean in corn oil (13, 11, 12, 11 and 6 pregnant surviving females/group, respectively) (Morgareidge, 1972). The mortality in this test was 1, 0, 0, 2 and 7 dams in the respective groups. Maternal death was preceded by severe bloody diarrhoea and urinary incontinence, and anorexia was observed 48-72 h before death, with pathological findings of haemorrhages in the mucosa of the small intestines. At necropsy on GD 29, the surviving does appeared normal throughout the observation period and had normal fetuses. No effect was observed on the number of implantations. The numbers of live or dead fetuses, resorptions, average implant sites or fetal weights did not differ among the groups. The sex distribution of fetuses was not affected by the treatment. The number of abnormalities seen in either soft tissues or skeletons at fetal pathological examination of the locust bean-treated groups did not differ from the number in vehicle-treated dams of the control group. The Panel noted that the mortality observed in mice at 1,300 mg/kg bw per day and in rabbits at 910 mg/kg bw per day is not in line with the findings from the acute toxicity studies (Maxwell and Newell, 1972). The higher maternal toxicity observed in these studies, as compared with other species, may have been caused by the difficulty of dosing mice and rabbits with a viscous solution. Therefore, the Panel considered these studies not suitable for the risk assessment. Overall, the Panel considered that the NOAEL for reproductive toxicity was 5% in the diet (equivalent to 4,500 mg of locust bean gum/kg bw per day), the highest dose tested. Furthermore, the Panel considered the prenatal developmental toxicity studies in mice and rabbits not suitable for the risk assessment due to high maternal toxicity (mortality). In the prenatal developmental toxicity studies in rats and hamsters, no maternal and developmental effects were observed up to the highest doses tested (1,300 and 1,000 mg locust bean gum/kg bw per day) (FDRL, 1972).
3.5.7. Other studies including hypersensitivity, allergenicity and food intolerance
3.5.7.1. Animal studies
As reported by Covington and Burling (1993), an unpublished animal study was conducted by the oral administration of locust bean gum (unspecified origin) to male B6C3F1 mice at a dose of 2,500 mg of locust bean gum/kg bw per day (considered by the authors as 50% of the maximum tolerated dose) daily for 11 days (no further information available). The animals were sensitised to sheep red blood cells (SRBC) by intraperitoneal injections on the third day of test material administration.
On the 12th day following the initiation, no suppression of the anti-SRBC primary immune response was observed among locust bean gum-treated mice, even at the high dose level administered. The effect of polysaccharides, including locust bean gum, fed at the 10% level in a semisynthetic diet on the absorption of Ca, Fe, Zn, Cu, Cr and Co, on weight gain and on faecal dry matter excretion was studied over a period of 8 days in five groups of 12 weanling male rats each and compared with a control group (Harmuth-Hoene and Schelenz, 1980). Locust bean gum significantly reduced the absorption of Zn, Cr, Cu and Co. A considerable portion of locust bean gum was metabolised, presumably due to the action of intestinal bacteria. The Panel noted that JECFA recently requested toxicological data from studies in neonatal animals, adequate to evaluate the safety of use in infant formulae (JECFA, 2016). This study is not yet available and its results may also be relevant for the risk assessment of the use of the additive in food for age groups other than infants up to 12 weeks of age, which are excluded from this evaluation.
Adults
Few case reports on allergic reactions in humans are available in the literature. Occupational rhinitis and asthma due to locust bean gum (unspecified origin) have been described in a chemist (Mechaneck, 1954), in a jam factory worker (van der Brempt et al., 1992) and in an ice cream manufacturer (Scoditti et al., 1996). Fiocchi et al. (1999) reported that 'carob'-specific sensitisation, apparent both in vitro and in skin prick tests, can be concordant with peanut allergy and that heat-processing deactivates carob protein allergenicity. Immediate hypersensitivity to locust bean pods, resulting in burning sensations, discomfort in the throat and chest tightness when chewing locust bean pods, was reported. Prick tests showed positive results for locust bean gum and were confirmed by detection of specific immunoglobulin E (IgE) to locust bean gum (Komericki and Kränke, 2009). In a case of urticaria and angioedema in a woman after ingestion of a crème caramel containing locust bean gum (unspecified origin), allergy to locust bean gum was proven by prick testing and detection of specific IgE. Skin prick tests were positive for locust bean gum and high titres of serum IgE specific to locust bean gum demonstrated an IgE-mediated mechanism (Alarcon et al., 2011). Various studies investigated the effect of locust bean gum on plasma lipids. In the study of Zavoral et al. (1983), 28 subjects (17 adults and 11 adolescents, comprising 18 familial hypercholesterolaemic (FHC) and 10 normal subjects) were fed for 8 weeks diets with and without locust bean gum (10-20 and 10-35 g/day in children and adults, respectively; corresponding to 200-500 mg/kg bw per day) to assess the hypolipidaemic effect of the gum. Identical food products with and without locust bean gum were consumed by two groups (A and B) of arbitrarily assigned patients using a cross-over design. Plasma cholesterol, low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol (HDL-C), very low-density lipoprotein cholesterol and triglycerides were measured at 2-week intervals and compared with control feeding periods. According to the authors, locust bean gum food products provided a safe, effective means to lower serum lipids in hypercholesterolaemic and normal adults and children over a 3-month feeding study. In FHC patients, serum cholesterol was lowered by 10-17%, LDL-C was lowered by 11-19%, and HDL/LDL ratios were unchanged.
The locust bean gum food products were consumed without significant side effects and were well accepted. Behall et al. (1984) considered 12 men consuming a basal diet for 20 weeks. Four refined fibres including locust bean gum were added to the basal diet individually for 4 week each at levels ranging from 19.1 to 27.0 g/day of fibre source (corresponding to 270-380 mg locust bean gum/kg bw per day). Locust bean gum added to the diet decreased significantly total serum cholesterol and plasma low-density lipoproteins, whereas no change was observed in triglyceride, very LDL or HDL cholesterol levels. In this study, no side effects of locust bean gum were described. Infants and children Locust bean gum is described to be frequently added to infant formulae in the treatment of noncomplicated gastro-oesophageal reflux (GOR) (Meunier et al., 2014). The efficacy of thickening agents depends on their ability to increase gastric retention time, avoiding a return to the oesophagus during the first digestion phase (Corvaglia et al., 2013). These types of products are commercialised under the name of antireflux or antiregurgitation infant formulae (AR formulae), and are promoted with the claim that they benefit infants who have GOR or who spit up regularly. The North American and the European Society for Pediatric Gastroenterology, Hepatology, and Nutrition (NASPGHAN and ESPGHAN, respectively) commented on the use of AR formulae in physiological GOR and gastro-oesophageal reflux disease (GORD) 16 and reported that it may decrease visible regurgitation but does not result in a measurable decrease in the frequency of oesophageal reflux episodes. However, long-term use would require additional studies (Vandenplas and Rudolph, 2009). Authors of a systematic review concluded that there was no evidence from randomised controlled trials to support or refute the efficacy of feed thickeners in newborn infants with GOR (Huang et al., 2002). Locust bean gum as ingredient for AR formulae, is legally allowed in Europe. 17 Locust bean gum may be added up to a maximum level of 10 g/L from birth onwards (see Section 3.2). As previously mentioned, locust bean gum was considered by the SCF as acceptable for its use up to 1 g/100 mL (10 g/L) in Foods for Special Medical Purposes when prescribed under medical supervision to treat GORD (SCF, 2003) (see section 1.2). • Paediatric case reports Few cases of severe adverse effects or fatality in premature or low birth weight infants with GORD receiving infant formulae thickened with locust bean gum were reported in the open literature. The origin of such effects could be related to the very immature state of their gastrointestinal tract. Sievers and Schaub (2003) reported a study in which six premature infants (age not specified) were fed 0.2-0.5 g/100 mL of locust bean gum formula to reduce vomiting. This treatment resulted in an increased frequency of defaecation, metabolic acidosis and hypokalaemia as long as the exposure continued. Clarke and Robinson (2003) reported fatal necrotising enterocolitis observed on days 26 and 30, in two extremely low birth weight premature infants fed locust bean gum thickened milk (dose unspecified). According to the authors, thickened milk may have led to a bowel obstruction resulting in a necrotising enterocolitis; however, the pathophysiology of these two cases was not investigated because no post-mortem examination was done. 
A study was conducted in 166 bottle-fed infants under 4 months of age, who presented with frequent regurgitation/vomiting due to uncomplicated GOR at the outpatient clinics of six paediatric centres (Iacono et al., 2002). Patients were divided into two groups. Group 1 comprised 82 infants (45 males, 37 females; median age 1.5 months) who were treated with an AR formula available on the Italian market thickened with locust bean gum (the concentration is not given, but is expected not to exceed MPLs according to the EU food legislation). Group 2 consisted of 84 patients (43 males, 41 females; median age 1.5 months) treated with a common, adapted formula without any thickening component. No differences between the two groups in regurgitation scores were recorded either at baseline or after 4 and 8 weeks of treatment. However, in 14 patients in the group receiving the formula thickened with locust bean gum, treatment was suspended due to the onset of diarrhoea during the first 2 weeks of the study. The authors concluded that the use of the thickened formula could cause diarrhoea.
[Footnote 16: GOR is a normal physiological process occurring several times per day in healthy infants, children and adults, in the postprandial period, and causes few or no symptoms. In contrast, GORD is present when the reflux of gastric contents causes troublesome symptoms and/or complications, with possible underlying pathophysiological causes (Vandenplas and Rudolph, 2009).]
Considering the specific group of infants of more than 12 weeks of age, a case of hypersensitivity and cases of gastrointestinal discomfort have been associated with the use of locust bean gum in infant formulae to treat GOR and are described in the following. An immediate hypersensitivity to locust bean gum after oral ingestion has been reported in an infant (Savino et al., 1999). The report described a 5-month-old girl, suffering from GOR, who developed recurrent vomiting, urticaria and flushing after ingestion of a milk formula containing locust bean gum as a thickening agent. Diagnosis was based only on the case history; no specific tests were performed. Vivatvakin and Buachum (2003) described the effect of a locust bean gum milk formula (unspecified concentration) on gastric emptying and on regurgitation in 20 Thai infants (4-24 weeks old). Statistically significant improvements in symptoms of vomiting and in weight gain per week were observed in infants consuming the locust bean gum formula for 2-4 weeks. However, there was no significant difference in gastric emptying half time. The only side effect was a significantly increased flatus observed in one infant. Miyazawa et al. (2006) investigated the effects of milk-based formulae thickened with two different concentrations of locust bean gum (0.4 and 0.5 g/100 mL, formulae HL-350 and HL-450, respectively) on gastric emptying in 27 infants (18-19 weeks old) with recurrent regurgitation episodes. The thickened formula containing the higher concentration of locust bean gum (0.5 g/100 mL) slowed gastric emptying in infants with GOR. In this study, the mothers of two subjects fed with HL-350 and one subject fed with HL-450 reported an increase in bowel movements, but none of these infants had severe diarrhoea. The Panel noted that the highest applied concentration reported was only half of the MPL of locust bean gum in formulae for special medical purposes (categories 13.1.5.1 and 13.1.5.2).
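For clarity, the Panel's remark that the highest applied concentration is half of the MPL follows from a simple unit conversion (a worked check of the figures quoted above, not additional data): 0.5 g/100 mL = 5 g/L, i.e. one half of the 10 g/L permitted in food categories 13.1.5.1 and 13.1.5.2.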
Carré described that, in the management of GOR in infants, carob seed (locust bean) preparations added to bottle feeds in the proportion of 1 g to 115 mL were given. The author indicated that the treatment could cause frequent loose gelatinous stools, which might occasionally necessitate the temporary withdrawal of the treatment (Carré, 1985). The Panel noted from the above-mentioned studies that hypersensitivity and undesirable gastrointestinal effects occurred when locust bean gum was used to treat GOR in infants and young children. However, the evidence on these effects was limited. Nevertheless, the Panel noted that the Committee on Nutrition of the European Society for Paediatric Gastroenterology Hepatology and Nutrition (ESPGHAN) concluded 'The effects of thickening agents in infant feeds on the bioavailability of nutrients; on mucosal, metabolic, and endocrine responses; and on infant growth have not been clarified sufficiently. The frequency of allergic reactions to various thickening agents in infancy also is unknown. In view of the limited information available, the Committee recommends that thickening agents and infant diets containing thickening agents should not be used indiscriminately in healthy, thriving infants who spit up. Until better information is available, thickening agents and infant diets containing thickening agents should be used only in selected infants with failure to thrive caused by excessive nutrient losses associated with regurgitation, and used only in conjunction with appropriate medical treatment and supervision. The current practice of indiscriminately offering thickened infant diets to the general public in retail stores, with claims that these products benefit infants who spit up, results in their frequent overuse and misuse, and should be discontinued' (Aggett et al., 2002). Furthermore, the ESPGHAN/NASPGHAN concluded that 'The impact of thickened formula on the natural history of physiologic GOR or GORD has not been studied. The allergenicity of commercial thickening agents is uncertain, and the possible nutritional risks of long-term use require additional study' (Vandenplas and Rudolph, 2009). Overall, the Panel considered that these conclusions were in line with the one by the SCF regarding the use of locust bean gum in the revision of the essential requirements of infant formulae and follow-on formulae intended for the feeding of infants and young children (SCF, 2003) (see Section 1.2).
Possible effects of locust bean gum on the bioavailability of nutrients
Several authors have indicated the need to explore further the effect of thickening agents on the nutrition and health of infants, as some studies in in vitro models have suggested that the bioavailability of calcium, iron and zinc may be affected by these compounds (Bosscher et al., 2000, 2003a; Vandenplas et al., 2011; González-Bermúdez et al., 2014). However, one in vivo human study (Behall et al., 1987) did not support this hypothesis.
Discussion
Locust bean gum is the ground endosperm of the seeds of the strains of carob tree, Ceratonia siliqua (L.) Taub. (family Leguminosae). The substance has the CAS Registry Number 9000-40-2 and the EINECS number 232-541-5. Purity criteria on locust bean gum (E 410) have been defined in Commission Regulation (EU) No 231/2012 and by JECFA (JECFA, 2008a,b). As indicated by the JECFA specifications, locust bean gum (E 410) is also produced in a purified form as clarified locust bean gum.
According to industry, the 'Carob Bean Gum is also produced in purified form as "Carob Bean Gum (Clarified)" INS No. E-410. It is however not being used as replacement of "Carob Bean Gum"' (Documentation provided to EFSA, number 5). According to the recent JECFA evaluation (summary of JECFA, 2016), a limit of 0.5 mg/kg for lead was introduced for locust bean gum and locust bean gum (clarified) for use in infant formula. Because of both the botanical origin and the polysaccharidic nature of gums, they can be a substrate of microbiological contamination and of field and storage fungal development. The latter has recently been demonstrated by the mycotoxin contamination of gums (Zhang et al., 2014). The Panel therefore noted that criteria for the absence of Salmonella spp. and E. coli, for TAMC and for TYMC should be included in the EU specifications for locust bean gum. The Panel noted that special requirements for specifications of locust bean gum (E 410) to be used in formulae or food for infants, toddlers and other young children (food category 13.1) might be prudent. The Panel noted that when locust bean gum (E 410) was added to food in combination with other gums, such as xanthan gum (E 415), agar (E 406) or carrageenan (E 407), there was a greater than additive increase in viscosity or gel strength, which may have implications for the safety of the product. The in vitro degradation by human gastrointestinal fluids and the in vivo digestibility of locust bean gum in animals have been investigated. Despite the absence of an in vivo study in humans, the Panel considered that these data indicated that locust bean gum would not be absorbed intact but would be significantly fermented by the intestinal microflora. Locust bean gum is regarded as not acutely toxic, based on the results of acute oral toxicity studies. In a subchronic toxicity study in rats by Til et al. (1974), no adverse effects were reported at doses up to 4,500 mg of locust bean gum/kg bw per day. In the NTP (1982) study, no adverse effects were reported in mice and rats receiving doses up to 20,000 and 9,000 mg of locust bean gum/kg bw per day, respectively. The Panel noted that the NTP study did not include haematology, urinalysis and clinical chemistry. The available in vitro and in vivo data for genotoxicity were limited. However, no genotoxic activity was observed for locust bean gum. In addition, considering the chemical structure of locust bean gum and its negligible absorption, the Panel concluded that there is no concern with respect to the genotoxicity of locust bean gum (E 410). Locust bean gum was tested for carcinogenicity in mice and rats receiving doses up to 7,500 and up to 2,500 mg of locust bean gum/kg bw per day, respectively, for 103 weeks (Carlson and Domanski, 1980, as reported by JECFA; NTP, 1982; Melnick et al., 1983). The Panel noted that in these studies, locust bean gum was not carcinogenic. The Panel considered that no adverse effects were reported in a three-generation reproductive toxicity study in rats receiving doses up to 5% locust bean gum in the diet (equivalent to 4,500 mg of locust bean gum/kg bw per day), the highest dose tested. In the prenatal developmental toxicity studies in rats and hamsters, no maternal and developmental effects were observed up to the highest doses tested (1,300 and 1,000 mg locust bean gum/kg bw per day) (FDRL, 1972). In human adults, there are few case reports of allergy to locust bean gum after oral ingestion.
However, most of the reports described rhinitis and asthma that were essentially caused by occupational contact with locust bean gum. The Panel recognised the potential allergenicity of locust bean gum; however, it considered that the specification and origin of the gum are lacking in these studies. Case reports of hypersensitivity reactions associated with locust bean gum included the case of a 5-month-old infant; the Panel considered that this hypersensitivity might be due to the locust bean gum proteins and that their content should therefore be reduced as much as possible. The Panel further noted that, in a group of 28 hypercholesterolaemic or normal adolescents and adults treated with locust bean gum for 8 weeks, doses up to 500 mg/kg bw per day were well tolerated without side effects. The present re-evaluation includes the use of locust bean gum (E 410) in foods for infants from 12 weeks of age onwards and for young children. Concerning uses of locust bean gum in food for infants and young children, the Panel concurs with the SCF (SCF, 2003): '. . . the SCF reaffirmed its earlier view that it is not persuaded that it is necessary to give thickened infant formulae to infants in good health, and that the information available on the potential effects on the bioavailability of dietary nutrients and growth in young infants is not conclusive (SCF, 1999). It is therefore recommended that the use of locust bean gums should not be acceptable for use in infant formulae' and 'The SCF recommended maintaining the current maximum level of the use of locust bean gums in follow-on formulae of 1 g/L. The Committee further recommended maintaining the concept that if more than one of the three substances locust bean gum, guar gum or carrageenan are added to a follow-on formula, the maximum level established for each of those substances is lowered with that relative part as is present of the other substances together' and 'The SCF accepted that there is a case of need for use of locust bean gums in dietary foods for special medical purposes for therapeutic use in a small number of infants with gastro-oesophageal reflux disease under medical supervision, and the Committee considered its use in these products up to a maximum level of 10 g/L acceptable'. The Panel endorsed the conclusions of the SCF, which are reflected in the current regulation for food category 13.1.2. The Panel acknowledged that consumption of the concerned food categories would be short and noted that it is prudent to keep the number of additives used in foods for infants and young children to the minimum necessary and that there should be strong evidence of need as well as safety before additives can be regarded as acceptable for use in infant formulae and foods for infants and young children. The Panel noted reports suggesting a putative effect of locust bean gum to decrease the bioavailability of certain nutrients; however, a human study did not confirm this effect. For the specific group of infants of more than 12 weeks of age, 1 the Panel considered a case of sensitivity and reports on undesirable gastrointestinal effects, such as diarrhoea, frequent loose stools and flatulence, associated with the use of locust bean gum in products for the reduction of GOR.
Furthermore, the Panel noted that no specific clinical data addressing the safety of use of locust bean gum (E 410) in 'dietary foods for infants for special medical purposes and special formulae for infants' (food category 13.1.5.1) and in 'dietary foods for babies and young children for special medical purposes as defined in Directive 1999/21/EC' (food category 13.1.5.2) at the defined maximum use levels were available to the Panel. The Panel also noted that infants and young children consuming these foods may be exposed to a greater extent to locust bean gum (E 410) than their healthy counterparts, because the permitted levels of locust bean gum (E 410) in products for special medical purposes to reduce GOR are 10-fold higher than in follow-on formulae for healthy individuals. The Panel further noted that, given their medical condition, infants and young children consuming foods belonging to these food categories may show a higher susceptibility to the gastrointestinal effects of locust bean gum than their healthy counterparts. Thus, monitoring of any adverse effects, including those in the gastrointestinal system, in infants and young children consuming these foods under medical supervision could be helpful to reduce this uncertainty. According to the conceptual framework for the risk assessment of certain food additives re-evaluated under Commission Regulation (EU) No 257/2010 (EFSA, 2014), the Panel considered that sufficient toxicity data were available in animals, showing no adverse effects at the highest doses tested, up to 7,500 mg/kg bw per day. Therefore, the Panel considered that there is no need to allocate a numerical ADI for locust bean gum (E 410). To assess the dietary exposure to locust bean gum (E 410) from its use as a food additive, the exposure was calculated based on (1) maximum use levels from data provided to EFSA (defined as the maximum level exposure assessment scenario) and (2) reported use levels (defined as the refined exposure assessment scenario). Based on the available data set, the Panel calculated two refined exposure estimates based on different assumptions: a brand-loyal consumer scenario, where it is assumed that the population is exposed over a long period of time to the food additive present at the maximum reported use level for one food category and at the mean reported use level for the remaining food categories; and a non-brand-loyal scenario, where it is assumed that the population is exposed over a long period of time to the food additive present at the mean reported use level in all relevant food categories. The Panel considered that the refined exposure assessment approach resulted in more realistic long-term exposure estimates compared to the maximum level exposure assessment scenario, as it is based on the range of data made available to EFSA by the food industry. In the maximum level exposure assessment scenario, the mean exposure to locust bean gum (E 410) from its use as a food additive ranged from 51.6 mg/kg bw per day in adults to 436.7 mg/kg bw per day in infants. The 95th percentile of exposure to locust bean gum (E 410) ranged from 100.8 mg/kg bw per day for adults to 886.4 mg/kg bw per day in infants (Table 4). The main contributing food categories to the mean exposure estimates for infants and toddlers in this scenario were foods for infants and young children (FCS 13.1), and bread and rolls. For the other population groups, the main contributing food categories were bread and rolls, and fine bakery wares.
The Panel noted that the estimated long-term exposures based on this scenario are very likely conservative, as this scenario assumes that all foods and beverages listed under Annex II to Regulation (EC) No 1333/2008 contain locust bean gum (E 410) as a food additive at the maximum reported use level. From the refined estimated exposure scenario considering only food categories for which direct addition of locust bean gum (E 410) to food is authorised, in the brand-loyal scenario, the mean exposure to locust bean gum (E 410) ranged from 35.7 mg/kg bw per day in adults to 368.9 mg/kg bw per day in infants. The 95th percentile of exposure ranged from 74 mg/kg bw per day for adults to 765.2 mg/kg bw per day in infants. In the non-brand-loyal scenario, the mean exposure to locust bean gum (E 410) from its use as a food additive ranged from 20.7 mg/kg bw per day for adults to 204.3 mg/kg bw per day in infants. The 95th percentile of exposure ranged from 38.8 mg/kg bw per day for the elderly to 415.4 mg/kg bw per day in infants. The main contributing food categories for all population groups were foods for infants and young children and bread and rolls in both scenarios. A refined estimated exposure assessment scenario taking into account the foods for special medical purposes for infants and young children (FCS 13.1.5, 'Dietary foods for infants and young children for special medical purposes as defined by Commission Directive 1999/21/EC and special formulae for infants') was also performed to estimate exposure for infants and toddlers who may be on a specific diet. Considering that this diet is required due to specific needs, it is assumed that consumers are loyal to the food brand, and therefore only the refined brand-loyal estimated exposure scenario was performed. From this refined brand-loyal estimated exposure scenario taking into account the foods for special medical purposes, the mean exposure to locust bean gum (E 410) from its use as a food additive ranged between 179 and 553 mg/kg bw per day for infants and between 135 and 245 mg/kg bw per day for toddlers. The 95th percentile of exposure ranged between 435 and 1,555 mg/kg bw per day for infants and between 262 and 578 mg/kg bw per day for toddlers. These estimates are based on 59 out of 84 food categories in which locust bean gum (E 410) is authorised. The main food categories, in terms of amount consumed, not taken into account were breakfast cereals, gluten-free dietary foods for infants and young children, snacks and some alcoholic beverages (cider and perry, spirit drinks . . .). However, based on the information in the Mintel GNPD (Appendix C), in the EU market, no breakfast cereals are labelled with locust bean gum (E 410), and few alcoholic drinks are labelled with the additive. Therefore, the Panel considered that the uncertainties identified would, in general, result in an overestimation of the exposure to locust bean gum (E 410) as a food additive according to Annex II in European countries for all scenarios. Locust bean gum (E 410) is used as a thickener and stabiliser in a wide range of foods. In specific populations consuming foods for special medical purposes and special formulae, and in infants and young children consuming follow-on formulae, brand loyalty may be relevant. Because these food groups are main contributors to mean exposure, the Panel selected the brand-loyal refined scenario as the most relevant exposure scenario for this additive in these specific situations.
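To make the two refined scenarios described above easier to follow, here is a minimal sketch of how a brand-loyal and a non-brand-loyal estimate can be derived from per-category food consumption and reported use levels. All figures, category names and the function name are hypothetical placeholders for illustration only, not data from this opinion; a real assessment would compute mean and 95th percentile estimates across individuals in consumption surveys rather than from a single consumer profile.

```python
# Illustrative sketch only: hypothetical consumption and use-level data,
# not figures from this opinion.

# Consumption per food category, in g food/kg bw per day (hypothetical).
consumption_g_per_kg_bw = {
    "follow-on formula": 100.0,
    "bread and rolls": 2.0,
    "fine bakery wares": 1.5,
}

# Reported use levels of the additive, in mg additive/kg food (hypothetical).
use_levels_mg_per_kg_food = {
    "follow-on formula": {"mean": 800.0, "max": 1000.0},
    "bread and rolls": {"mean": 1500.0, "max": 2500.0},
    "fine bakery wares": {"mean": 1200.0, "max": 2000.0},
}

def exposure_mg_per_kg_bw(scenario):
    """Total exposure in mg additive/kg bw per day for one consumer profile.

    'max'            : maximum reported use level in every category
                       (maximum level exposure assessment scenario).
    'brand_loyal'    : maximum reported use level for the single category
                       contributing most, mean level elsewhere.
    'non_brand_loyal': mean reported use level in every category.
    """
    def contribution(cat, level):
        grams = consumption_g_per_kg_bw[cat]
        mg_per_kg_food = use_levels_mg_per_kg_food[cat][level]
        return grams / 1000.0 * mg_per_kg_food  # convert g food to kg food

    if scenario == "max":
        return sum(contribution(c, "max") for c in consumption_g_per_kg_bw)
    if scenario == "non_brand_loyal":
        return sum(contribution(c, "mean") for c in consumption_g_per_kg_bw)
    if scenario == "brand_loyal":
        # Assume loyalty (maximum use level) only for the category with the
        # largest mean contribution to exposure.
        loyal = max(consumption_g_per_kg_bw,
                    key=lambda c: contribution(c, "mean"))
        return sum(contribution(c, "max" if c == loyal else "mean")
                   for c in consumption_g_per_kg_bw)
    raise ValueError(f"unknown scenario: {scenario}")

for s in ("max", "brand_loyal", "non_brand_loyal"):
    print(f"{s:>16}: {exposure_mg_per_kg_bw(s):6.1f} mg/kg bw per day")
```

Under these toy numbers the brand-loyal estimate falls between the maximum-level and non-brand-loyal estimates, mirroring the ordering of the exposure figures reported above.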
The Panel noted that in Annex II of Regulation (EC) No 1333/2008 use levels of locust bean gum (E 410) in food for infants under the age of 12 weeks are included in categories 13.1.1, 13.1.5.1 and 13.1.5.2. The Panel considered that these uses would require a specific risk assessment in line with the recommendations given by JECFA (1978) and the SCF (1998) and endorsed by the Panel (EFSA ANS Panel, 2012). Therefore, the current re-evaluation of locust bean gum (E 410) as a food additive is not considered to be applicable for infants under the age of 12 weeks; the assessment for this age group will be performed separately. General population Following the conceptual framework for the risk assessment of certain food additives re-evaluated under Commission Regulation (EU) No 257/2010 (EFSA, 2014), and given that: • adequate exposure data were available; in the general population, the highest refined exposure estimates based on the reported data from the food industry were for infants (12 weeks-11 months), up to 765 mg/kg bw per day (brand-loyal scenario), while these estimates were lower than 435 mg/kg bw per day in the other population groups; • locust bean gum is practically undigested, not absorbed intact, but significantly fermented by enteric bacteria in humans; • adequate toxicity data were available; • no adverse effects were reported in 90-day toxicity studies in rodents at the highest doses tested (20,000 mg locust bean gum/kg bw per day in mice, and 4,500 or 9,000 mg locust bean gum/kg bw per day in rats); • there is no concern with respect to the genotoxicity of locust bean gum; • no carcinogenic effects were reported in carcinogenicity studies in rodents at the highest doses tested (up to 7,500 mg locust bean gum/kg bw per day in mice and 2,500 mg locust bean gum/kg bw per day in rats); • oral intake of locust bean gum at doses up to 500 mg/kg bw per day for 8 weeks was tolerated in adolescents and adults without significant side effects, and although no information was available for higher doses in humans, no effects were observed in animals at doses up to 10-fold higher; the Panel concluded that there is no need for a numerical ADI for locust bean gum (E 410), and that there is no safety concern for the general population at the refined exposure assessment for the reported uses of locust bean gum (E 410) as a food additive. 4.2.
Infants and young children consuming foods for special medical purposes and special formulae Concerning the use of locust bean gum (E 410) in 'dietary foods for special medical purposes and special formulae for infants' (Food category 13.1.5.1) and in 'dietary foods for babies and young children for special medical purposes as defined in Directive 1999/21/EC' (Food category 13.1.5.2), and given that: • for populations consuming foods for special medical purposes and special formulae, the 95th percentile of refined exposure assessments calculated based on the reported data from food industry are for infants (12 weeks -11 months) up to 1,555 mg/kg bw per day (brand-loyal scenario); • infants and young children consuming foods belonging to these food categories may show a higher susceptibility to the gastrointestinal effects of locust bean gum than their healthy counterparts due to their underlying medical condition; • no adequate clinical data addressing the safety of these uses of locust bean gum (E 410) in this population under certain medical conditions were available; however, there are case reports indicating undesirable gastrointestinal symptoms in infants taking food products for reduction in GOR which are authorised to contain locust bean gum up to 10-fold higher levels than in follow-on formulae for healthy infants; • no relevant animal studies were available; the Panel concluded that the available data do not allow an adequate assessment of the safety of locust bean gum (E 410) in infants and young children consuming these foods for special medical purposes. Recommendations The Panel recommended that the maximum limits for the impurities of toxic elements (lead, mercury and arsenic) in the EU specification for locust bean gum (E 410) should be revised in order to ensure that locust bean gum (E 410) as a food additive will not be a significant source of exposure to those toxic elements in food in particular for infants and children. The Panel recommended to give separate specifications in the EU regulation for locust bean gum and clarified locust bean gum differing significantly in the protein content. The Panel noted some case reports of hypersensitivity reactions associated with locust bean gum (E 410). The Panel considered that this hypersensitivity might be due to the locust bean gum proteins, and therefore, the Panel recommended that their content should be reduced as much as possible. The Panel recommended to harmonise the microbiological specifications in the EU Regulation for polysaccharidic thickening agents, such as gums, and to include criteria for the absence of Salmonella spp. and E. coli, for TAMC and for TYMC into the EU specifications of locust bean gum (E 410). In view of the synergistic effects on viscosity and gelling with mixtures of locust bean gum (E 410) and carrageenan (E 407), the Panel recommended to include carrageenan (E 407) in the footnote for food category 13.1.4 regulating the combined use of the gums. Concerning the direct use of locust bean gum (E 410) by the consumer in the form of a powder to thicken food, the Panel recommended labelling, including instructions for adequate preparation of the thickened food with sufficient liquid to avoid the risk of possible oesophageal obstruction due to insufficient hydration of the locust bean gum. 
The Panel recommended that additional data should be generated to assess the potential health effects of locust bean gum (E 410) when used in 'dietary foods for infants for special medical purposes and special formulae for infants' (Food category 13.1.5.1) and in 'dietary foods for babies and young children for special medical purposes as defined in Directive 1999/21/EC' (Food category 13.1.5.2).
2019-03-16T13:11:53.482Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "35c9ecf79cc61ace642121d2f11cdc99fd72c098", "oa_license": "CCBYND", "oa_url": "https://efsa.onlinelibrary.wiley.com/doi/pdfdirect/10.2903/j.efsa.2017.4646", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c634cd365079b147c5f63883b244174ca9e258a4", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
270307396
pes2o/s2orc
v3-fos-license
Continuous surveillance of pathogens detects excretion of avian orthoreovirus and parvovirus by several wild waterfowl: possible wild bird reservoirs Migratory wild birds can carry various pathogens, such as influenza A virus, which can spread globally and cause disease outbreaks and epidemics. Continuous epidemiological surveillance of migratory wild birds is of great significance for the early warning, prevention, and control of epidemics. To investigate the pathogen infection status of migratory wild birds in eastern China, fecal samples were collected from wetlands to conduct pathogen surveillance. The results showed that duck orthoreovirus (DRV) and goose parvovirus (GPV) nucleic acids were detected in fecal samples collected from wild ducks, egrets, and swans. Phylogenetic analysis of the amplified viral genes revealed that the isolates were closely related to strains circulating in regions along the East Asian-Australasian (EAA) migratory flyway. The findings of this study have expanded the known host range of the orthoreovirus and parvovirus, and revealed possible virus transmission between wild migratory birds and poultry. INTRODUCTION Wild birds harbor a diverse range of pathogens capable of infecting both humans and animals, while simultaneously contaminating their natural habitats through the secretion and excretion of these pathogens (Shan et al., 2022). Consequently, wild birds are of paramount importance in the propagation of these pathogens. Specifically, migratory wild birds frequently traverse national and even intercontinental boundaries during migration or flight, serving as vectors for long-distance transmission of pathogenic viruses, bacteria, and parasites (Envelope et al., 2021; Zhang et al., 2022). Furthermore, these pathogens undergo genetic evolution and reassortment mutations, leading to the emergence of mutated strains with altered pathogenicity and antigenicity. As a result, livestock and poultry production face significant challenges, along with public health safety (Daszak et al., 2000). It has been confirmed that wild birds are closely linked to the occurrence and spread of various poultry infectious diseases, including avian influenza and Newcastle disease (Ziedler and Hlinak, 1992; Rahman et al., 2018; Verhagen et al., 2021; Graziosi et al., 2022). Migratory wild birds have played a crucial role in the extensive geographical dissemination of avian influenza, with multiple outbreaks being closely associated with viruses carried by these migratory species (Bi et al., 2015; Krauss et al., 2015; Li et al., 2021; Wang et al., 2022; Zhao et al., 2022). The highly pathogenic H5N1, H5N6 and H5N8 viruses carried by migratory birds have caused multiple epidemics in Eurasia and North America (Lee et al., 2015). The introduction of H10 strains of the North American lineage to Asia can be attributed to migratory bird populations (Wang et al., 2022). The acute mortality observed among spotted geese in China's Qinghai Lake wetlands has been linked to the presence of H5N1 virus carried by migratory geese populations (Chen et al., 2005; Bi et al., 2015). Research has found that H3N8 viruses carried by migratory birds in the Shandong region undergo complex genetic reassortment with waterfowl viruses, resulting in distinct genetic and evolutionary characteristics within viral strains (Wang et al., 2023) (Fig. 1).
Unfortunately, apart from the avian influenza virus, limited research has been conducted on the carriage and dynamic evolution of avian pathogens in migratory wild birds. Duck orthoreovirus (DRV) infection causes spleen necrosis in ducks and is widely prevalent in major duck-breeding areas such as Shandong, Jiangsu, Anhui, and Heilongjiang (Wang et al., 2019). As DRV is a multi-segmented RNA virus, replication of the viral genome occurs after the gene segments are sorted and packaged. In mixed infections, genetic reassortment readily leads to the emergence of new progeny strains. Among the twelve proteins encoded by the DRV genome, the sC protein plays a key role in viral pathogenesis, and the gene encoding the sC protein is also the orthoreovirus gene segment most susceptible to mutation under continuous immune selection pressure (Jiang et al., 2021; Yan et al., 2021). Goose parvovirus (GPV) is a member of the family Parvoviridae and the genus Dependoparvovirus. It is a single-stranded DNA virus that often causes growth obstruction in waterfowl, known as stiff birds. As the main component of the nucleocapsid, located on the surface of the virion, VP3 is the main antigenic protein that stimulates the host to produce protective neutralizing antibodies (Chen et al., 2016). DRV and GPV are important pathogens that seriously harm waterfowl farming. Both can be shed through the cloaca, and the fecal-oral route is an important infection pathway (Chen et al., 2016; Wang et al., 2019). No research has yet revealed whether wild migratory waterfowl are involved in the spread of duck reovirus and GPV. In this study, we conducted pathogen surveillance of migratory wild waterfowl in 3 natural bird habitats located in eastern China. These results underscore the potential risk posed by wild birds as carriers of pathogens to the prevention and control measures for poultry diseases in eastern China and beyond, thereby emphasizing the criticality of continuous monitoring. Surveillance Sites and Sampling The surveillance sites were located in the Yellow River Delta wetland, Swan Lake wetland and Weishan Lake wetland, all of which are important habitats for migratory birds along the East Asian-Australasian (EAA) migratory flyway in eastern China. After identifying the species of resident wild birds with binoculars, fresh fecal droppings of wild ducks, swans, and egrets were collected using sterile swabs. The species of wild birds at the different surveillance sites are shown in Table 1. Samples were immediately transferred to Dulbecco's modified Eagle medium supplemented with penicillin and streptomycin. After recording information such as collection site, sample type, and collection procedure, a total of 1,155 samples were collected for further study. Sample Screening and Virus Isolation Viral RNA was extracted from the samples using the RNAprep pure Tissue Kit (TIANGEN, China), and subsequently the FastKing RT Kit (TIANGEN, China) was used to obtain cDNA from the RNA. The presence of DRV and GPV nucleic acids in the samples was assessed by PCR using primers to amplify the sC gene of DRV (forward primer: ATGGATCGCAACGAGGTGATAC, reverse primer: CTAGCCCGTGGCGACGGT, fragment: 966 bp) and the VP3 gene of GPV (forward primer: AAGTCTTTACGGATGACGAGC, reverse primer: TACAAACGGCGTAGGGTGGA, fragment: 702 bp), respectively. The amplification products were analysed by 1.0% agarose gel electrophoresis and visualized by UV after Goldview staining. Virus isolation was attempted in Leghorn male hepatoma (LMH) cells (Wang et al., 2019).
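As a rough illustration of how the expected amplicon sizes quoted above (966 bp for the DRV sC primer pair and 702 bp for the GPV VP3 primer pair) could be checked in silico against a reference sequence, a minimal sketch follows. The reference sequence is a placeholder and the helper function is an illustrative assumption, not part of the study's laboratory workflow; it reports only exact primer matches and ignores degeneracy and mismatches.

```python
# Minimal in-silico PCR check: locate the forward primer and the reverse
# complement of the reverse primer on the plus strand of a reference
# sequence, then report the predicted amplicon length.
# The reference below is a placeholder; a GenBank sC or VP3 record would
# normally be loaded in its place.

COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(seq):
    return seq.translate(COMPLEMENT)[::-1]

def predicted_amplicon_length(reference, fwd, rev):
    """Length of the product delimited by fwd and rev primers,
    or None if either primer (exact match only) is not found."""
    start = reference.find(fwd)
    rev_site = reference.find(reverse_complement(rev))
    if start == -1 or rev_site == -1 or rev_site < start:
        return None
    return rev_site + len(rev) - start  # spans both primer sites

# Primer pairs as reported in the screening protocol above.
DRV_SC_PRIMERS = ("ATGGATCGCAACGAGGTGATAC", "CTAGCCCGTGGCGACGGT")    # ~966 bp
GPV_VP3_PRIMERS = ("AAGTCTTTACGGATGACGAGC", "TACAAACGGCGTAGGGTGGA")  # ~702 bp

reference = "..."  # placeholder: load an sC or VP3 reference sequence here
print("predicted DRV sC amplicon:",
      predicted_amplicon_length(reference, *DRV_SC_PRIMERS), "bp")
```

In practice, the predicted length obtained from a published reference sequence would simply be compared with the band size observed on the agarose gel.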
Sequencing and Phylogenetic Analysis After stable cell cultures were obtained, total RNA was extracted from the viral cell cultures as described above, and the amplicons of the sC and VP3 genes were obtained after reverse transcription and PCR amplification. All the amplicons were purified using a Gel Extraction Kit (OMEGA, USA). Gene sequences were obtained by conventional Sanger sequencing. Sequence similarities between the isolated strains and published sequences were compared using the BLASTN online search program in GenBank (https://blast.ncbi.nlm.nih.gov/Blast.cgi). The phylogenetic trees of the major gene segments of the different virus strains were constructed with the neighbor-joining method in MEGA 7.0, and 1,000 bootstrap replicates were performed to obtain confidence in the groups. Infection and Co-Infection Situation The positive samples were analyzed for mono-infection and co-infection. Among the 399 samples positive for viral nucleic acid, 36 samples were positive for 2 viral genomes at the same time, giving a co-infection rate of 9.0% (36/399). Co-infection was concentrated in swans in the Swan Lake wetland, with a co-infection rate of 21.9% (36/164). In contrast to many surveillance data on domestic poultry, this result seems to imply that mixed infection is not prevalent in wild birds. For the swans in the Swan Lake wetland, mixed infection accounted for 40% (36/90) and 32.7% (36/110) of DRV-positive and GPV-positive samples, respectively. This result suggests that DRV infection may make wild birds more susceptible to other pathogens such as GPV, and is believed to be associated with the role of DRV as an immunosuppressive pathogen. Sequence Comparison and Phylogenetic Analysis Six DRV strains were isolated, of which 5 were from swans and one was derived from a wild duck (Table 2). The sC protein-coding genes of these strains were amplified and analyzed. Sequence homology analysis showed that the sequence identity of the viral genes among different hosts was 98.4 to 100%. The DRV isolates obtained from wild waterfowl clustered closely with the DRV strains circulating in domestic ducks in recent years. Among them, the wild duck isolate SDDY11429 and the swan isolates SDWH10274, SDWH10379, SDWH10383 and SDWH10399 were all in the same evolutionary branch as the strains circulating in Shandong province, with sequence identity values of 99.6 to 99.9%. The swan-origin strain SDWH13161 clustered with strains prevalent in other provinces, with homology of 98.3 to 98.6% (Figure 2). The emergence of gene segments with different origins indicated that there were multiple DRV epidemic strains in the migratory wild waterfowl population in Shandong province, which were closely related not only to the epidemic strains in local domestic ducks, but also to the strains circulating in pass-through areas.
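The tree-building step described in the methods above (neighbor-joining with 1,000 bootstrap replicates in MEGA 7.0) can also be approximated in a scripted workflow. The sketch below uses Biopython as a stand-in for MEGA, with a hypothetical alignment file name and a reduced bootstrap count, so it should be read as an outline of the approach rather than the analysis actually performed in the study.

```python
# Outline of a neighbor-joining tree with bootstrap support, approximating
# (in Biopython) the MEGA 7.0 workflow described above.
# "sC_aligned.fasta" is a hypothetical, pre-aligned input file.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_consensus, majority_consensus

alignment = AlignIO.read("sC_aligned.fasta", "fasta")

# Pairwise p-distances ("identity" model) between the aligned sequences.
calculator = DistanceCalculator("identity")
constructor = DistanceTreeConstructor(calculator, method="nj")

# Neighbor-joining tree from the full alignment.
nj_tree = constructor.build_tree(alignment)

# Majority-rule consensus of bootstrapped NJ trees
# (100 replicates here for speed; the study used 1,000).
consensus_tree = bootstrap_consensus(alignment, 100, constructor, majority_consensus)

Phylo.draw_ascii(nj_tree)
Phylo.draw_ascii(consensus_tree)
```

The same steps apply unchanged to the VP3 alignment; only the input file differs.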
GPV correlation analysis was based on the VP3 gene.6 strains were isolated from wild ducks and swans (involved 2 sites).The homology of VP3 gene among the isolates was 99.3−100%.Phylogenetic analysis showed that SDDY12410 and SDDY12529 were in the same evolutionary branch with the goose-origin strains in Shandong, Jiangsu, and Heilongjiang provinces involved in the EAA migration route.Despite its uniqueness, the SDDY12410 strain is in the same evolutionary lineage as the isolates from these regions.The swan isolates were clustered with the novel duck-derived GPV strains isolated from domestic ducks in Shandong, and the 3 isolates (SDDY12839, SDWH12992 and SDZZ61) from different sampling areas were highly clustered (Figure 3).These results further confirmed the phenomenon that migratory wild waterfowl populations carry different sources of viruses. DISCUSSION It is well known that wild birds are the main hosts of avian influenza viruses, however, there are few reports on the surveillance of wild birds carrying other major waterfowl viruses.To address this knowledge gap, we investigated the prevalence of DRV and GPV in wild waterfowl migrating to Shandong Province, China for overwintering.Our findings confirm the widespread presence of DRV and GPV in certain migratory bird populations, indicating potential transmission chains between wild migratory waterfowl and domestic poultry. Migratory wild birds are important pathogen carriers, and their contact with resident birds and domestic poultry greatly increases the risk of pathogen transmission and epidemic (Chen et al., 2015, Naggar et al., 2020).However, this risk has not received enough attention because pathogen infections in wild birds are usually inapparent, with low morbidity and mortality.Shandong Province is at the key node of EAA migration route, with vast wetlands and abundant food resources.More than 200 species of birds, including Whooper swans, Pochard and Red duck, overwintered and breed here (Bi et al., 2015).In addition to the conventional 3 north-south migration routes, novel transmission pathways for pathogens may arise in conjunction with new migratory pattern.A virus tracing study based on satellite tracking suggests that there may be a new lateral migration path from east to west (Meng et al., 2019).Therefore, the pathogen carriage of wild birds has a great impact on the prevention and control of poultry epidemics in East China and even the whole country. 
This study revealed that DRV and GPV were widespread in some wild waterflow, but the infection rates of DRV and GPV varied greatly among different surveillance sites and sampling populations, which might be related to the susceptibility of wild birds to different pathogens, or the contact history of sampling populations with other poultry or wild bird populations.At the same time, Eurasian lineage of avian metapneumovirus type C was also found in wigeon migrating and overwintering in Europe, suggesting that migratory birds may participate in the migration and spread of avian metapneumovirus (Giulia et al., 2022).Some wild birds may act as intermediate hosts, serving as a bridge for pathogen transmission between wild birds and domestic poultry.Both high and low pathogenic strains of Newcastle disease virus could be detected in wild birds (Shchelkanov et al., 2006;Bansal et al., 2022), and pheasants and partridges might serve as intermediate host (Ross et al., 2023).Naggar's study found that there might be 2-way spil-over between wild birds and poultry.Three infectious bursal disease virus strains were isolated from wild birds in Egypt.Phylogenetic analysis showed that one of the 3 infectious Bursal virus strains was clustered with virulent strains circulating in local areas, while the other 2 strains were clustered with vaccine strains used in poultry (Naggar et al., 2020).The migration of wild birds is also accompanied by the genetic evolution of viruses.Through the contact of wild birds with domestic poultry along the migration route, gene segments of wild birds carrying viruses are exchanged and rearranged, and new virus variants are evolved, thereby enriching the genetic diversity of viruses.This phenomenon has been widely confirmed in the genetic evolution of avian influenza virus (Shehata et al., 2019).Avian orthoreovirus is highly susceptible to genetic reassortment when it infects the same host or cell.In Germany, the D2533/4/1-10 strain was formed by reassortment of gene segments from 3 different sources, including classical and novel waterfowl orthoreovirus reovirus and another unknown strain (Farkas et al., 2018).Prior studies have also confirmed that the epidemic strains in Eastern China have mutated strains produced by multiple homologous and divergent reassortment (Jiang et al., 2021).We found that wild birds carried DRV-sC gene segments from different origins.Similarly, genetic reassortment also exists in the genetic Figure 2. Phylogenetic analysis based on the sC protein-coding gene of avian orthoreovirus using the Neighbor-Joining method with 1,000 bootstrap replications.The strains obtained in this study were annotated with red triangles, and the red dotted were magnified displays of the evolutionary branch in which the strains were located.evolution of GPV, and avian parvoviruses have the ability to adapt to new host through genetic evolution, such as duck beak atrophy and dwarfism syndrome caused by novel duck-origin GPV infection (Zhu et al., 2014;Chen et al., 2015;Wang et al., 2015).It is necessary to pay attention to whether these gene segments would participate in the genome reassortment of epidemic strains. 
In addition to being involved in the maintenance and transmission of a variety of avian pathogens, wild birds are important hosts for a variety of zoonotic agents.Although there is still a lack of exact and sufficient evidence on the transmission of zoonotic pathogens from wild birds to humans, a variety of pathogens carried by wild birds can cause infections in mammals and humans, including West Nile virus, Japanese encephalitis virus, Salmonella enterica, Chlamydia psittaci, and so on (Murray et al., 2010;Nicole et al., 2012;Lawson et al., 2014;Medrouh et al., 2020;Belo et al., 2023).The H13 subtype avian influenza virus isolated from wild birds in China was able to replicate in both chickens and mice (Sun et al., 2023).As the host of a variety of coronaviruses, wild birds play a role in the cross-species transmission of coronaviruses from birds to mammals (Michelle and Holmes, 2020), and a recent study confirmed that coronaviruses increase the susceptibility of wild birds to avian influenza viruses (Ma et al., 2022).The conclusion of these similar studies gives an indication of the risk of pathogens from wild waterfowl crossing the host barrier to infect poultry or mammals. In conclusion, this study once again highlights the imminent threat and practical challenges posed by pathogens carried by wild birds in the prevention and control of poultry epidemics.However, due to potential sample population overlap, limited sample size, and potential influence of dry fecal samples on detection outcomes, there remains a knowledge gap regarding the infection dynamics of wild birds.Therefore, it is imperative to conduct regular pathogen surveillance in wild bird populations and systematically investigate the genetic evolution and transmission dynamics of pathogens among migratory birds, wild birds, and poultry.These efforts will provide valuable insights for guiding effective measures against poultry epidemics and mitigating public health risks. Figure 1 . Figure 1.Geographic distribution of sampling sites and schematic of migration routes.Note: The colored lines represent the 3 migratory flyways of wild birds in China, and the position indicated by the arrows were the surveillance sites of this study. Figure 3 . Figure3.Phylogenetic tree constructed based on the VP3 gene of avian parvovirus.The strains obtained in this study were annotated with red triangles, and the red dotted were magnified displays of the evolutionary branch in which the strains were located. Table 1 . Statistical results of samples from different populations. Table 2 . Details of the virus strains isolated in this study.
2024-06-07T15:17:18.928Z
2024-06-01T00:00:00.000
{ "year": 2024, "sha1": "3af5e4f5f66191ae5ecf5570a127327dfa24bfbe", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.psj.2024.103940", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ecdba1126777cb53fee8ea9f64c346d3e76a78d5", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [] }
257409625
pes2o/s2orc
v3-fos-license
Proposed Digital Marketing Strategy Using Existing Customer Journey Analysis Case Study: PT SS. PT SS is an Indonesian electricity provider that has been in operation for over 25 years. PT SS's primary clients are in the utilities industry. In 2018, the government implemented a regulation that decreased the company's revenue. In order to boost revenue, market penetration must be increased. With digital marketing, the objective is to expand the market to non-utility industries. Currently, the internet serves as a medium for discovering information anywhere and for connecting individuals with one another. In addition to this, it can also be utilized as a marketing medium. Digital marketing is a form of business in which a person or company engages in internet-based marketing. Both B2C and B2B companies can utilize digital marketing media to promote their products. This paper provides an overview of PT SS's issues, specifically unsuccessful digital marketing. The analysis is conducted using internal analysis, namely the Marketing Mix and STP, and external analysis, including PESTLE, Five Porter Analysis, and competitor analysis, followed by a SWOT analysis. Then, customer journey analysis is utilized to determine the consumer's journey when dealing with PT SS via the Internet. To create a digital marketing plan, the writers use the RACE framework to develop marketing strategies for new and existing PT SS customers. Utilizing this approach, the right use of social media and digital marketing methods to handle PT SS's issues is determined. INTRODUCTION Indonesia is the fourth most populous country in the world with a population of 275 million, which is expected to surpass 300 million by 2035. Population growth will increase the need for electricity, which the Indonesian government must meet. With Presidential Regulation of the Republic of Indonesia Number 4 of 2016 on the Acceleration of Electricity Infrastructure Development, the government began a 35,000 MW power plant development program to fulfill the nation's electricity needs (Indonesia). This program is a breath of fresh air for business owners who are active in the power generation and equipment sector. The next step is to map the online customer journey. The online customer journey map is a graphical representation of the stages and touchpoints a consumer encounters when interacting with a company or brand via online channels. By establishing an online customer journey map, businesses can improve their online services and overall consumer experience. The RACE framework was utilized to derive digital marketing strategies from the customer journey steps. This framework is expected to build awareness among potential customers and maintain relationships with existing customers. METHOD This research used a qualitative research method. Qualitative data refers to all non-numeric data, or data that have not been quantified, and can be a product of all research strategies [3]. This methodology converts the environment into a collection of representations, including fieldnotes, interviews, conversations, images, recordings, and memos to the self. At this point, qualitative research adopts a naturalistic and interpretative worldview. This research uses a case study approach. The case study approach provides a comprehensive description and explanation of several aspects of a person, group, organization (community), program, or social situation [4]. This method is used to obtain a thorough and comprehensive description of an entity.
Case studies generate data for theoretical investigation. Utilizing methods such as observation and interviews, the author intends to collect as much data as possible regarding the topic under investigation. Unfortunately, this is limited to the issue and is not overly comprehensive. To facilitate the research process for this final project, the writers require factual information from informants in order to answer the research question. Data is collected by using interview-based approach. The interview was conducted with marketing team from PT SS. Interviews were conducted during exploratory business issues, understand deeper company conditions for internal analysis, and formulate the right marketing strategy based on the problems that have been formulated. RESULT AND DISCUSSION Internal Analysis STP Analysis A well-known strategic approach is the STP (Segmentation, Targeting, Positioning) marketing model. With a focus on commercial effectiveness, STP marketing develops a marketing mix and product positioning strategy for each of a company's most lucrative market segments [5]. Segmentation PT SS markets B2B. Market B2B is business-to-business. Sales of business products or services to other businesses. PT SS divides its market into utility and non-utility. Large power plants, electric utilities, and power distribution corporations comprise the utilities Positioning In the competitive power industry, the company knows it must stand out. PT SS does this by acting as an electricity consultant and product provider. This shows that the company extends beyond selling power generators and equipment and advises clients on how to optimise electricity utilization. PT SS provides advice and end-to-end services. This means the company provides initial advice, equipment installation, and maintenance. PT SS's USP is a one-stop shop for all power needs. Our end-to-end service strategy is best for mining, real estate, manufacturing, and industrial customers, who need a lot of help to keep their power solutions running properly. 7P Marketing Mix Kotler's marketing tool, the marketing mix model, also known as the 4Ps (Product, Promotion, Price, and Place), is used to achieve marketing goals and support the implementation of marketing strategies. The four pillars of marketing mix were created for companies that sell products rather than services [6]. People, tangible proof, and process were later added to the original three. The Service Marketing Mix 7Ps are therefore thought of as an assessment of competitive advantage [7]. Marketing Mix Analysis also joined industry groups and organizations to build relationships with their target market. To promote the company and extend the network, coffee mornings, small gatherings, and workshops were held. They can network with potential customers and stay current on industry trends. Digital marketing and networking help PT SS reach their target market. Price PT SS prices based on consumer demands. Unlike most of their competitors, they have variable pricing. Instead, they tailor prices to each customer's needs. They may tailor their pricing to each client's demands, ensuring the best value. By adjusting rates to consumer needs, PT SS may build long-term, mutually beneficial relationships with clients and provide the best value for their money. This sets them apart from their competitors and shows they're dedicated to client satisfaction. Place PT SS office is located on Jl. Raya Cilandak KKO Jakarta. 
PT SS operational activities cover the entire territory of Indonesia, where its projects are spread throughout the archipelago. Apart from the head office in Jakarta, Sewatama also has 5 representative offices and 4 depot stations. People PT SS has 626 employees in the company. The employees are divided into two part, who works in office and technicians. All technicians in PT SS is well trained and certified. The HR management in the company is carried out in an integrated manner with the management of the Company's business and other resources. Process Service option : end-to-end service. The company service its clients from the beginning until the end of the process and include the maintance if needed. Physical Evidence Offline office and online website External Analysis PESTLE Analysis PESTEL analysis is a tool for assessing and evaluating the marketing environment's external elements. PESTEL analysis is often used to evaluate a company's external environment [8]. A PESTEL analysis is conducted by examining a company's political, economic, social, technical, environmental, and legal impacts. Diesel use in the electricity sector is reduced in the RUPTL 2018-2027. The plan aims to expand hydro, geothermal, solar, and wind power in the national energy mix. Diesel-fired power plants, utilized for backup or peaking power, would be less needed. Threat New carbon trading legislation will help Indonesia fulfill its 2030 greenhouse gas reduction goals. President Joko Widodo signed "the Economic Value of Carbon" law before COP26 in Glasgow, according to an undisclosed document. Emission levels will be capped and domestic and international enterprises can trade allowances in the carbon trade. Authorities expect a fully functional carbon market by 2025, but coal-fired power facilities that exceed the emission cap will pay 30,000 rupiah ($2.09) per tonne of CO2e in April 2019. Opportunity Economical There is a strong correlation between Gross Domestic Product (GDP) and electricity demand in Indonesia. As the economy grows, so does the demand for electricity. This is because as the economy expands, more businesses and industries are established, leading to an increase in energy consumption from manufacturing, transportation, and other activities. Social The number of internet users in Indonesia is increasing yearly, along with the country's information technology infrastructure expansion and government initiatives to provide rural areas with access to the internet. Social media can promote and distribute government policies and programs and iterate and absorb public demands to establish mutual understanding for common Threat Technology Artificial intelligence (AI) is one such technology that may be used to deliver personalized recommendations, automate customer support, and analyze customer data to improve company decision-making. Another applicable technique is Data analytics. Insights into customer behavior, preferences, and usage patterns can be gained through the application of data analytics. This data can be utilized to create customised offers and services that better fit customers' needs. Opportunity Legal The work agreement for temporary employees (non-permanent employees), outsourcing, working hours, rest periods, and termination of employment are all governed by PP 35/2021, which may have an impact on the minimum benefits that must be provided to employees. 
Five Porter Analysis Porter's faramework consist of the five major forces of new entrants, bargaining power of suppliers, bargaining power of buyers, threat of substitute products or services and rivalry among competitors. The state of defined sub forces determines each force's strength and, thus, its level of threat, whereas the force's combined strength determines the industry's final profit potential [9]. Threat of substitute product or service When evaluating their market position, companies must take the threat of substitutes into account. This refers to the likelihood that consumers will select competing products or services that offer comparable benefits or capabilities. The greater the threat of substitutes, the more difficult it is for companies to maintain market share. In certain industries, such as the fuel industry, the product itself may be viewed as a substitute, rendering the threat of substitutes relatively low. Nonetheless, even in low-force industries, businesses must maintain vigilance and continue to differentiate their products in order to preserve their competitive advantage. Moreover, in industries with limited product differentiation, such as some commodity markets, it can be difficult for customers to switch to alternative products, reducing the threat of substitutes. Ultimately, it is essential for businesses to evaluate the threat posed by substitutes in their industry and take the necessary precautions to mitigate potential risks. Threat of New Entrants The threat of new entrants is an essential factor for businesses to consider when evaluating their competitive landscape. This refers to the probability that new competitors will enter the market and possibly disrupt existing participants. Due to the high entry barriers, the threat of new entrants is minimal in the power industries. These include substantial capital requirements for equipment, maintenance, and certified employees, which make it difficult for new players to enter the market. In addition, the government plays a significant role in determining product consumption and pricing within the power industry. Moreover, client brand loyalty is typically high in the power industry because customers tend to place large orders and sign long-term contracts with established companies. All of these factors contribute to the power industry's low vulnerability to new entrants. Overall, businesses must assess the threat posed by new entrants in their industry and take the necessary steps to preserve their competitive advantage. Bargaining Power of Suppliers The bargaining power of suppliers is their ability to influence the prices of their products and services. The greater the bargaining power of suppliers, the more pricing discretion they have. There are only a few suppliers for the power rental industry on the equipment market. Under the regulation of the Minister of Energy and Mineral Resources, there are fuel utilization policies for gas providers. Since the products in this industry are is not varied, so is the uniqueness of each supplier. These factors result in a low bargaining position for suppliers. Bargaining Power of Customer The purchasers' bargaining power refers to their ability to negotiate lower prices or more favorable terms with the companies with which they do business. The greater the purchasing power of consumers, the greater their market influence. Electricity is the essential product for all businesses. 
Consequently, the number of consumers in this industry is substantial due to the fact that power is required to meet electricity demands. In addition, a large quantity is typically required for a single order on the power leasing market due to the need to fulfill significant electrical demand or supply reserve power. The ability of purchasers to substitute is moderate due to transferring expenses and contractual obligations. According to the analysis, customer bargaining power is high. Rivalry Among Competitors Intensity of competitive rivalry refers to the degree of competition within an industry. Competition reduces profit margins and increases the pressure on businesses to differentiate themselves from their competitors. Annual increases in power consumption indicate that the industry is still expanding. In addition, the government has a plan to use renewable energy as the primary resource. Since the industry is expanding, the competitiveness between competitors is primarily high as a result. The product quality offered by competitors has become a major factor in the competition between them, as each company may have its own distinctive quality. Consequently, competition within this industry is high. For the five porter analysis conclusion can be seen in table below : Implication Low Low Low High High Competitor Analysis Competitor analysis is the management tool used in strategic management to evaluate the strengths and weaknesses of present and potential competitors; thus, it is a more specific name for competitive analysis [10].The organization conducts a competitor analysis to evaluate or analyze its standing among competitors by collecting and evaluating competitors' information to determine a business position and provide a business owner with a more realistic image of the market and the organization's position. Moreover, according to [11], at the most specific level, a company's competitors are other businesses that offer comparable products and services to the same clients at comparable pricing. The competitor analysis in this section will compare PT SS with it is competitors who have similar concet and products in the local compepitive scope to know and understand the position of PT SS with close competitors. Based on the information from PT SS, the closest competitors that offer similar concepts and products is Aggreko. Focus on generator and gas diesel More wide range, focus on power generation, temperature control, and oil-free air solutions Price No fixed price based on needs, place, size, location, duration, and type of project. Using cost effective strategy which the price is lower than competitor. Using bundling package to offer affordable price. Price is higher than the competiors because offers more features. Has a fixed price and has a bundling package. Place Mainly in Southeast Asia. Only has two markeint channels, offline and online. Operates over 180 countries worldwide. Has two marketing channels, online where its selss directly to customers, and resellers. Focus on promoting its products and services, building strong brand reputation and providing value-added services. Using digital marketing such as social media and website People Experienced and skilled staffs Experienced and skilled staffs Process Service option : end-to-end service Has online delivery process Physical Evidence Offline office and online website Offline office and online website. 
SWOT Analysis SWOT is a framework for analyzing a company's internal and external opportunities and threats, as well as its internal and external strengths and weaknesses. According to Thompson (2007), SWOT analysis is an easy yet effective strategy for evaluating a company's resource strengths and weaknesses, market opportunities, and prospective external threats. Table 5. SWOT Analysis Helpful to achievhing the objectives Harmful to achieving the objectives Internal Factors Strength -Has a strong brand reputation because of its services and long journey experience in power rental industry. -Having an end-to-end and complex service that satisfied customers. Weakness -Lack of digital promotion, low awareness in social media. -Compared to its competitor, the product is less variated and the market is not large enough. External Factors Opportunitties -Digital marketing growth can be an opportunities to use as an effective marketing tools. -Economic growth. As the economic condition is growing, the demand of electricity is automatically grows. -Newest digital technology that helps to improve customers services and backup data. Customer Journey Analysis Customer journeys should be analyzed to depict the touchpoints where customers may interact with the companies [12]. The customer journey is essential to understand because of the increasing complexity in providing services that customers want. The digital customer journey combines all of the digital touchpoints a customer has with a brand and aggregates data collected such as: basic online consumer data, information about transactions, browsing history on all devices, and online customer service interactions. Brands may use digital customer journey mapping to design a communication strategy that engages customers in a conversation. Following the digital customer journeys of the brand helps the brand see current and planned client journeys as well as important touch points across various marketing channels. pragmatists or the early majority market. Content marketing recommendations will vary based on the type of social media employed, as each social media has a unique approach to the target audience. Linkedin content : PT SS should build a detailed corporate page that explains its past, purpose, services, and people to gain credibility and client faith. Sharing business news, trends, and developments will also show PT SS's expertise and thought leadership. Employee narratives about PT SS workers' views and experiences can humanize the brand and show fans its culture. Webinars and panel talks will let PT SS engage with the business and show its expertise. This will allow the firm to interact with potential clients, answer their questions, and become an industry thought leader. Facebook content : LinkedIn and Facebook content promotion have some similarities, but Facebook has unique perks. By following the above suggestions, PT SS can use Facebook to reach and engage their target audience. First, they can discuss PT SS's recent projects, initiatives, and successes to inform the audience on the company's growth and success. Second, they can share industry news and trends in papers and studies to show their power industry knowledge and thought leadership. Thirdly, interactive power industry polls can improve audience involvement and gather useful views and passions. Facebook's ability to integrate video content, which engages viewers, is a benefit. Video content engages audiences best, according to sproutsocial.com. 
Twitter content : Twitter is a rapid and dynamic network that PT SS can use to promote its content effectively. For PT SS to maximize Twitter's features, the following content marketing recommendations are provided. Produce concise updates regarding industry developments and PT SS's initiatives to maintain the interest of the audience. Utilize Twitter's direct engagement tools, such as "Spaces," to communicate directly with followers and discuss prospective and current clients. Clients will receive prompt responses to their inquiries and concerns if you provide Twitter-based customer service. Creating a hashtag specific to an industry for initiatives or events will increase their visibility and interaction. Consider using #yourelectricitysolutions or #leadingpowerrental for PT SS. PT SS will be able to utilize Twitter for content marketing and strengthen its relationships with its target audience by implementing these strategies. Convert -Personalized Email Marketing Customers who have already subscribed to PT SS's mailing list can be engaged through the use of tailored email marketing. Delivering customized and relevant messages to each recipient is an excellent strategy to enhance sales and customer retention with personalized email marketing. This approach may include ideas and estimations for PT SS's products as an effective component. This individualized strategy enables PT SS to cultivate relationships with its consumers, deliver relevant information, and eventually drive business expansion. Through the use of personalised and targeted messaging, PT SS can stand out in customers' inboxes and enhance engagement and conversions. Engage -Re-engage email program Customers of PT SS can be re-engaged with an efficient email re-engagement program. These are some suggestions that can be implemented for this program: Surveys regarding goods and services and request for testimonial. By implementing these recommendations for an email re-engagement program, PT SS can revitalize ties with inactive customers, collect useful feedback and testimonials, and ultimately enhance customer retention and promote business growth. An efficient email reengagement program can assist you retain solid customer relationships and drive long-term profitability. -CRM Omnichannel A digital CRM omnichannel is a holistic strategy to managing customer interactions and relationships across different digital channels. A CRM that integrates customer data and interactions across numerous channels, including as email, social media, chatbots, and mobile apps, to deliver a unified view of each customer. This strategy enables businesses to provide a consistent and tailored customer experience regardless of the channel via which the customer interacts with the business. Additionally, an omnichannel CRM delivers significant insights into consumer preferences and behavior, enabling businesses to better understand and address their customers' demands. By utilizing an omnichannel strategy, businesses may increase customer engagement and happiness, boost sales and revenue, and foster sustainable growth.
2023-03-09T16:18:01.673Z
2023-03-07T00:00:00.000
{ "year": 2023, "sha1": "6cd00e0f219809335464972e18d1ba2e68babeae", "oa_license": "CCBY", "oa_url": "https://ijcsrr.org/wp-content/uploads/2023/03/07-07-2023.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "64b52e2297bc298da2d34e70dc74f4d331e6a115", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [] }
237594572
pes2o/s2orc
v3-fos-license
Identification of an ATP/P2X7/mast cell pathway mediating ozone-induced bronchial hyperresponsiveness Ozone is a highly reactive environmental pollutant with well-recognized adverse effects on lung health. Bronchial hyperresponsiveness (BHR) is one consequence of ozone exposure, particularly for individuals with underlying lung disease. Our data demonstrated that ozone induced substantial ATP release from human airway epithelia in vitro and into the airways of mice in vivo and that ATP served as a potent inducer of mast cell degranulation and BHR, acting through P2X7 receptors on mast cells. Both mast cell–deficient and P2X7 receptor–deficient (P2X7–/–) mice demonstrated markedly attenuated BHR to ozone. Reconstitution of mast cell–deficient mice with WT mast cells and P2X7–/– mast cells restored ozone-induced BHR. Despite equal numbers of mast cells in reconstituted mouse lungs, mice reconstituted with P2X7–/– mast cells demonstrated significantly less robust BHR than mice reconstituted with WT mast cells. These results support a model where P2X7 on mast cells and other cell types contribute to ozone-induced BHR. Introduction Ozone is a powerful, nonphysiological oxidant produced in major urban areas throughout the world when primary pollutants (nitrogen oxides and volatile organics) from motor vehicle emissions react with sunlight (1).In summer months, ozone levels often exceed the National Ambient Air Quality Standard, contributing to increased morbidity and mortality in patients with cardiovascular and respiratory diseases (2).Long-term ozone exposure is also associated with increased mortality in people with such diseases (3,4). Immediate effects of ozone substantially affect the respiratory tract.Inhaled ozone reacts with molecules in the airway surface liquid throughout the upper and lower airways, indirectly stimulating epithelial cells and resulting in well-characterized negative effects on pulmonary function, including acute falls in vital capacity, and more persistent increases in airway resistance and bronchial hyperresponsiveness (BHR) (5)(6)(7).These adverse effects on lung physiology occur in individuals without atopic disease and in patients with asthma or chronic obstructive pulmonary disease, potentially resulting in symptom exacerbation and hospitalization for individuals with compromised pulmonary function at baseline. 
The mechanisms by which ozone increases airway resistance and induces BHR remain poorly understood.The effect appears independent of ozone-induced neutrophilia but dependent upon products of arachidonic acid metabolism (8,9).Since ozone exposure results in an increase in mast cell numbers in the bronchial submucosa (10), and mast cells are major producers of arachidonic acid metabolites and other proinflammatory mediators, we hypothesized mast cell activation may be responsible for ozone-induced BHR.In this report, we describe a pathway involving mast cells, ATP, and P2X7 receptors regulating ozone-induced BHR in mice. Results Ozone increases peripheral airway resistance and induces BHR in mice.To determine whether mice develop changes in airway physiology that are similar to humans after ozone exposure, we exposed WT C57BL/6 mice to filtered air or ozone (2 ppm) for 3 hours and measured airway mechanics at baseline and during bronchial challenge with aerosolized methacholine.Overall resistance increased in the murine lung in response to ozone (Figure 1A).A greater effect was seen in peripheral airways (Figure 1B) than in central airways (Figure 1C), similar to what has been reported in humans (6).Overall, the effect of ozone on airway resistance was small and highly variable between mice.In contrast, ozone more consistently induced marked BHR in response to methacholine (Figure 1D).Ozone had no effect on static or dynamic lung compliance (Figure 1, E and F).Collectively, these results showed that ozone produced similar physiological effects in the mouse airway as in humans, supporting the use of this model organism for this investigation. Mast cells mediate ozone-induced increases in BHR.
To test the hypothesis that ozone-induced changes in BHR are mediated by mast cells, we measured the ability of ozone to increase airway resistance in response to methacholine in C57BL/6Kit W-sh/W-sh mast cell-deficient mice.Ozone-exposed WT mice demonstrated a significant increase in airway resistance after aerosolized methacholine compared with air-treated controls (Figure 2A).Ozone-exposed mast cell-deficient mice demonstrated more modest increases in resistance after aerosolized methacholine but a statistically significant greater response than air-exposed mast celldeficient mice.These findings showed mast cells contributed to ozone-induced BHR in mice but also suggest the existence of a concomitant mast cell-independent process. To demonstrate that ozone activates airway mast cells, histamine was measured in bronchoalveolar lavage fluid (BALF) of WT mice immediately after exposure to ozone.Histamine levels in the airway of ozone-exposed mice were greater than levels measured in air-exposed controls (Figure 2B). Next, to test whether ozone could directly activate mast cells, human mast cells derived from umbilical cord blood mast cells (CBMCs) and murine bone marrow-derived mast cells (BMMCs) were exposed to ozone or air, and histamine levels in the culture media were measured immediately after exposure.Histamine levels in air-exposed and ozone-exposed CBMCs and BMMCs were similar, indicating ozone did not directly activate mast cells (data not shown).A possible explanation for the discrepancy between our in vivo and in vitro findings is that mast cell activation after ozone exposure occurs indirectly by an epithelial cell-derived mediator(s).To test this hypothesis, the apical side of human bronchial epithelial (HBE) cells cultured at air-liquid interface (ALI) were exposed to ozone or air in the presence of cocultured mast cells in the basolateral compartment (Figure 2C).Greater histamine release was observed from ozone-exposed CBMCs incubated with HBE cells than similar cultures instead exposed to air (Figure 2D).Collectively, these experiments demonstrated that mast cell activation after ozone exposure occurred through an intermediary signaling molecule released by epithelia in response to ozone, and ozone-induced BHR in mice was largely mast cell dependent. 
Ozone stimulates purine release by airway epithelia.ATP is an alarmin released from epithelia in response to numerous stimuli (11).Since adenine nucleotides/sides activate mast cells in vitro and in vivo (12)(13)(14)(15), we hypothesized ATP and/or its metabolites may be the critical intermediary signaling molecule(s) responsible for ozone-induced mast cell activation.To begin to test this hypothesis, we first investigated and established the capacity of ozone to stimulate ATP release into the airways of mice in vivo and from human epithelial cells in vitro.WT mice were exposed to ozone (2 ppm) or filtered air for 3 hours, and ATP was measured in the BALF immediately after exposure using a luciferin-luciferase assay.ATP levels were 0.25 ± 0.03 nM in BALF from air-exposed mice, but levels increased to 45.5 ± 3.8 nM in ozone-exposed mice (Figure 3A).Since released ATP is rapidly metabolized by cell-surface ectonucleotidases (16,17), the ATP levels measured above likely underestimate the magnitude of released ATP.Therefore, we performed etheno-derivatization/HPLC analysis to quantify ATP and its catabolites.ATP metabolites were considerably JCI Insight 2021;6(21):e140207 https://doi.org/10.1172/jci.insight.140207more abundant than ATP in control BALF, but ATP, ADP, adenosine, and to a greater extent AMP were all increased in ozone-treated mouse BALF (Figure 3B). Although these results illustrated that ozone exposure promoted nucleotide release into the airway lumen, whether ATP release also occurred toward the basolateral compartment cannot be assessed in vivo.Therefore, we used polarized airway epithelial cell culture models to test this possibility.An HBE cell line (16HBE) and primary HBE cells were cultured at ALI and exposed to ozone on the apical side, and purine levels were measured in the basolateral media.Ozone caused substantial accumulation of purines in the basolateral media in both 16HBE and primary HBE cell cultures (Figure 3, C and D).These data suggest ATP and/or one of its metabolites as a candidate intermediary signaling molecule for submucosal mast cell activation in response to ozone. ATP activates mast cells and induces BHR. 
To determine the potential of ATP to induce mast cell degranulation and affect airway physiology, we performed a series of experiments in mice, murine BMMCs, and human CBMCs.First, mice were treated with nebulized ATP, and mast cell activation and BHR were evaluated.Like the effect of ozone shown in Figure 2, ATP elicited airway mast cell activation (Figure 4A) and BHR to methacholine (Figure 4B) in WT mice but did not induce BHR in mast cell-deficient mice (Figure 4B).Next, ATP-induced mast cell activation was investigated using murine and human mast cells, which showed ATP directly induced degranulation of murine BMMCs and human CBMCs in culture (Figure 4, C and D).These results are likely due to the direct effect of ATP rather than its metabolite adenosine since we have previously found that adenosine alone does not degranulate BMMCs or CBMCs (15).Adenosine can, however, potentiate antigen-induced degranulation of BMMCs and CBMCs via A 3 and A 2B adenosine receptors, respectively (18,19).To further establish the role of ATP in mast cell degranulation and exclude adenosine signaling, we tested the ability of ATP to potentiate antigen-induced CBMC degranulation in the presence of a selective A 2B adenosine receptor antagonist.Although adenosine receptor antagonism dose-dependently decreased the ability of adenosine to modestly potentiate antigen-induced mast cell degranulation, it had no effect on ATP-induced potentiation of degranulation (Figure 4E).Previously, we showed that adenosine-induced BHR in mice is mediated by the A 3 adenosine receptor on mast cells (15). To exclude the possibility that ATP-induced BHR is not due to its metabolite adenosine, we tested the effect of ATP and adenosine in A 3 adenosine receptor-deficient mice.As expected, and consistent with our previous publication (15), adenosine failed to produce BHR in A 3 adenosine receptor-deficient mice.In contrast, ATP produced robust BHR in these mice lacking the A 3 adenosine receptor, indicating catabolism to adenosine is not responsible for ATP-induced BHR (Figure 4F).Collectively, these in vitro and in vivo data support a model in which ozone-induced BHR is mediated by ATP-induced mast cell activation.ATP-induced mast cell activation is mediated by the P2X7 receptor.Extracellular ATP is known to signal through many P2X and P2Y purinergic receptors, variably expressed by different cell types (20).Given that several pharmacological studies have implicated P2X7 as the receptor mediating mast cell degranulation (21,22), we quantitated mast cell activation in response to ATP in BMMCs from WT mice compared with BMMCs from mice genetically deficient in P2X7 and in CBMCs in the presence or absence of the selective P2X7 receptor antagonist A740003.In WT murine BMMCs and human CBMCs, ATP stimulation resulted in mast cell degranulation and lipid mediator production.These effects of ATP were abolished in P2X7 receptor-deficient BMMCs and in human CBMCs incubated with the P2X7 receptor antagonist A740003 (Figure 5, A-D), despite the expression of other ATP-sensing purinergic receptors by these cells (Supplemental Figure 1; supplemental material available ATP-induced BHR is mediated by P2X7 receptors.To determine whether ATP/P2X7 receptor signaling is required for ATP-induced BHR, we treated WT and P2X7 receptor-deficient mice with ATP and measured airway resistance during methacholine challenge.ATP-induced BHR was abolished in P2X7 receptordeficient mice compared with WT controls (Figure 6A).These results suggest that ATP-induced 
BHR is entirely dependent upon ATP activation of P2X7 receptor signaling on mast cells. To rule out the possibility that in vivo mast cell activation by ATP occurs indirectly via an alternative mast cell activator released from another immune cell or structural cell after ATP/P2X7 signaling, we performed reconstitution experiments where mast cell-deficient mice (C57BL/6Kit W-sh/W-sh ) were reconstituted with WT or P2X7 receptor-deficient mast cells to generate animals where P2X7 receptor deficiency was limited to mast cells.Mast cell-deficient mice reconstituted with WT mast cells developed ATP-induced BHR, whereas mice reconstituted with P2X7 receptor-deficient mast cells did not (Figure 6B).These results suggest that ATP triggers mast cell activation directly by stimulating P2X7 receptors on mast cells. Ozone-induced BHR is partially mediated by P2X7 receptors on mast cells.To determine the contribution of P2X7 receptor signaling to ozone-induced BHR, WT and P2X7-deficient mice were exposed to air or ozone, and airway resistance to methacholine was measured 24 hours later.Although ozone-exposed WT mice demonstrated robust BHR to methacholine, the response was markedly attenuated in mice deficient in P2X7 (Figure 7A).Ozone-exposed P2X7-deficient mice showed a modest increase in resistance compared with air-exposed controls, suggesting a small P2X7-independent component to ozone-induced BHR.Next, to establish that P2X7 receptors on mast cells contribute to ozone-induced BHR, we reconstituted Wsh mast cell-deficient mice with WT and P2X7 -/-BMMCs.BHR induced by ozone was restored in Wsh mice reconstituted with WT mast cells.Wsh mice reconstituted with P2X7 -/-mast cells also regained BHR but to a lesser extent than mice reconstituted with WT cells (Figure 7B).These results suggest that ozone-induced BHR is mediated by P2X7 receptors on mast cells as well as other cell types, as illustrated in Figure 7C.To exclude the possibility that P2X7 deficiency on mast cells affected mast cell reconstitution in the lung, causing the attenuation of BHR in P2X7 -/-reconstituted mice, submucosal and parenchymal mast cell numbers were quantified from reconstituted mice.Mast cell numbers and distribution in the lungs of WT and P2X7 -/-mast cell-reconstituted animals were similar (Figure 8).Taken together, these data support a model in which ozone-mediated BHR is mediated by P2X7 receptors on mast cells and other cell types. Discussion In this report, we identified a critical role for mast cells, ATP, and P2X7 receptors in the development of ozone-induced BHR (Figure 7C).Several previous studies have shown a contribution of mast cells to ozone-induced changes in lung biology (23)(24)(25).One of the most comprehensive studies previously investigating the effects of ozone on airway physiology of the mouse was reported by Noviski et al. 
(26).In this study, WBB6F1-Kit W/W-v mast cell-deficient mice and WBB6F1 +/+ controls were exposed to ozone or air and specific airway conductance (GL) and dynamic lung compliance (Cdyn) measured in anesthetized tracheostomized mice at baseline and after challenge with i.v.methacholine.A significant difference in Cdyn values of WT mice versus WBB6F1-Kit W/W-v mast cell-deficient mice was observed 4 hours after 3 ppm ozone exposure, and similar to our studies, no difference in Cdyn was observed after 24 hours.A contributory role of mast cells to ozone-induced BHR was also suggested by the Cdyn response to 1 ppm ozone; however, no contribution was observed at 3 ppm.This discrepancy of reported ozone-induced BHR from the Noviski et al. investigation when compared with our data -wherein a large mast cell contribution to ozone-induced BHR was observed -may be related to differences stemming from the use of i.v.versus aerosolized methacholine or strain-specific differences in the mouse models used.The genetic differences between WBB6F1-Kit W/W-v mice and C57BL/6-Kit W-sh/W-sh mice are substantial.WBB6F1-Kit W/W-v mice have a mutation in the c-Kit gene, are on a mixed genetic background, and are infertile and anemic (27).C57BL/6-Kit W-sh/W-sh mice have an inversion in regulatory elements upstream of c-kit and are on an inbred C57BL6 background (27).Several lines of evidence in humans also support a role for mast cells in mediating the physiological effects of ozone on the airway.Acute ozone exposure causes the release of mast cell-specific mediators into lavageable spaces of the upper and lower airways of human subjects (28,29).After ozone exposure, mast cell numbers in bronchial mucosa increased 2-fold in normal subjects (30), but in asthmatic humans, a 4-fold increase was observed despite treatment with inhaled corticosteroids (10). Mast cell activation by ozone appears to occur indirectly.Peden et al. exposed a rat mast cell line to ozone and found no evidence of mast cell degranulation (31).Consistent with these reports, our data also JCI Insight 2021;6(21):e140207 https://doi.org/10.1172/jci.insight.140207 showed that isolated cultures of human and murine mast cells did not degranulate after direct exposure to ozone; however, degranulation was observed when cocultured with epithelia.These data suggest an epithelial-derived mediator, released by exposure to ozone, triggers mast cell degranulation.Several lines of evidence suggest ATP is such a mediator.Extracellular ATP and its metabolites are well-recognized signaling molecules.Epithelia release ATP via conductive pathways mediated by pannexin-1 channels (32), and via exocytosis of granules storing ATP via vesicular nucleotide transport (33,34).Our data demonstrated substantial ATP release occurred from airway epithelia in vivo and in vitro in response to ozone.Based on the robust ecto-ATPase activity present in airway epithelia (16,17), rapid metabolism of released ATP likely accounted for the relative distribution of purines (AMP >>ADP/ADO>ATP) we observed in extracellular samples.The presence of elevated purine levels on the basolateral side of HBE-ALI cultures supports the hypothesis that ozone-induced epithelial ATP release activates mast cells in the submucosa.Indeed, coculture experiments showing mast cell degranulation after HBE cell exposure to ozone further supports this hypothesis. 
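For reference, the luciferin-luciferase readings quoted in the Results are converted to ATP concentrations by interpolating each sample's luminescence on a standard curve of known ATP dilutions. A minimal sketch of that conversion is given below; every number in it is an invented placeholder except the approximate group means of 0.25 nM (air) and 45.5 nM (ozone) reported above, which the invented readings are chosen to reproduce.

```python
# Hypothetical sketch of the luciferin-luciferase ATP quantification: fit a standard
# curve (luminescence vs. known [ATP]) and interpolate BALF readings. All values are
# invented; only the ~0.25 nM (air) and ~45.5 nM (ozone) group means quoted in the
# Results come from the text, and the invented readings are chosen to land there.
import numpy as np

std_conc = np.array([0.1, 1.0, 10.0, 100.0, 1000.0])            # ATP standards [nM]
std_rlu = np.array([52.0, 498.0, 5100.0, 49800.0, 502000.0])    # luminescence [RLU]

# luminescence is close to linear in [ATP]; fit in log-log space for robustness
slope, intercept = np.polyfit(np.log10(std_conc), np.log10(std_rlu), 1)

def rlu_to_atp(rlu):
    """Interpolate a luminescence reading back to an ATP concentration in nM."""
    return 10 ** ((np.log10(rlu) - intercept) / slope)

air_rlu = np.array([118.0, 131.0, 125.0, 122.0])                 # invented replicates
ozone_rlu = np.array([21800.0, 24100.0, 22600.0, 23500.0])

air_atp, ozone_atp = rlu_to_atp(air_rlu), rlu_to_atp(ozone_rlu)
sem = lambda x: x.std(ddof=1) / np.sqrt(x.size)
print(f"air BALF ATP:   {air_atp.mean():.2f} +/- {sem(air_atp):.2f} nM (mean +/- SEM)")
print(f"ozone BALF ATP: {ozone_atp.mean():.1f} +/- {sem(ozone_atp):.1f} nM")
print(f"fold increase ~ {ozone_atp.mean() / air_atp.mean():.0f}x")
```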
Since ATP released by epithelia is rapidly degraded into adenosine, which activates mast cells (13,14,35), we first hypothesized that adenosine rather than ATP was responsible for mast cell degranulation after ozone exposure.In mice, adenosine activates mast cells in the airway via the A 3 adenosine receptor (36); however, ATP-mediated BHR was robust in mice lacking the A 3 receptor, similar to WT controls.These results suggest that ATP, rather than adenosine, is responsible for ozone-induced mast cell activation.To further substantiate this conclusion, we tested the ability of adenosine and ATP to degranulate human mast cells in vitro.ATP was much more potent than adenosine at triggering mast cell degranulation.To exclude any effects of metabolism of ATP to adenosine, we treated cells with a selective A 2B receptor antagonist, the adenosine receptor implicated in the degranulation of human mast cells, to eliminate adenosine-induced mast cell activation (19).Although adenosine-induced mast cell degranulation was abolished in the presence of A 2B antagonist, ATP-induced degranulation remained intact. Extracellular ATP activates cells through many purinergic receptors.Our in vivo observations showing attenuation of ozone-and ATP-induced BHR in P2X7-deficient mice suggest that the P2X7 receptor on mast cells is involved.Previous pharmacological studies have suggested roles for P2X and P2Y receptors in mediating ATP-induced mast cell activation (37).Although early studies were limited because of lack of specificity of ligands used, more recent studies with selective receptor antagonists have shown evidence for functional P2X1, P2X4, and P2X7 receptors on human lung mast cells and the LAD2 human mast cell line, and similar to our findings with CBMCs, ATP induces degranulation of LAD2 cells via P2X7 (21,38).Our data with mast cells cultured from P2X7 receptor-deficient mice are consistent with pharmacological studies suggesting that P2X7 receptors mediate degranulation of murine BMMCs (39).
Figure 6 caption (fragment): WT and P2X7 -/-mice (females, aged 9-25 weeks) were exposed to aerosolized ATP (50 mg/mL) and challenged with methacholine 30 minutes later.Black circles represent PBS-treated WT mice (n = 4), red circles represent ATP-treated WT mice (n = 16), black triangles represent PBS-treated P2X7 -/-mice (n = 2), and red triangles represent ATP-treated P2X7 -/-mice (n = 4); *P < 0.05 by mixed effects analysis for repeated measures between ATP-treated groups.(B) ATP-induced BHR is mediated by P2X7 receptors on mast cells.C57BL/6-Kit W-sh/W-sh mice (females, aged 4-5 weeks) were reconstituted with either P2X7 -/-or WT BMMCs.After 16 weeks, mice were exposed to aerosolized ATP (50 mg/mL) and challenged with methacholine 30 minutes later.Black diamonds represent mice reconstituted with WT mast cells (n = 7), black triangles represent mice reconstituted with P2X7 -/-mast cells (n = 7), open red squares represent nonreconstituted C57BL/6-Kit W-sh/W-sh controls (n = 2); *P < 0.05 by mixed effects analysis between reconstituted groups.Data are shown as mean ± SEM.
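The figure legends above cite a mixed-effects analysis for repeated measures to compare methacholine dose-response curves between treatment groups. The sketch below illustrates one way such an analysis could be set up in Python with statsmodels; the group names, dose levels, and resistance values are hypothetical placeholders rather than the study data.

```python
# Hypothetical sketch of a repeated-measures (mixed-effects) comparison of methacholine
# dose-response curves between two groups, in the spirit of the analyses cited in the
# figure legends. All values below are invented placeholders, not study data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
doses = [0.0, 3.0, 10.0, 30.0, 100.0]            # methacholine doses (illustrative units)
rows = []
for group, slope in (("WT_MC", 0.030), ("P2X7KO_MC", 0.015)):
    for mouse in range(7):                        # n = 7 per reconstituted group
        baseline = rng.normal(1.0, 0.1)           # per-animal baseline resistance
        for d in doses:
            rows.append({"mouse": f"{group}_{mouse}", "group": group, "dose": d,
                         "resistance": baseline + slope * d + rng.normal(0, 0.15)})
df = pd.DataFrame(rows)

# a random intercept per mouse accounts for repeated measures on the same animal;
# the group-by-dose interaction asks whether the dose-response slopes differ
fit = smf.mixedlm("resistance ~ dose * group", df, groups=df["mouse"]).fit()
print(fit.summary())
```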
Marked attenuation of BHR in mast cell-deficient mice and in P2X7-deficient mice initially suggested the possibility that direct activation of P2X7 receptors on mast cells played a major mechanistic role in the development of ozone-induced BHR.However, while such a pathway appears to be contributory, substantial restoration of the BHR response was still present in mast cell-deficient mice reconstituted with P2X7 -/- mast cells.These results suggest that P2X7 receptor activation on other cell types contributes substantially to ozone-induced mast cell activation.P2X7 receptors demonstrate a broad tissue distribution and are expressed by airway epithelia and most immune cells.Activation of P2X7 results in production and secretion of mature IL-1β and IL-18 through the activation of the NLRP3 inflammasome, and both cytokines can directly activate mast cells (40,41).Our results also showed a small P2X7-independent component to ozone-induced BHR in mice.Ozone stimulates the release of several mediators from epithelia capable of activating mast cells, including IL-33, and synergistic signaling of mast cells by ATP and IL-33 has been reported (42,43). Presently, there exists no recognized therapeutic intervention to prevent the adverse effects of ozone on susceptible populations, such as patients with atopy or chronic lung disease.Continued investigations on the role that ATP, P2X7 receptors, and mast cells play in humans, and strategies targeting this pathway, may lead to preventive approaches for reducing morbidity and mortality from ozone. Methods Animals.C57BL/6-Kit W-sh/W-sh mast cell-deficient mice, P2X7 -/-mice, and WT controls, all on the C57BL/6 genetic background, were bred in a pathogen-free facility on a 12-hour light/12-hour dark cycle on the University of North Carolina (UNC) campus.For all experiments, mice were older than 8 weeks and female. Figure 1 . Figure 1.Ozone increases resistance in the murine lung and induces marked BHR in response to methacholine.C57BL/6 WT mice (females, aged 8-40 weeks) were exposed to 2 ppm ozone or air for 3 hours, and airway mechanics at baseline and to graded methacholine challenges were measured 24 hours later for (A) total lung resistance (R L ), (B) peripheral airway resistance (G), (C) central airway resistance (Raw), (D) methacholine challenge, (E) static lung compliance (Cst), and (F) dynamic lung compliance (Cdyn).Yellow circles represent air-treated mice (n = 13), and blue circles represent ozone-treated mice (n = 61); *P < 0.05 by Student's t test (A and B) and mixed effects analysis comparing repeated measures between air and ozone (*P < 0.05) (D).Ozone exposure has no effect on static or dynamic lung compliance (E and F).Data are shown as mean ± SEM. Figure 2 . Figure 2. 
Mast cells mediate ozone-induced BHR, but ozone does not directly activate mast cells.(A) Ozone-induced BHR is mast cell dependent.C57BL/6 WT and C57BL/6Kit W-sh/W-sh mast cell-deficient mice (females, aged 12-30 weeks) were exposed to 2 ppm ozone or air for 3 hours and challenged with methacholine at 24 hours.Yellow circles represent air-treated WT mice (n = 3), blue circles represent ozone-treated WT mice (n = 14), yellow squares represent air-treated mast cell-deficient mice (n = 3), and blue squares represent ozone-treated mast cell-deficient mice (n = 14); *P < 0.05 by mixed effects analysis comparing repeated measures between WT and WSH ozone exposures.(B) Ozone activates airway mast cells.WT mice (females, aged 10-16 weeks) were exposed to 2 ppm ozone or air for 3 hours and BALF was collected immediately after exposure.Histamine concentrations were measured by ELISA.Yellow circles represent histamine concentrations in BALF from air-treated mice and blue circles represent the same in ozone-treated mice.n = 7-8; *P < 0.05 by Student's t test.(C) Graphic depicting HBE-ALI/mast cell coculture.(D) Ozone indirectly activates mast cells in HBE/mast cell cocultures.Primary HBE cells grown for 21 days under ALI conditions were cocultured with CBMCs and exposed to 0.8 ppm ozone or air for 4 hours at 37°C and 5% CO 2 .Histamine concentrations were measured by ELISA in basolateral media collected immediately after exposure.Yellow circles represent air-treated cells and blue circles represent ozone-treated cells.n = 5, *P < 0.05 by Student's t test.Data are shown as mean ± SEM. Figure 3 . Figure 3. Ozone stimulates ATP release in murine airways and human epithelia in vitro.(A) Ozone stimulates ATP release in murine BALF.Mice (females, aged 12-22 weeks) were exposed to 2 ppm ozone or air for 1 hour, and BALF was collected immediately after exposure.ATP was measured via luciferin-luciferase assay.n = 4; *P < 0.05 by Student's t test.(B) Ozone stimulates nucleotide release in murine BALF.Mice were exposed to 2 ppm ozone or air for 3 hours, and BALF was collected immediately after exposure.Samples were heat-inactivated for 2 minutes and frozen, and purines measured by etheno-derived HPLC.Yellow and blue circles represent nucleotide/side levels from air-treated mice and ozone-treated mice, respectively.n = 13-14; *P < 0.05 by Student's t test.(C) Ozone stimulates nucleotide release from 16HBE cells.16HBE epithelial cells were cultured for 28-35 days at ALI and exposed apically to 0.8 ppm ozone or air for 1 hour or 3 hours, and basolateral media was collected immediately after exposure.Purines measured as described in B. Data represent total purines (ATP + ADP + AMP + adenosine).Yellow and blue circles represent air-and ozone-treated cells, respectively.n = 5-6; *P < 0.05 by 2-way ANOVA with Tukey's test for multiple comparisons with exposure type (air or ozone) and time as independent variables.(D) Ozone stimulates nucleotide release from HBE cells.HBE cells from normal volunteers were obtained, cultured at ALI, and exposed apically to 0.8 ppm ozone or air for 1 hour or 3 hours.Total purines in the basolateral media were measured as described in B. Data represent total purines (ATP + ADP + AMP + adenosine).Yellow and blue circles represent air and ozone-treated cells, respectively.n = 6-11; *P < 0.05 by 2-way ANOVA with Tukey's test for multiple comparisons applied with exposure type (air or ozone) and time as independent variables.Data are shown as mean ± SEM. Figure 8 . Figure 8. 
Lung histology of mast cell-reconstituted mice.Immediately after measurement of lung mechanics, lungs from mast cell-reconstituted mice were removed en bloc and fixed in 10% formalin.Paraffin-embedded sections were cut and stained with toluidine blue, and mast cells were counted by an observer blinded to the experimental group.(A) Mast cells per section from Wsh mice reconstituted with WT (n = 5) and P2X7 -/-(n = 5) BMMCs.Data are presented as mean ± SEM. (B) Representative section from nonreconstituted Wsh mouse.(C) Representative section from Wsh mouse reconstituted with WT BMMCs.(D) Representative section from a Wsh mouse reconstituted with P2X7 -/- BMMCs.Arrowheads = mast cells.Scale bars: 110 μm.
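The purine-release comparisons in the Figure 3 legend are described as a two-way ANOVA with exposure and time as independent variables, followed by Tukey's test for multiple comparisons. A minimal, self-contained sketch of that kind of analysis is shown below; all measurements in it are invented placeholders, not the study data.

```python
# Hypothetical sketch of a two-way ANOVA (exposure x time) with Tukey's multiple-
# comparison test, as described for the epithelial purine-release data; the values
# below are invented placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
rows = []
for exposure, base in (("air", 20.0), ("ozone", 60.0)):
    for time_h in (1, 3):
        for rep in range(6):                      # ~5-6 cultures per condition
            purines = base * (1.0 if time_h == 1 else 1.5) + rng.normal(0, 8)
            rows.append({"exposure": exposure, "time_h": time_h, "purines_nM": purines})
df = pd.DataFrame(rows)

# two-way ANOVA with exposure and time as independent variables
model = smf.ols("purines_nM ~ C(exposure) * C(time_h)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's HSD across the four exposure-by-time groups
groups = df["exposure"] + "_" + df["time_h"].astype(str) + "h"
print(pairwise_tukeyhsd(df["purines_nM"], groups))
```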
2021-09-23T06:23:23.820Z
2021-09-21T00:00:00.000
{ "year": 2021, "sha1": "0e70685d80c8fc55d4b6199dde9391fa6a814ee6", "oa_license": "CCBY", "oa_url": "http://insight.jci.org/articles/view/140207/files/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "5e20465783518bf5f2c11b5519d9802761e07803", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
220280137
pes2o/s2orc
v3-fos-license
Enhancement of recoil optical forces via high-k plasmons on thin metallic films The recoil optical force that acts on emitters near a surface or waveguide relies on near-field directionality and conservation of momentum. It features desirable properties uncommon in optical forces, such as the ability to produce it via wide-area illumination of vast numbers of particles without the need for focusing, or being dynamically switchable via the polarization of light. Unfortunately, these recoil forces are usually very weak and have not been experimentally observed in small dipolar particles. Some works theoretically demonstrate orders-of-magnitude enhancement of these forces via complex nano-structuring involving hyperbolic surfaces or metamaterials, complicating the fabrication and experimental demonstration. In this work, we theoretically and numerically show enhancement of the lateral recoil force by simply using thin metallic films, which support ultra-high-momentum plasmonic modes. The high-momentum carried by these modes impart a correspondingly large recoil force on the dipole, enhancing the force by several orders of magnitude in a remarkably simple geometry, bringing it closer to practical applications. Optical forces, enabling the manipulation of nanoparticles through light-matter interactions, have allowed important progress in many areas ranging from ultra-cold matter physics to biology [1][2][3][4][5][6][7][8][9][10] . Most optical forces rely on optical traps, gradients of field intensity created via focusing, ideal for the precise manipulation of individual nanoparticles, but challenging to adapt to massivelyparallel manipulation of several particles simultaneously. An alternative approach that overcomes this obstacle is to rely on the scattering of particles under non-focused plane-wave-like illumination with no intensity gradients. Because light carries linear momentum, it follows that if a particle scatters light in a given direction, it must experience a mechanical recoil pushing it in the opposite direction, due to conservation of momentum. This has been exploited in free-standing particles for applications such as tractor beams 11 , and more recently in particles placed near surfaces or waveguides [12][13][14][15][16][17][18][19][20] . When a particle is illuminated and there is a waveguide or surface nearby, the near-field scattering fields can couple into the guided modes of the waveguide or surface modes such as surface plasmons, which propagate away from the particle and hence impart a corresponding mechanical recoil to the particle 12,13 . This mechanism does not rely on gradients of the illuminating light, and so it doesn't require focusing and occurs under plane wave illumination in a wide area. The directionality of the near-field excitation, required to achieve a net force, can be achieved by controlling the electric and/or magnetic polarization of the scatterer 12,13 . Interestingly, these optical forces can be controlled using optical degrees of freedom, such as the polarization of light. Thanks to spin-orbit interaction of light, the polarization state of light may modify and control the spatial degrees of freedom of light, i.e. its propagation direction and scattering 21 . 
In particular, it is well-known that a circularly polarized dipole or scatterer near a waveguide will excite waveguide modes directionally in a lateral direction [14][15][16] and hence this will result in a polarization-dependent recoil lateral optical force, whose direction is switchable with the polarization handedness of the illumination [17][18][19] , thus enabling a simple non-mechanical method for dynamically controlling the direction of the force. Alternatively to plane wave illumination, the polarization pattern of tightly focused linearly polarized beams can exhibit spinning polarizations away from the beam's axis, which can be used to enable optical trapping or anti-trapping near surfaces based on the same recoil-force principle 20 , instead of relying on illuminating field gradients. The same recoil optical forces also lead to lateral Casimir forces on rotating particles near smooth surfaces 22 . Despite much theoretical works, these polarizationcontrolled recoil lateral forces are very weak under normal circumstances, and their experimental measurement has only been achieved on large 4.5 µm particles at wavelengths of 532 nm 19 , well outside the dipole approximation, and on macroscopic birefringent objects 23 . The lateral force due to near-field directionality of a circularly polarized scatterer in the dipole approximation, which would have vast applications on optical manipulation of nanoparticles and molecules, has proven too weak for an experimental demonstration to date. However, some works have theoretically proposed methods to greatly enhance this force. One approach has been to use hyperbolic metasurfaces, theoretically predicting an increase on the lateral force by roughly three orders of magnitude 24 . Hyperbolic metasurfaces are planar metamaterials which show an anisotropic surface conductivity [25][26][27][28] and were experimentally demonstrated using single-crystal silver nanostructures 29 , but such materials are challenging to fabricate in large areas useful for measurement and application of lateral optical forces, arXiv:2007.00485v1 [physics.optics] 1 Jul 2020 with no experimental demonstration yet produced of the enhanced force. Another approach to achieve enhanced lateral forces from circularly polarized dipoles was theoretically predicted with the use of bulk hyperbolic metamaterials 30 . This is different to hyperbolic metasurfaces because, rather than requiring an anisotropic surface conductivity, it requires an anisotropic bulk permittivity. Such hyperbolic metamaterials are arguably easier to produce in large centimeter-sized areas via the use of metallic nanorods or alternating metal-dielectric thin layers [31][32][33][34] . Circularly polarized dipoles are known to exhibit directionality near hyperbolic metamaterials 35 , where instead of exciting surface plasmons or single guided modes, they excite combinations of high-wavevector bulk modes forming subwavelength 'rays' inside the hyperbolic metamaterial, but still experience the associated lateral recoil force. Yet no experimental demonstration of the enhancement has been achieved in this case either. We believe that a simpler geometry is desirable. The key feature, common in all the approaches for enhancement of recoil lateral forces, is the existence of highwavevector (high-k) modes in the surface or in the bulk. Such modes, excited by the dipole, possess a huge phase gradient due to the very small wavelength of the mode, which manifests as a strong recoil force acting back on the dipole. 
In terms of momentum conservation, the individual photons or plasmons of the excited modes have a very large momentum p = k owing to the large value of k, and hence each produces a strong recoil 'kick' on the particle. This is a similar idea to the concept of a 'super-kick' acting on particles placed near vortex beam singularities, in regions where the optical phase gradient is also huge 36 . Therefore, for a practical recoil lateral force enhancement, it would be advantageous to find the simplest structures, easy to reproduce in the laboratory and commercially, that support modes with very high wave-vector k. The directional excitation of such modes would then result in an enhanced lateral recoil force. In this work we propose the use of a very thin metallic slab, known to support short-range surface plasmon with a dramatically reduced wavelength and correspondingly high k 37-40 . We theoretically and numerically prove several orders of magnitude enhancement of the lateral recoil force acting on circularly polarized emmiters near such thin films. It is well known that a planar interface between a metal with relative permittivity ε 2 and a dielectric with permittivity ε 1 supports surface plasmon polariton modes (SPPs) with a wave-vector given by k SPP = k 0 (ε 1 ε 2 /(ε 1 + ε 2 )) 1/2 . If a thick metal slab is sandwiched between two dielectrics ε 1 and ε 3 , then, as long as the metal slab is thicker than the plasmon penetration depth, two surface modes will exist independently in the two interfaces, as shown in Fig. 2. However, if the slab is thinner than the plasmon penetration depth t ∼ (1/2)(k 2 SPP − ε 2 k 2 0 ) −1/2 , then coupling and mode splitting will occur between the two modes, forming two supermodes with even and odd symmetry in different field components, commonly known as the long-range (LR) and short-range (SR) surface plasmons [37][38][39][40] , depicted in Fig. 2. The SR-SPP is highly confined in the metal film and is ignored in many plasmonic applications due to its high losses and low propagation length -hence its name, however, these are not limiting factors for the recoil forces that these modes may cause. The thinner the metal film, the stronger the splitting between the two modes, and the higher the wave-vector up to which the SR-SPP is pushed into, as shown in Fig. 2, resulting in extremely reduced modal wavelengths. In this work, as an example, we consider a gold film between free space and a SiO 2 substrate. Both materials are widely used in the development of both photonic and plasmonic structures, operating in a broad frequency spectrum 41,42 . We have used the free-electron Drude model fit 43 to the dielectric function of gold 42 . Knowing that we can achieve high-k modes in thin metallic films, we now study how these affect the recoil optical forces that act on a nearby circularly polarized dipole, which in practice can represent any polarizable particle illuminated with polarized light. Our results are obtained analytically via the exact Greens function formalism of a dipole over a surface; using the dipole angular spectrum approach 12,[44][45][46][47] . Consider an electric dipole source p = (p x , p y , p z ) located at r 0 = (0, 0, h), above a metallic slab of thickness d slab , whose surfaces are z = 0 and z = −d slab , which has been grown on a dielectric substrate, as seen in Fig. 1. The dipole is radiating with an angular frequency ω. 
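As a rough guide to how strongly thinning the film raises the SR-SPP wave-vector, the textbook three-layer TM-mode dispersion relation can be evaluated in its electrostatic (k >> k_0) limit, where it reduces to exp(-2kd) = (ε_1 + ε_2)(ε_3 + ε_2)/[(ε_1 - ε_2)(ε_3 - ε_2)]. The sketch below uses this limit with illustrative Drude parameters for gold and an assumed wavelength of 780 nm; it is not the Drude fit or the exact mode solution used in the calculations reported here, but it makes the k_SR ∝ 1/d scaling explicit.

```python
# Hypothetical sketch: quasi-static (k >> k0) estimate of the short-range SPP wave-vector
# of a thin metal film between air and a SiO2-like substrate. Drude parameters and the
# 780 nm wavelength are illustrative assumptions, not the fit cited in the text; the
# estimate is only meaningful where the resulting k_SR is well above k0.
import numpy as np

c0 = 2.998e8
lam = 780e-9                                   # assumed wavelength [m]
k0 = 2 * np.pi / lam
w = 2 * np.pi * c0 / lam
eps1, eps3 = 1.0, 1.45**2                      # air above, SiO2-like substrate below
wp, gamma = 1.37e16, 1.0e14                    # illustrative Drude parameters for gold
eps2 = 1 - wp**2 / (w**2 + 1j * gamma * w)     # metal permittivity at this frequency

def k_sr(d):
    """Electrostatic-limit estimate of the SR-SPP wave-vector for film thickness d."""
    ratio = (eps1 - eps2) * (eps3 - eps2) / ((eps1 + eps2) * (eps3 + eps2))
    return np.log(ratio) / (2 * d)             # complex: Re = propagation, Im = damping

for d in (5e-9, 1e-9, 0.5e-9):
    k = k_sr(d)
    print(f"d = {d*1e9:3.1f} nm : k_SR/k0 ~ {k.real/k0:5.1f}, "
          f"modal wavelength ~ {2*np.pi/k.real*1e9:5.1f} nm")
```

Halving the film thickness roughly doubles the estimated wave-vector, i.e. halves the modal wavelength, which is the trend exploited in what follows.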
The time-averaged optical force F acting on the dipole can be deduced from first principles -the Lorentz electromagnetic force acting on the oscillating charges of a dipole due to the backscattered fields from the surface-and is well-known 46,48,49 . It is given by F = (1/2) Re[Σ_{i=x,y,z} p_i* ∇E_i], where E_{x,y,z} represents the total electric field minus the self-fields of the dipole, and therefore includes any back-scattered fields or excited modes on the surface responsible for the recoil force 12 . Here, we will focus on the x component of the force that acts over a polarized particle, which following previous works 17 , after some mathematical steps and the only assumption that ε_1 is real (lossless upper medium), can be written in a compact manner as

F_x = [3 σ_y P_rad^xz / (4 n_1^3 c_0)] Im ∫_0^∞ k_tr^3 r_pp(k_tr, λ) e^{2i k_z1 h} dk_tr,    (1)

where the integration is performed over the normalized transverse wave-vector k_tr = k_t/k_0 ∈ [0, ∞], P_rad^xz = n_1^3 ω^4 (|p_x|^2 + |p_z|^2)/(12π ε_0 ε_1 c_0^3) is the power radiated by the x and z components of the dipole, n_1 = √ε_1 is the refractive index of the upper medium, σ_y = 2 Im(p_x p_z*)/(|p_x|^2 + |p_z|^2) ∈ [−1, 1] is the dipole spin along the y-axis, k_z1 = k_0 (n_1^2 − k_tr^2)^{1/2} is the z-component of the wave-vector, k_0 = 2π/λ is the wavenumber of free space, λ is the wavelength, and r_pp is the well-known Fresnel reflection coefficient for p-polarization on a slab, which also has a dependency with the transverse wave-vector and the wavelength, r_pp(k_tr, λ). As is well known 17 , a lateral force in the x-direction appears when the dipole has a spin along the y-direction, i.e. spinning in the xz plane. In the following results, we will consider the case that achieves the maximum force along +x: a clockwise circular dipole p = (1, 0, −i), with σ_y = 1, which excites modes directionally in the −x direction and experiences the corresponding recoil directed along +x. Let us calculate the lateral force when the dipole is placed near thin metallic slabs. As one can see in Eq. 1, the high-k modes in the film will manifest themselves as a peak in r_pp at a very high value of k_tr = k_SPP/k_0 and will be weighted by k_tr^3, greatly enhancing the lateral force. Let us consider the same thin films as shown in Fig. 2. The exact calculation of the force F_x via Eq. 1 is shown in Fig. 3, for varying values of the metal thickness d_slab. The polarization of the dipole is p = (1, 0, −i) and it is located at a subwavelength distance h = 0.009λ over the metallic slab. We notice that the force experiences a dramatic increase of nearly three orders of magnitude when the thickness of the slab is very thin, reaching an optimal strength around 0.5 nm. The figure insets also show the magnetic field distribution (H_y) for three values of the thickness. The directional excitation of guided modes to the left is observed in each one of these cases, resulting in recoil, but there is a clear distinctive feature: the wavelength of the plasmon decreases dramatically for the thinner films. This is as expected due to the short-range surface plasmon having an ultra high k value 37-40 as discussed earlier. Such a high k and the associated small wavelength produces a huge phase gradient at the location of the dipole, resulting in the increased lateral recoil force, in accordance with our initial expectation. (Figure 3 caption, fragment: films as in Fig. 2; the dipole is located at a distance h = 0.009λ above the gold slab.)
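A numerical sketch of Eq. (1) is given below for a circular dipole above a gold film on glass. It uses the force expression as reconstructed above together with illustrative Drude parameters and an assumed wavelength of 780 nm, so the prefactor convention, material data, and absolute force values are assumptions for illustration only; the qualitative dependence on slab thickness is the point of interest.

```python
# Hypothetical numerical sketch (not the authors' code) of the lateral recoil force of
# Eq. (1) for a circular dipole p = (1, 0, -i) at height h above a thin gold film on a
# SiO2-like substrate. Wavelength, Drude parameters and the reconstructed prefactor are
# illustrative assumptions.
import numpy as np

eps0, c0 = 8.854e-12, 2.998e8
lam = 780e-9                                   # assumed operating wavelength [m]
k0 = 2 * np.pi / lam
w = 2 * np.pi * c0 / lam
eps1, eps3 = 1.0, 1.45**2                      # free space above, SiO2-like substrate
wp, gamma = 1.37e16, 1.0e14                    # illustrative Drude parameters for gold
eps2 = 1 - wp**2 / (w**2 + 1j * gamma * w)

def kz(eps, ktr):
    """Normalized z-component of the wave-vector (k_z / k0), branch with Im >= 0."""
    return np.sqrt(eps - ktr**2 + 0j)

def r_pp(ktr, d):
    """p-polarized reflection coefficient of the metal slab of thickness d."""
    kz1, kz2, kz3 = kz(eps1, ktr), kz(eps2, ktr), kz(eps3, ktr)
    r12 = (eps2 * kz1 - eps1 * kz2) / (eps2 * kz1 + eps1 * kz2)
    r23 = (eps3 * kz2 - eps2 * kz3) / (eps3 * kz2 + eps2 * kz3)
    ph = np.exp(2j * kz2 * k0 * d)
    return (r12 + r23 * ph) / (1 + r12 * r23 * ph)

def lateral_force(d, h, sigma_y=1.0, P_rad=1e-12):
    """F_x from the reconstructed Eq. (1); P_rad is an assumed radiated power [W]."""
    ktr = np.linspace(1e-3, 200.0, 400_000)    # normalized transverse wave-vector
    integrand = ktr**3 * np.imag(r_pp(ktr, d) * np.exp(2j * kz(eps1, ktr) * k0 * h))
    integral = np.sum(integrand) * (ktr[1] - ktr[0])
    return 3 * sigma_y * P_rad * integral / (4 * np.sqrt(eps1)**3 * c0)

# For k_tr >> 1 the integrand behaves like k_tr^3 * exp(-2*k0*h*k_tr), which peaks near
# k_tr ~ 3/(2*k0*h) ~ 26 for h = 0.009*lam: films whose SR-SPP sits near that value
# should therefore give the largest recoil, consistent with the optimum quoted above.
h = 0.009 * lam
for d in (0.5e-9, 2e-9, 10e-9, 50e-9):
    print(f"d = {d*1e9:5.1f} nm  ->  F_x ~ {lateral_force(d, h):.3e} N")
```

Sweeping the thickness in this way reproduces the qualitative behaviour described in the text: the force rises steeply as the film thins and the short-range plasmon moves to higher k_tr, up to the point where the evanescent coupling factor takes over.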
As discussed earlier, this has a quantum interpretation: the amount of momentum carried by each quantized plasmon in the excited plasmonic mode, p = ℏk, is proportional to k = −k_SPP x̂; hence, for each excited plasmon with ultra-high momentum, the dipole source must fulfill conservation of momentum and experience a 'super-kick' in the opposite direction, similar to the 'super-kick' predicted for particles near phase singularities 36 . The fields in the three cases depicted in Fig. 3 are oscillating at the same angular frequency ω, hence the energy carried per quantum E = ℏω is the same, while the momentum is much higher for the thinnest film. For each energy packet transferred to the plasmon mode, the recoil 'kick' experienced by the dipole is far greater in the high-k plasmonic mode. One may ask what is the theoretical limit of this effect. Can we keep reducing the film to achieve infinitely higher wave-vectors? Unfortunately, there is an unavoidable limitation. The larger the propagation wavevector of the mode becomes, the more confined it is to the interface, and the stronger the evanescent decay into the dielectric. This will reduce the coupling efficiency between the dipole and the mode. In the quantum picture, fewer energy packets will be excited by the dipole to produce the super-kick. The coupling is mathematically accounted for in Eq. 1 by the exponential decay term e^{2i k_z1 h} = e^{−2 h k_0 (k_tr^2 − ε_1)^{1/2}} inside the integral, which dampens the peak in r_pp(k_tr) that occurs at k_tr = k_SPP/k_0 ≫ 1 due to the SPP. This dampening trades off against the cubic enhancement of the term k_tr^3, resulting in the existence of an optimal thickness of the metallic layer, as clearly shown in Fig. 3. This optimum depends on the distance h between the dipole and the surface. The greatest enhancements are achieved when the dipole is placed very close to the surface, therefore the technique will work best for atoms and molecules, which can be placed at distances of hundredths of a wavelength or less. This theoretical limitation applies to any proposal based on high-k modes, such as the existing proposals based on hyperbolic materials. In view of an experiment, the fabrication is simpler than that of hyperbolic metamaterials and metasurfaces, but still faces some challenges. The enhancement of the force relies on the extremely small wavelength and high confinement of the modes to the surface, hence the mode will be highly susceptible to surface roughness. The aim is, therefore, a very thin and smooth film. While in our calculations we used gold as the archetypal plasmonic material, the physics is identical for any plasmonic alternative, such as most metals, some semiconductors, conductive metal oxides such as indium-tin-oxide (ITO), and others 50 . The choice of material should be decided by the ease of fabrication of smooth thin films and the desired wavelength of operation. Even the use of plasmonic 2D conductive materials such as doped graphene could be considered. In conclusion, we have theoretically shown several orders of magnitude enhancement of the lateral recoil optical forces with the use of a simple system: thin plasmonic films supporting an ultra-high-k hybridized supermode.
The geometry of such system is simpler than hyperbolic metasurfaces or metamaterials, and is suitable for fabrication on wide areas, and hence this increases the prospects of observing these lateral forces experimentally and brings the force a step closer to applications such as the development of efficient platforms for optical manipulation of nanoparticles in optical 'conveyor belts' relying on lateral forces, or manipulation and separation of chiral nanoparticles. The recoil force on circular dipoles is directly responsible for the lateral Casimir force acting on rotating particles near a smooth surface 22 , hence the enhancement presented here directly translates into a corresponding enhancement of lateral Casimir forces. We acknowledge the financial support from the Colombian agency COLCIENCIAS (Postdoctoral stays -No. 811) and the European Research Council Starting Grant ERC-2016-STG-714151-PSINFONI.
2020-07-02T01:01:47.145Z
2020-07-01T00:00:00.000
{ "year": 2020, "sha1": "040f665c91abf4e7cee1c186504639e803d1348d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "040f665c91abf4e7cee1c186504639e803d1348d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
36992545
pes2o/s2orc
v3-fos-license
NS-398 induces caspase-dependent , mitochondria-mediated intrinsic apoptosis of hepatoma cells The present study was conducted to investigate whether mitochondrial pathway of apoptosis is involved in cyclooxygenase-2 (COX-2) inhibitor-induced growth inhibition of hepatoma cells. The growth rate and pattern of NS-398 (selective COX-2 inhibitor)-treated Hep3B hepatoma cells were analyzed by microscopic examination, DNA fragmentation gel analysis and flow cytometry followed by the cleavage of downstream caspase 3 and the release of cytosolic fraction of cytochrome c assessed by Western blot analysis. NS-398 induced the growth inhibittion of hepatoma cells depending on the concentration of this COX-2 inhibitor and time sequence. Ladder patterned-DNA fragmentation and cytometric redistribution to subG1 phase in cell cycle were revealed in NS-398-induced growth inhibition of hepatoma cells. Cytochrome c was translocated from mitochondria to cytosol in time-dependent manner following NS-398 treatment to hepatoma cells. COX-2 inhibitor induces the growth inhibition of hepatoma cells via caspasedependent, mitochondria-mediated intrinsic apoptosis pathway. These results strongly suggest the possibility of therapeutic implication of COX-2 inhibitor in HCC. INTRODUCTION Hepatocellular carcinoma (HCC) is a growing health problem worldwide, which is the fifth most common malignancy in incidence and the third leading cause in cancer-related mortality [1][2][3].In advanced stage of HCC beyond clinical indications for curative treatment mo-dalities such as surgical resections or percutaneous ablations, no effective systemic therapies but recently introduced molecular targeted agent are present by this time [4][5][6]. Cyclooxygenases (COX), the key enzymes involved in the metabolic conversion of arachidonic acid to prostaglandins, consist of at least two isoforms, constitutive form of COX-1 and inducible form of COX-2.Since the overexpression of COX-2 was known to be associated with neoangiogenic, antiapoptotic, and invasive or metastatic property in certain cell types [7][8][9][10][11], COX-2 has come to the surface as a therapeutic target of several malignant tumors including HCC.Several growing evidences of preclinical studies have indicated that COX-2 inhibitors exert antineoplastic effects on hepatoma cells both in vitro and in vivo [12][13][14][15][16][17][18].However, the degree of impact of COX-2 inhibitor on growth control of heaptoma cells are controversial and its growth inhibitory mechanisms remain unclear thus far. Major signaling pathways of apoptosis, extrinsic death receptor-mediated and intrinsic mitochondria-mediated, are usually carried out through the activation of downstream effector caspases in cytoplasm, resulting in the cleavage of cellular substrates relevant to the morphological and biochemical constellations of apoptotic phenotype [19,20].Among these pathways, mitochondriamediated apoptosis progresses to the cascade activation of initiator caspase 9 and effector caspase 3 via cytoplasmic translocation of cytochrome c from mitochondria [21][22][23]. In the present study, we tried to evaluate whether COX-2 inhibitor induces the growth inhibition of heaptoma cells and engages a caspase-dependent, mitochondrial-mediated apoptosis signaling pathway in HCC. 
Treatment of Selective COX-2 Inhibitor to Hepatoma Cells NS-398 (N-[2-(cyclohexyloxy)-4-nitrophenyl]methanesulfonamide), dissolved in demethyl sulfoxide (DMSO), was used as a selective COX-2 inhibitor.We prepared the culture media in concentrations of 0, 10, 100, and 200 μM NS-398 for concentration-oriented experiments.Hepatoma cells were plated at a density of 1 × 10 5 cells/well in six-well plastic dishes with 2 mL of 10% FBS-supplemented medium.After 24 h exposure of NS-398, the media were changed with other new media containing same concentration of NS-398, and then the cells were incubated for 72 h. Microscopic Examination After discarding the media with floating cells, we microscopically observed the cells continuously for three days for time-oriented experiments under ×20 magnification and compared the growth pattern of cell proliferation between controls (DMSO-treated cells) and NS-398treated cells according to sequential time course of 24, 48, and 72 h. DNA Fragmentation Gel Analysis Hepatoma cells were harvested at 24, 48, and 72 h after treatment of various concentrations of NS-398.The cells dissolved with lysis buffer were centrifuged at 10,000 g for 30 min.For DNA extraction, the supernatant was digested with 50 ng/mL proteinase K at 37˚C for 24 h, and precipitated with a equal volume of absolute ethanol. For RNA elimination, the pellet was incubated with a Tris-EDTA buffer containing 10 μg/mL RNase A at 37˚C for 1 h.The amount of extracted DNA was measured by spectrophotometric analysis.Each DNA sample was electrophoresed on 1.8% agarose gel containing 0.5 mg/L ethidium bromide, and photographed under ultraviolet (UV) light. Flow Cytometric Analysis Cell cycle distribution was determined by flow cytometric analysis.After treatment of NS-398, the cells were collected by trypsinization, washed twice with phosphate-buffered saline (PBS), and fixed overnight in 70% ethanol at 4˚C.The cells were stained with 50 μg/mL propidium iodide at room temperature for 30 min in the dark, following to be incubated with 50 μg/mL RNase A at 37˚C for 1 h.Then cell cycle components were analyzed by a flow cytometer and CellQuest software. Western Blot Analysis NS-398-treated hepatoma cells were prepared by washing in PBS and dissolving in lysis buffer (50 mmol/L Tris pH 7.5, 250 mmol/L NaCl 2 , 0.5% Triton X-100, 1 mmol/L EDYA, 1 mmol/L PMSF, 1 mmol/L Na 3 VO 4 , 1 mmol/L dithiothreitol, 10 μg of leupeptin/mL and 10 μg of aprotinin/mL).After centrifugation of cell lysates for 10 min at 14,000 g, the protein concentrations of supernatant in the homogenate were determined by bicinchoninic acid assay (Pierce Co, Rockford, IL, USA) according to the manufacturer's instructions.40 μg of protein in each extract was separated by 15% SDS-polyacrylamide gel, and electronically transferred to nitrocellulose membrane.The membrane was blocked with 5% fat free dry milk in TBS-T (25 mmol/L Tris-HCl, pH 7.5, 100 mmol/L NaCl, 0.5 Tween-20), incubated with primary antibody for overnight at 4˚C, washed three times in TBS-T for 10 min, and then incubated with horseradish peroxidase-conjugated secondary antibody for 1 h at room temperature.The immunoblotting signals were developed with an ECL system (Amersham Life Sciences, Buckinghamshire, UK).Anti-β-actin (1:1000, Santa Cruz Biotechnology Inc, Santa Cruz, CA, USA) was used as a protein loading control. 
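In the blots analyzed later, relative band intensities are typically obtained by densitometry: each target band is normalized to the β-actin loading control of its lane and then expressed relative to the earliest time point. A small sketch of that normalization, with invented band intensities rather than measured values, follows.

```python
# Hypothetical densitometry normalization for Western blots: each target band is divided
# by the beta-actin band of the same lane, then expressed relative to the 24 h time point.
# Band intensities below are invented placeholders, not measured values.
import numpy as np
from scipy import stats

timepoints = [24, 48, 72]                           # hours after NS-398 treatment
# invented triplicate band intensities (arbitrary densitometry units)
caspase3_cleaved = np.array([[1200, 1100, 1300],
                             [2300, 2500, 2100],
                             [3400, 3600, 3100]], dtype=float)
beta_actin       = np.array([[5000, 4900, 5100],
                             [5050, 4800, 5200],
                             [4950, 5100, 5000]], dtype=float)

normalized = caspase3_cleaved / beta_actin          # correct for loading differences
relative = normalized / normalized[0].mean()        # express relative to 24 h

for t, rel in zip(timepoints, relative):
    print(f"{t} h: relative cleaved caspase 3 = {rel.mean():.2f} +/- {rel.std(ddof=1):.2f} (mean +/- SD)")

# e.g. compare 72 h with 24 h, as marked by * in the figures (P < 0.05 threshold)
t_stat, p = stats.ttest_ind(relative[2], relative[0])
print(f"72 h vs 24 h: t = {t_stat:.2f}, P = {p:.3f}")
```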
Preparation of Mitochondrial and Cytosolic Extracts for Localization of Cytochrome c The cell pellets were obtained from NS-398-treated hepatoma cells that were washed, centrifugated, and resuspended in a buffer containing 25 mM Tris (pH 7.4), 250 mM sucrose, 10 mM KCl, 1.5 mM MgCl 2 , 1 mM EDTA, 1 mM EGTA, and 1 mM DTT.The resuspened cells were homogenized ten times with Dounce homogenizer (Wheaton Scientific Products, Millville, New Jersey, USA) after adding protease inhibitor (10 μg of leupeptin/mL, 10 μg of aprotinin/mL, and 1 mM PMSF) and phosphatase inhibitor (10 μM Na 3 VO 4 ).Unlysed cells and nuclei were discarded by centrifugation at 750 g for 10 min.The supernatant was centrifugated at 10,000 g for 30 min, and the resulting pellet, which indicates the mitochondrial-enriched fraction, was washed once with the same buffer.The remnant supernatant was further centrifugated at 10,000 g for 1 h, representing the cytosol fraction of final supernatant.Each 40 μg of cytosolic and mitochondrial fractions were used for cytochome c immunoblotting described above. Statistical Analysis Statistical analysis was carried out using SPSS software system (SPSS Inc., Chicago, IL, USA).Data were expressed as the mean ± SD of at least three-times independent experiments.Student's t-test and ANOVA analysis were applied to verify the statistical difference as P < 0.05 between experimental groups. COX-2 Inhibitor Induced Growth Inhibition of Hepatoma Cells After treatment of Hep3B cells with NS-398, cell number progressively decreased up to 72 h, while the number of DMSO-treated cells exponentially increased within the same time period (Figure 1).This pattern of COX-2 inhibitor-induced growth inhibition was more prominent in cells treated with 200 μM concentration than in cells treated with 100 μM concentration of NS-398, indicating the both concentration-dependent and time-dependent inhibition of hepatoma cells. COX-2 Inhibitor Induced DNA Fragmentation and Cell Cycle Redistribution of Hepatoma Cells Regardless of concentrations of this compound applied in the present study, genomic DNA of Hep3B cells was fragmented as ladder-pattern at 48 h after the treatment of NS-398, which represented an apoptosis induced by this selective COX-2 inhibitor, but no definite DNA ladder was found after DMSO treatment (Figure 2).Flow cytometric shifting to sub-G1 phase, indicating an apoptotic redistribution of cell cycle, was gradually intensified from 6 to 48 h after the exposure of NS-398, while it was not observed in control part of DMSO treatment (Figure 3(a)).Namely, sub-G1 fraction of NS-398-exposed cells increased from 3.0% ± 2.1% at 6 h to 7.0% ± 3.8% at 48 h (100 μM NS-398) and from 3.0% ± 1.9% at 6 h to 18.0% ± 6.4% at 48 h (200 μM NS-398) (Figure 3(b)).This cytometric redistribution was more conspicuous in 200 μM NS-398-treated cells than in cells treated with 100 μM concentration. COX-2 Inhibitor Induced Caspase-Dependent, Mitochondria-Mediated Apoptosis The signaling pathway in HCC cells. 
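Given the reported means and SDs of the sub-G1 fractions at 48 h, the concentration effect can be examined with a two-sample t-test computed directly from summary statistics; the sketch below assumes n = 3 per group on the basis of the "at least three independent experiments" stated in the statistical methods.

```python
# Sketch: two-sample t-test on the 48 h sub-G1 fractions from the reported summary
# statistics (mean +/- SD); n = 3 per group is an assumption based on the Methods.
from scipy import stats

# 48 h sub-G1 fractions reported in the text (% of cells, mean +/- SD)
mean_100, sd_100, n_100 = 7.0, 3.8, 3      # 100 uM NS-398
mean_200, sd_200, n_200 = 18.0, 6.4, 3     # 200 uM NS-398

t_stat, p = stats.ttest_ind_from_stats(mean_100, sd_100, n_100,
                                       mean_200, sd_200, n_200)
print(f"100 uM vs 200 uM at 48 h: t = {t_stat:.2f}, P = {p:.3f}")
```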
DISCUSSION The greater parts of HCC are usually beyond the therapeutic indications of locoregionally curative measures, allowing many clinical researchers to keep attempting to excavate the therapeutic targets with its corresponding therapeutic agents in clinical studies.Although HCC has appeared to be chemoresistant in the response rate and to show no or minimal survival benefit in meta-analysis for the results of randomized controlled trials of systemic chemotherapy [24], extensive efforts for further improvement of clinical outcome in this liver cancer are ongoing under intensive investigations. In malignant tumors, COX-2 is one of the therapeutic targets, which has been comprehensively studied around the world.Up to date, there have been several preclinical studies in vitro that up-regulation of COX-2 was known to reduce the rate of apoptosis, to promote angiogenesis and to increase the invasiveness of tumor cells [25][26][27][28].Furthermore, selective COX-2 inhibition was reported to elicit an antineoplastic effect on HCC cells, to prevent the resistance to apoptosis as well as to suppress the growth of human HCC implants in vivo study using a nude mice [12,13,18,29].A series of epidemiologic studies have revealed that non-steroidal anti-inflammatory drugs and aspirin could reduce the relative risk of death by colon cancer [30,31].A couple of COX-2 selective drugs is also known to have the therapeutic potential to decrease the number and size of colonic polyps in patients with familial adenomatous polyposis [32][33][34], resulting in the advent of celecoxib approved by US Food and Drug Administration for chemoadjuvant therapy in these patients.However, the exact mechanisms responsible for explaining these growth-inhibitory effects of selective COX-2 inhibitor are not clear even to this time.Based on our knowledges, together with experimental and clinical evidences mentioned above, that COX-2 might play a pivotal role in tumorigenesis and overexpression of COX-2 has been observed in a number of tumor tissues, including colorectal cancer [9], pancreatic cancer [35], gastric cancer [27], esophageal cancer [36] and hepatocellular carcinoma [37], we conducted the present study that was designed for the clarification of inhibitory mechanisms and chemotherapeutic impact of NS-398, a selective COX-2 inhibitor, on the growth of hepatoma cells. 
Our results showed that NS-398 clearly suppressed the growth of Hep3B HCC cells in both a concentration-dependent and time-dependent manner, resulting in a decreased tumor cell number. Besides the decrease in cell number, the cells became microscopically elongated, although the significance of this morphological change remains undefined. This growth-inhibitory effect of NS-398 on hepatoma cells was verified to result from the induction of apoptosis, as indicated by a distinct ladder-patterned fragmentation of genomic DNA and a significant redistribution of the cell cycle toward the sub-G1 phase following NS-398 treatment. The strength of apoptosis-mediated growth inhibition of hepatoma cells was proportional to the concentration of the COX-2 inhibitor, which suggests dose-dependent growth suppression of tumor cells. Not only NS-398 but also other COX-2 inhibitors, such as nimesulide [13], CAY 10404 [13,17,38], celecoxib [16,18], 2,5-dimethyl-celecoxib [39], meloxicam [12], JTE-522 [40], sulindac [12], and indomethacin [13], have been introduced into preclinical and clinical investigations designed to identify their growth-inhibitory mechanisms in tumors. Until now, the regulatory mechanisms of COX-2 inhibitors on tumor cell growth have appeared to depend on various conditions, namely the specific COX-2 inhibitor compound, the selectivity of COX-2 inhibition, the type of tumor cell, whether the tumor cells express COX-2, and the potency of the COX-2 inhibitor against COX-2 activity. In particular, the pattern of NS-398-induced growth suppression is thought to vary with the type of tumor cell: COX-2-expressing tumor cell lines such as GKCI-4 and Hep3B versus COX-2-nonexpressing cell lines such as HepG2 and PLC/PRE/5 [41]. The antitumor effects of the COX-2 inhibitor NS-398 in HCC cells have been reported to occur through inhibitory signals involving apoptosis, necrosis, or cell cycle arrest [19,[40][41][42][43]; however, most COX-2 inhibitors are generally accepted to activate apoptosis pathways that proceed through death receptor-mediated signaling, mitochondria-mediated signaling, or both. Several death receptors, such as CD95 (Fas receptor), tumor necrosis factor (TNF)-R, TNF-related apoptosis-inducing ligand (TRAIL)-R1, and TRAIL-R2, are known to be triggered by COX-2 inhibition [17,18]. Caspase 3, one of the downstream proteases that execute apoptotic programmed death in any type of insulted cell, is the final common pathway for the propagation of both extrinsic/death receptor-mediated and intrinsic/mitochondria-mediated apoptosis signals [19]. Our study revealed that the enzymatic activation of caspase 3 increased up to 72 h following treatment of Hep3B cells with NS-398. This finding was accompanied by a gradual accumulation of the cytoplasmic fraction of cytochrome c up to 72 h after NS-398 treatment at both 100 and 200 μM, suggesting a gradual release of cytochrome c from mitochondria. Both activation of the cleaved form of a downstream caspase and cytoplasmic translocation of cytochrome c are hallmarks of the intrinsic apoptosis signaling pathway.
CONCLUSION NS-398, a selective COX-2 inhibitor, induced growth inhibition of hepatoma cells through caspase-dependent, mitochondria-mediated intrinsic apoptosis, providing insight into the anti-neoplastic effects of selective COX-2 inhibitors as potential novel therapeutic agents for hepatocellular carcinoma. In the near future, selective COX-2 inhibitors could be evaluated for clinical application in the management of HCC.
Figure 1. Microscopic morphology of NS-398-treated cells. NS-398 induced a progressive decrease in cell number from 24 h to 48 and 72 h after treatment, indicating growth inhibition of hepatoma cells in both a concentration-dependent and time-dependent manner.
Figure 4. Caspase activity after treatment with NS-398. (a) The activity of caspase 3, a downstream caspase of apoptosis, was evaluated by Western blot analysis; (b) The expression of the cleaved form (19 kDa) of caspase 3 gradually increased from 24 h to 72 h after treatment of Hep3B cells with both 100 and 200 μM NS-398, in a time-dependent manner, indicating that NS-398 engages a caspase-dependent apoptosis signaling pathway in HCC cells. * P < 0.05, compared with the relative expression of activated caspase 3 at 24 h after NS-398 treatment.
Figure 5. Localization of cytochrome c after treatment with NS-398. (a) Cytochrome c was evaluated by Western blot analysis; (b) The cytosolic fraction of cytochrome c (14 kDa) gradually increased from 24 h to 72 h, in contrast to the mitochondrial fraction, following treatment with both 100 and 200 μM NS-398, indicating cytoplasmic release of cytochrome c from mitochondria and engagement of a mitochondria-mediated intrinsic apoptosis signaling pathway in HCC cells. * P < 0.05, compared with the relative expression of the cytosolic fraction of cytochrome c at 24 h after NS-398 treatment.
2017-06-22T04:18:11.394Z
2012-10-29T00:00:00.000
{ "year": 2012, "sha1": "f89ba99da3696105223610cdc4d29eea42d620a1", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=23618", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "f89ba99da3696105223610cdc4d29eea42d620a1", "s2fieldsofstudy": [ "Biology", "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Biology" ] }
265175712
pes2o/s2orc
v3-fos-license
IFE-Net: An Integrated Feature Extraction Network for Single-Image Dehazing: In recent years, numerous single-image dehazing algorithms have made significant progress; however, dehazing still presents a challenge, particularly in complex real-world scenarios. In fact, single-image dehazing is an inherently ill-posed problem, as scene transmission relies on unknown and nonhomogeneous depth information. This study proposes a novel end-to-end single-image dehazing method called the Integrated Feature Extraction Network (IFE-Net). Instead of estimating the transmission matrix and atmospheric light separately, IFE-Net directly generates the clean image using a lightweight CNN. During the dehazing process, texture details are often lost. To address this issue, an attention mechanism module is introduced in IFE-Net to handle different types of information appropriately. Additionally, a new nonlinear activation function is proposed in IFE-Net, known as the bilateral constrained rectifier linear unit (BCReLU). Extensive experiments were conducted to evaluate the performance of IFE-Net. The results demonstrate that IFE-Net outperforms other single-image haze removal algorithms in terms of both PSNR and SSIM. On the SOTS dataset, IFE-Net achieves a PSNR value of 24.63 and an SSIM value of 0.905. On the ITS dataset, the PSNR value is 25.62, and the SSIM value reaches 0.925. The quantitative results on synthesized images are either superior to or comparable with those obtained by other advanced algorithms. Moreover, IFE-Net also exhibits significant subjective visual quality advantages. Introduction Obtaining a clear and haze-free image is crucial in photography and computer vision applications. Because the atmosphere contains large amounts of dust, smoke, mist, or other floating particles, images captured in such environments often suffer significant quality degradation. These degradations, in turn, may have a negative impact on the performance of many computer vision systems [1][2][3][4], such as detection, tracking, and classification. Therefore, restoring clean images from damaged inputs through image dehazing is extremely important in the field of computer vision. To overcome the quality issues caused by haze in captured images, the atmospheric scattering model [5][6][7] has been proposed to restore clean images; it can be formally written as follows:

I(x) = J(x)t(x) + α(1 − t(x)),  (1)

where I(x) is the observed hazy image, J(x) is the true scene radiance, α is the global atmospheric light, t(x) is the medium transmission map, and x is the pixel index in the observed hazy image I.
Furthermore, the medium transmission map can be expressed as follows:

t(x) = e^(−βd(x)),  (2)

where d(x) is the distance from the scene point to the camera, and β represents the attenuation coefficient of the atmosphere. From Equation (1), it can be seen that the dehazing process requires accurate estimation of the transmission map and the atmospheric light. A small portion of the research focuses mainly on estimating atmospheric light [8][9][10][11][12], but the accuracy of the estimated atmospheric light directly affects the dehazing result, and large errors degrade the dehazing performance. Other algorithms focus more on accurately estimating transmission maps, and transmission map estimation mainly falls into two categories: prior-based methods [13,14] and learning-based methods [15,16]. In order to compensate for information loss during image processing, some methods use different priors to obtain the atmospheric light and transmission map. For example, Berman et al. [17] proposed a non-local prior-based dehazing algorithm based on the assumption that the colors of a clean image are well approximated by a few hundred distinct colors. The color attenuation prior (CAP) [18] estimates scene depth based on the difference between the brightness and the saturation of hazy images. The image priors used by prior-based algorithms can easily be inconsistent with practice, which may lead to incorrect transmission approximations. Learning-based methods are effective and superior to prior-based algorithms, exhibiting significant performance improvements. In [19], two subnetworks were designed to estimate the transmission map and atmospheric light, respectively. In [20], the authors created three different images from the hazy image and fused the results of the three images after dehazing. However, deep learning-based methods require training on a large number of real hazy images and their corresponding haze-free images. The methods that estimate atmospheric light and transmission maps separately have made significant progress, but both have limitations. On the one hand, inaccurate estimation of transmission maps may lead to low image quality; on the other hand, separate estimation of atmospheric light and transmission maps makes it difficult to capture the inherent relationship between them. In order to find the intrinsic relationship between the parameters of Equation (1), Boyi Li et al. [21] first proposed a dehazing model that does not estimate α and t(x) separately. This model directly generates clean images from hazy images, rather than relying on any separate intermediate parameter estimation steps. Recently, many methods have used end-to-end learning instead of atmospheric scattering models to directly obtain clean images from networks [22][23][24][25]. Another widely used approach predicts the residual of the potential haze-free image, or the haze-free image itself, relative to the hazy image, as this often achieves better performance [26][27][28][29][30]. Although these recent dehazing methods have made significant progress, due to the complex haze distribution and the difficulty in collecting image pairs during the training process, it is easy to lose image details during the dehazing process when using a limited dataset.
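As a concrete illustration of the degradation model in Equations (1) and (2), which is also the standard way synthetic training pairs are generated, the short sketch below applies the model to a clean image and a depth map. The function name and parameter values are illustrative assumptions, not part of IFE-Net.

```python
import numpy as np

def synthesize_haze(clean, depth, beta=1.0, alpha=0.9):
    """Apply the atmospheric scattering model of Eqs. (1)-(2).

    clean : H x W x 3 array in [0, 1], the scene radiance J(x)
    depth : H x W array, scene depth d(x)
    beta  : atmospheric attenuation coefficient
    alpha : global atmospheric light (assumed spatially constant here)
    """
    t = np.exp(-beta * depth)             # Eq. (2): t(x) = exp(-beta * d(x))
    t = t[..., None]                      # broadcast over the color channels
    hazy = clean * t + alpha * (1.0 - t)  # Eq. (1): I(x) = J(x)t(x) + alpha(1 - t(x))
    return np.clip(hazy, 0.0, 1.0)

# Example: a toy 4x4 scene with linearly increasing depth.
J = np.random.rand(4, 4, 3)
d = np.linspace(0.5, 3.0, 16).reshape(4, 4)
I = synthesize_haze(J, d, beta=1.0, alpha=0.9)
```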
Due to the difficulty in collecting image pairs during the training process, IFE-Net uses an end-to-end model, adaptively learns network features, and adopts multiscale feature extraction to better extract information. In addition, parallel convolutional layers of different sizes are used to extract features from the input image at different scales [31,32]. This feature extraction structure is conducive to preserving more information and reducing the loss of image details. Considering the potential cumulative error caused by the separate estimation of atmospheric light and the transmission map, IFE-Net unifies the atmospheric light and transmission map as one parameter to directly obtain a clean image. In addition, attention mechanisms have been widely applied in the design of neural networks [19,[33][34][35][36], as they can provide additional flexibility in the network. Inspired by these works and considering that features in different regions carry different weights, a feature attention module called the attention mechanism (AM) is designed in the network, which processes different types of information more effectively. In deep learning networks, the activation function is a nonlinear function that enables neural networks to learn and represent complex patterns and relationships. The selection of the final activation function has a significant impact on the output of the model, as different activation functions have different characteristics and applicable scenarios. We consider that the output of the last layer of a dehazed image should have upper and lower bounds. In IFE-Net, we designed a new activation function called the bilateral constrained rectifier linear unit (BCReLU). The specific details of BCReLU and its comparison with other activation functions in the network are detailed in Section 3.2.3. The main contributions are as follows:
1. IFE-Net directly produces the clean image from a hazy image, rather than estimating the transmission map and atmospheric light separately. All parameters of IFE-Net are estimated in a unified model.
2. We propose a novel attention mechanism (AM) module, which consists of a channel attention mechanism, a pixel attention mechanism, and texture attention. This module weights information differently for different features and focuses more on strong features in areas with thick haze.
3. A bilateral constrained rectifier linear unit (BCReLU) is proposed in IFE-Net. To the best of our knowledge, BCReLU has not been proposed before. Its significance for image restoration is demonstrated through experiments.
4. The experiments show that IFE-Net performs well both qualitatively and quantitatively. The extensive experimental results also illustrate the effectiveness of IFE-Net.
Related Work Recently, single-image dehazing has attracted widespread attention in the field of computer vision. Due to the unknown global atmospheric light and transmission map, single-image dehazing is an inherently ill-posed problem. Many different methods have been proposed to address the issue. These methods can be roughly divided into prior-based and learning-based methods. The main difference between the two is that prior-based methods mainly utilize prior statistical knowledge and hand-crafted features to process hazy images, while learning-based methods can automatically learn from the training set through a neural network and store what is learned in the network's weights.
Single-image dehazing methods that extensively utilize prior knowledge have emerged. The patch-based dark channel prior (DCP) [11] method proposed by He et al. is one of the representative prior methods. Based on the assumption that, in haze-free outdoor images, at least one color channel has extremely low intensity in most local patches, DCP uses the atmospheric scattering model for haze removal. Pixel-based dehazing methods [37,38] provide another way to estimate the transmission map; however, they may suffer from insufficient information and fail to estimate the transmission map reliably. In addition, a method that establishes a linear model based on a local image prior was proposed by Zhu et al. [18] to restore depth information. Although prior-based methods have achieved good results, the validity of a prior is conditional. These hand-crafted priors are only applicable to specific cases and may not hold in changing scenarios. The human brain is able to quickly distinguish hazy regions in natural images without other information, and this has inspired the application of convolutional neural networks to image dehazing. These learning-based methods demonstrate extremely strong capabilities in dehazing. For example, Cai et al. [31] proposed Dehaze-Net, a trainable end-to-end network consisting of four parts: feature extraction, multiscale mapping, local extremum, and nonlinear regression. It estimates the transmission map, and the estimated transmission map is then used to recover a clean image through the atmospheric scattering model. Ren et al. [39] further proposed a multiscale convolutional neural network (MSCNN) for estimating scene transmission maps. Qin et al. [36] proposed an end-to-end feature fusion attention network (FFA-Net) to directly recover clean images, taking into account differently weighted information. Due to the difficulty of obtaining paired clean and hazy images in nature, Li et al. [40] studied image dehazing without training on real ground-truth clean image sets. These learning-based methods have achieved good performance and are now more widely used in image dehazing. The Proposed Method In this section, we first introduce the transformed atmospheric scattering model. Then, a detailed introduction to the specific structure of the proposed IFE-Net is provided. The Transformed Atmospheric Scattering Model We can rewrite Equation (1) with the clean image as the output:

J(x) = (1/t(x))I(x) − α/t(x) + α.  (3)

Existing works such as [19,31,41] usually utilize empirical rules to estimate α and deep learning models to estimate t(x). Estimating α and t(x) separately will lead to certain errors. The output clean image obtained by combining α and t(x) may therefore have a greater cumulative error. The transformed atmospheric scattering model was proposed [21] to reduce the cumulative error caused by separate estimation. The two parameters, α and t(x), are unified into one formula to avoid the potential cumulative error caused by estimating them separately. Model (3) can be rewritten as follows:

J(x) = D(x)I(x) − D(x) + 1,  (4)

where

D(x) = ((I(x) − α)/t(x) + (α − 1)) / (I(x) − 1).  (5)

It is worth noting that in Equation (5), α and t(x) together form a new variable D(x). The clean image can be obtained by estimating D(x). The unified variable D(x) can effectively reduce the cumulative error caused by estimating α and t(x) separately.
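A hedged sketch of this reformulation: assuming the AOD-Net-style equations above (with the bias folded in as the constant 1), a predicted D(x) map can be turned into a dehazed image as follows. The helper names are placeholders for illustration; in IFE-Net, D(x) would come from the CNN rather than from known α and t(x).

```python
import numpy as np

def recover_clean(hazy, D):
    """Recover J(x) from I(x) and the unified variable D(x) via Eq. (4):
    J(x) = D(x) * I(x) - D(x) + 1."""
    return np.clip(D * hazy - D + 1.0, 0.0, 1.0)

def unified_variable(hazy, t, alpha, eps=1e-6):
    """Eq. (5), built here from known alpha and t(x) purely for illustration."""
    return ((hazy - alpha) / t + (alpha - 1.0)) / (hazy - 1.0 + eps)

# Toy example: D(x) derived from assumed alpha and t(x), then inverted.
I = np.random.rand(4, 4, 3) * 0.8   # toy hazy input in [0, 0.8)
t = np.full((4, 4, 3), 0.6)         # toy transmission
D = unified_variable(I, t, alpha=0.9)
J = recover_clean(I, D)
```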
Network Design The architecture of the proposed IFE-Net contains three essential parts: (i) a multiscale feature block that fuses filters of different sizes and concatenates their outputs; (ii) an attention mechanism composed of channel attention, pixel attention, and texture attention; and (iii) a bilateral constrained rectifier linear unit (BCReLU). As illustrated in Figure 1, the input image is first passed to the multiscale feature extraction block to produce multiscale features. Next, we process the multiscale features using the attention mechanism block. The combination of the multiscale feature block and the attention mechanism module forms the D(x) estimation block. Finally, we employ BCReLU to perform nonlinear regression on D(x), thus obtaining the clean image. Multiscale Feature Extraction Multiscale feature extraction is very effective in the field of dehazing, maintaining scale invariance while extracting information [42][43][44][45]. Moreover, parallel convolutions with different filter sizes are used to capture features at different scales. To compensate for information loss during convolution, we connect network features of different scales to each other before extracting information from the next feature layer. Inspired by these feature extraction methods, we use convolutional layers of different sizes to densely extract features of the input image at different scales. As depicted in Figure 2, we use five convolutional operations in the multiscale feature extraction block of IFE-Net, with filter sizes of 1 × 1, 3 × 3, 5 × 5, 7 × 7, and 9 × 9. "Conv1" uses a 1 × 1 convolution kernel to extract features, while "Conv2" uses a 3 × 3 convolution kernel; the "Conv1" and "Conv2" layers are then concatenated into the "concat1" layer. During forward propagation, a 5 × 5 convolution kernel is used to extract features from the "concat1" layer to obtain the "Conv3" layer. The "Conv2" layer and the "Conv3" layer are concatenated into the "concat2" layer, and a 7 × 7 convolution kernel is used to extract features from the "concat2" layer to obtain the "Conv4" layer. Then, the "Conv3" layer and the "Conv4" layer are concatenated into the "concat3" layer, and a 9 × 9 convolution kernel is used to extract features from the "concat3" layer to obtain the "Conv5" layer. Finally, the "Conv1", "Conv2", "Conv3", "Conv4", and "Conv5" layers are concatenated to obtain the output of the multiscale feature extraction block. Importantly, the multiscale design of IFE-Net reduces information loss during convolutions and captures features at different scales. Attention Mechanism Most previous networks have treated channel and pixel features equally during image dehazing, resulting in unsatisfactory results after dehazing. Likewise, for images with an uneven haze distribution, such networks cannot achieve good results. In order to better handle different parts of the information, we designed a novel attention mechanism module, as shown in Figure 3.
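Before turning to the attention mechanism in detail, the multiscale block described above maps naturally onto a small PyTorch module. The sketch below follows the stated wiring (Conv1 through Conv5 with 1×1 to 9×9 kernels and the three intermediate concatenations); the per-branch channel count and the intermediate activation are not specified in the text and are assumptions here.

```python
import torch
import torch.nn as nn

class MultiScaleFeatureBlock(nn.Module):
    """Sketch of the multiscale feature extraction block described in the text.
    The branch width `ch` and the ReLU between layers are assumed values."""
    def __init__(self, in_ch=3, ch=3):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, ch, 1, padding=0)    # Conv1: 1x1 on the input
        self.conv2 = nn.Conv2d(in_ch, ch, 3, padding=1)    # Conv2: 3x3 on the input
        self.conv3 = nn.Conv2d(2 * ch, ch, 5, padding=2)   # Conv3: 5x5 on concat1
        self.conv4 = nn.Conv2d(2 * ch, ch, 7, padding=3)   # Conv4: 7x7 on concat2
        self.conv5 = nn.Conv2d(2 * ch, ch, 9, padding=4)   # Conv5: 9x9 on concat3
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        c1 = self.act(self.conv1(x))
        c2 = self.act(self.conv2(x))
        concat1 = torch.cat([c1, c2], dim=1)
        c3 = self.act(self.conv3(concat1))
        concat2 = torch.cat([c2, c3], dim=1)
        c4 = self.act(self.conv4(concat2))
        concat3 = torch.cat([c3, c4], dim=1)
        c5 = self.act(self.conv5(concat3))
        return torch.cat([c1, c2, c3, c4, c5], dim=1)  # fused multiscale features

# Example usage on a dummy RGB image tensor.
features = MultiScaleFeatureBlock()(torch.randn(1, 3, 64, 64))
```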
Compared to networks that treat channel and pixel features equally, the attention mechanism module of IFE-Net assigns different weights to different regions based on the importance of their features. The more information a feature contains, the greater its weight. The attention mechanism in IFE-Net therefore focuses on learning important, highly weighted information. The channel attention mechanism, pixel attention mechanism, and texture attention of the attention mechanism module can be expressed separately as follows: where y is the output of the multiscale feature extraction block, which serves as the input of the attention mechanism module; Conv6(y) and Conv7(y) denote 1 × 1 convolution layers; Conv8(y) and Conv9(y) denote 3 × 3 convolution layers; pool represents the 5 × 5 channel max-pooling operation; F1(y) denotes the output of the channel attention mechanism; F2(y) denotes the output of the pixel attention mechanism; and F3(y) denotes the output of the texture attention. The attention mechanism block effectively assigns different weights to the features of different regions, enabling the entire network architecture to better retain effective information while suppressing the influence of unimportant information. Bilateral Constrained Rectifier Linear Unit Common choices for the nonlinear activation function in deep networks include sigmoid [46], tanh [47], and the rectified linear unit (ReLU) [48]. Sigmoid saturates at both ends, has a relatively high computational cost, and easily suffers from vanishing gradients, which may trap training in poor local optima. Compared to sigmoid, tanh has an output mean of 0, which leads to faster convergence and fewer iterations. However, tanh, like sigmoid, has soft saturation, resulting in vanishing gradients. ReLU was proposed to alleviate the vanishing gradient problem of neural networks to a certain extent and to accelerate the convergence of gradient descent. It is worth noting, however, that ReLU only suppresses negative inputs and is unbounded for positive inputs, which may lead to response overflow, especially in the final layer. For image restoration, the output of the last layer should have upper and lower bounds, and its range of values should be relatively small. To this end, we propose the bilateral constrained rectifier linear unit (BCReLU) activation function to overcome the limitations of sigmoid and ReLU, as shown in Figure 4. As a novel linear unit, BCReLU maintains bilateral constraints and local linearity. Its output is centered around zero, making neurons in subsequent layers less prone to bias shift and neuron death. In addition, BCReLU saves computational time and converges faster than other activation functions, which helps counter the gradient attenuation phenomenon as the number of layers increases. The marginal values of BCReLU are y_max and y_min (y_max = 1 and y_min = −1). BCReLU can be expressed as

BCReLU(x) = min(y_max, max(y_min, x)).

We compared the activation functions of the last layer in the network. Table 1 shows the quantitative evaluation results of different activation functions in the last layer on the SOTS and ITS datasets (see Section 4.2 for details of the PSNR and SSIM indicators). When using BCReLU in the last layer, the network achieved the best results, which confirms its effectiveness in IFE-Net.
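A minimal sketch of BCReLU as reconstructed above, i.e., a ReLU-like unit clipped to [y_min, y_max] = [−1, 1]. This follows the description in the text (bilateral constraint, local linearity, zero-centered output); the exact functional form is an assumption rather than code from the paper.

```python
import torch
import torch.nn as nn

class BCReLU(nn.Module):
    """Bilateral constrained rectifier linear unit: identity inside
    [y_min, y_max], clipped outside (assumed form, y_min = -1, y_max = 1)."""
    def __init__(self, y_min=-1.0, y_max=1.0):
        super().__init__()
        self.y_min, self.y_max = y_min, y_max

    def forward(self, x):
        return torch.clamp(x, self.y_min, self.y_max)

# Example: values outside [-1, 1] saturate, values inside pass through unchanged.
print(BCReLU()(torch.tensor([-2.0, -0.5, 0.3, 1.7])))  # -> [-1.0, -0.5, 0.3, 1.0]
```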
Experiments To verify the superiority of IFE-Net, the dehazing results of IFE-Net were qualitatively and quantitatively compared with those of existing advanced dehazing methods using real-world images and benchmark datasets. Datasets and Implementation Details We chose the ground-truth images with depth metadata from the indoor NYU2 Depth Database [49]. Over 1440 clean images were selected from the NYU2 database and used to create synthesized hazy images using Equation (1). We chose β ∈ {0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6}, and each channel was set with a different atmospheric light α in the range [0.6, 1.0]. The synthesized training set includes 27,193 hazy images, and the learning rate is set to 0.0001 during the training process. During the experiments, we adopted the simple mean square error (MSE) loss function. Moreover, we utilized the BCReLU neuron in the last convolutional layer, as we found it more effective than other neurons in our setting. IFE-Net needs only a few epochs to converge and exhibits stability after approximately 10 epochs. In this study, we save the model parameters after 10 epochs of training for dehazing. We note that an appropriately large batch size can yield good performance in the batch normalization layer [50]. Due to limited physical memory on the GPU cards, the batch size is set to 16 during training. All experiments are performed on an NVIDIA RTX 3060 Ti GPU (NVIDIA, Santa Clara, CA, USA). Quantitative Results on Synthetic Images We adopt the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) [51] as image quality indicators for quantitative analysis. PSNR is generally used to measure image quality as the ratio between the maximum signal and the background noise; the larger the value, the lower the image distortion. PSNR can be expressed as

PSNR = 10 · log10(MaxValue^2 / MSE),

where MSE is the mean square error between the two images, and MaxValue is the maximum value an image pixel can take. SSIM is an indicator that measures the similarity between two images. From the perspective of image composition, SSIM defines structural information as independent of brightness and contrast, reflecting the properties of object structures in the scene. SSIM models distortion as a combination of three different factors: brightness, contrast, and structure. It uses the mean as the estimate of brightness, the standard deviation as the estimate of contrast, and the covariance as the measure of structural similarity:

SSIM(x, y) = ((2·u_x·u_y + C_1)(2·σ_xy + C_2)) / ((u_x^2 + u_y^2 + C_1)(σ_x^2 + σ_y^2 + C_2)),

where C_1 = (K_1·L)^2 and C_2 = (K_2·L)^2 are used to avoid situations where the denominator is 0; L is equivalent to MaxValue in PSNR; K_1 and K_2 are very small constants; u_x and u_y are the means; σ_x^2 and σ_y^2 are the variances; and σ_xy is the covariance. Compared to PSNR, SSIM is more in line with human visual characteristics in evaluating image quality.
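A small sketch of how these two metrics can be computed for a pair of images. It follows the standard definitions above, using global statistics for SSIM rather than the usual sliding-window variant, which is a simplification; it is not code from the paper.

```python
import numpy as np

def psnr(ref, test, max_value=1.0):
    """Peak signal-to-noise ratio: 10 * log10(MaxValue^2 / MSE)."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(max_value ** 2 / mse)

def ssim_global(ref, test, max_value=1.0, k1=0.01, k2=0.03):
    """Single-window SSIM over the whole image (no sliding window)."""
    c1, c2 = (k1 * max_value) ** 2, (k2 * max_value) ** 2
    mu_x, mu_y = ref.mean(), test.mean()
    var_x, var_y = ref.var(), test.var()
    cov_xy = ((ref - mu_x) * (test - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# Example on a toy pair: a clean image and a slightly noisy copy of it.
clean = np.random.rand(32, 32)
noisy = np.clip(clean + 0.05 * np.random.randn(32, 32), 0.0, 1.0)
print(psnr(clean, noisy), ssim_global(clean, noisy))
```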
High PSNR and SSIM scores indicate low image distortion and a more similar structure.We compare IFE-Net with the powerful methods in recent years based on PSNR and SSIM indicators.DCP [11] does not require precise physical modeling of haze in images but only relies on the prior principle of dark channels to reliably calculate the transmission matrix for image dehazing.Dehaze-Net [31] is an end-to-end system that utilizes prior knowledge to obtain atmospheric light, only learns the medium transmission map through the network, and ultimately obtains clean images.AOD-Net [21] is the first end-to-end trainable dehazing model, which does not separately estimate the parameters in Equation (1) but rather unifies all parameters into one and directly obtains a clean image from the hazy image.FFA-Net [36] has a feature fusion attention mechanism, and the design of the network allows it to perform well with dense hazes, textures, and details.GCA-Net [52] applies gated subnetworks and smooth extended convolutions, which is beneficial for fusing features of different scales and removing possible grid artifacts.DWGAN [53] introduces 2D discrete wavelet transform, aiming at restoring clear texture details and retaining sufficient high-frequency information.GUNet [54] significantly reduces overhead while effectively removing haze.The images in the RESIDE dataset [55] were selected for experimental evaluation of our method. Figure 5 shows the dehazing results of some randomly selected synthetic images from the SOTS datasets.DCP [11], Dehaze-Net [14], and DWGAN [53] successfully remove heavy haze, but they exhibit color distortion and increased brightness.There are also issues with brightness enhancement and contrast in the results generated via FFA-Net [36], GCA-Net [52], GUNet [54], and AOD-Net [25].IFE-Net handles details better and maintains color consistency with the ground truth.From the results, it can be observed that the results of IFE-Net are significantly better than other networks in terms of fidelity of image details and color.Table 2 shows the average quantitative results of the quality evaluation indicators in Figure 5, and the PSNR and SSIM values of IFE-Net are superior to the other methods.Tables 3 and 4 show the PSNR and SSIM results of our images after dehazing, respectively.Meanwhile, Table 5 shows the average time it takes for different networks to process each image with a size of 548 × 412.The results in Tables 2-5 indicate that IFE-Net is effective and efficient.In addition, we also removed haze from the hazy image of a large area of the sky and compared it with several other advanced methods.Most dehazing algorithms have poor dehazing effects on images containing large areas of sky, resulting in color distortion and uneven color blocks in the restored haze-free images.We show the results of several methods in Figure 8. Figure 8a shows the input hazy image, and Figure 8b-d show the dehazing results of GCA, GUN-Net, and IFE-Net, respectively.From Figure 8b, it can be observed that the results obtained show a thorough removal of haze on the ground, but uneven color blocks appear on the ground and also in the sky area.In Figure 8c, there are no issues with image color distortion, but the dehazing effect in the ground and sky areas is not significant.In Figure 8d, the haze in the sky is suppressed without significant color distortion blocks.Simultaneously, the dehazing effect in the ground area is significant, and the results obtained are good in terms of dehazing and details. 
Ablation Research Both IFE-Net and AOD-Net unify the atmospheric light and transmission map of the atmospheric scattering model into one parameter and directly obtain clean images. In order to evaluate the contribution of the AM module, we compared AOD-Net and IFE-Net with and without it. Table 6 shows the experimental results on the two datasets, indicating that adding the AM module resulted in better PSNR and SSIM values. Figure 9 shows a visual comparison of the images; networks without an AM module produce darker colors, while networks with an AM module achieve better visual effects. The quantitative and qualitative results of the ablation research demonstrate the effectiveness of the AM module in the networks. Conclusions We proposed a novel end-to-end adaptive enhancement dehazing network, called IFE-Net, to address the challenge of single-image dehazing. IFE-Net consists of a multiscale feature extraction block, an attention mechanism (AM) module, and a bilateral constrained rectifier linear unit (BCReLU). Considering the cumulative errors that may arise from estimating the atmospheric light and transmission map separately, IFE-Net estimates a single parameter that unifies both. Its novel network design performs feature extraction effectively. In addition, we designed an attention mechanism (AM) module to address the varying importance of information in different regions. The importance of BCReLU for image restoration was also demonstrated through experiments. We compared IFE-Net with other dehazing methods using PSNR and SSIM, and the results show that IFE-Net achieved good scores for both indicators. We also used subjective criteria to analyze the results obtained by different methods on natural hazy images. Our conclusion is that the proposed IFE-Net, which combines the feature extraction block, the attention mechanism, and the BCReLU activation function, is significantly effective in both natural and synthetic image dehazing. Although IFE-Net has a simple structure, it shows strong capabilities in haze removal. The experimental results confirm the superiority and efficiency of IFE-Net. At present, IFE-Net has achieved good results in dehazing, and a promising direction for future research is to apply it to image enhancement algorithms.
Figure 5. Quantitative comparison of IFE-Net with other methods on SOTS.
Figure 6 shows a comparison of the results between IFE-Net and other methods on real scenes. As shown in Figure 6, DCP and GCA suffered from visual artifacts on the real hazy images. AOD-Net, FFA-Net, and GCA produced unrealistic colors in one or several images, such as the results of AOD-Net and FFA-Net in the fourth row and the results of AOD-Net and GCA in the fifth row. Dehaze-Net and FFA-Net retained a thin layer of haze, as shown in the second row. However, IFE-Net achieved excellent results in both thin and thick haze areas while maintaining colors consistent with the real scenes. A similar result can be observed in the outdoor images shown in Figure 7. We enlarged the upper left corner of Figure 7a-g and h-n to show the enlarged results. The results of AOD-Net, FFA-Net, GCA, and GUNet exhibited color distortions and many unnatural characteristics. Additional white haze appeared in the FFA-Net result, indicating incomplete dehazing. The result of DWGAN contained too much white. GCA and DWGAN showed unclear outlines of buildings in hazy areas of the sky. In contrast, IFE-Net successfully removed almost all of the haze while preserving the essential properties of the images, with obvious advantages in preserving edges, texture, contrast, brightness, and other image characteristics, as shown in Figure 7.
Figure 7. Comparison of detail and color processing between IFE-Net and other networks.
Figure 8. Results of dehazing in sky areas with dense haze.
Figure 9. Comparison of the visual effects of the AM block.
Table 1. Quantitative results of quality evaluation indicators on the SOTS and ITS datasets using different activation functions in the final layer.
Table 2. The average quantitative results of the quality evaluation indicators in Figure 5.
Table 3. Quantitative results of quality evaluation indicators on the SOTS dataset.
Table 4. Quantitative results of quality evaluation indicators on the ITS dataset.
Table 5. The average time taken by different networks to process each image.
Table 6. Effectiveness of the AM module.
2023-11-15T16:02:25.942Z
2023-11-11T00:00:00.000
{ "year": 2023, "sha1": "9634ccf4e81e8c35c2a973763382c2ea779a45ed", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/13/22/12236/pdf?version=1699687307", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "38c3c9f5db6ccdbda2f7bafbfcbaac64283fca28", "s2fieldsofstudy": [ "Computer Science", "Engineering", "Environmental Science" ], "extfieldsofstudy": [] }
52030068
pes2o/s2orc
v3-fos-license
The effect of pelvic tilt and cam on hip range of motion in young elite skiers and nonathletes Background Current knowledge of how changes in posture and cam morphology of the hip joint may affect hip range of motion (ROM) is limited. Purpose To determine the effect of changes in pelvic tilt (PT) on hip ROM, with and without the presence of cam morphology. Study design This was a cross-sectional study. Materials and methods The hip ROM of 87 subjects (n=61 young elite skiers, n=26 nonathletes) was examined using a goniometer in three different seated postures (flexed, neutral, and extended). The hips of the subjects were further subgrouped into cam and no-cam morphology, based on the magnetic resonance imaging findings in the hips. Results There was a significant correlation between hip ROM and seated posture in both the extended and flexed postures compared with the neutral posture. There was a significant decrease in internal hip rotation when the subjects sat with an extended posture with maximum anterior PT (p<0.0001). There was a significant increase in internal hip rotation when the subjects sat with a flexed posture with maximum posterior PT (p<0.001). External rotation was significantly decreased in an extended posture with maximum anterior PT (p<0.0001), but there was no difference in a flexed posture with maximum posterior PT. The hips with cam morphology had reduced internal hip rotation in all three positions, but they responded to the changes in position in a similar manner to hips without cam morphology. Conclusion Dynamic changes in PT significantly influence hip ROM in young people, independent of cam or no-cam morphology. Introduction Femoroacetabular impingement syndrome (FAIS) is defined as a combination of symptoms, clinical signs, and imaging findings (abnormal morphology). 1 The abnormal morphology of FAIS can be divided into two categories, occurring alone or in combination: cam (femoral based) and pincer (acetabular based). [2][3][4][5] Cam morphology refers to a less spherical femoral head. A measure that quantifies this sphericity is the α-angle; the larger the α-angle, the less the sphericity, and in previous studies a threshold of >55° has been considered clinically relevant (Figure 1). [6][7][8][9] The mechanism of cam-type impingement is a collision between the abnormally formed femoral head/neck (cam) and the acetabular margin during hip flexion and internal rotation of the hip. 4 FAIS has been associated with reduced internal rotation of the hip in 90° of flexion, reduced passive hip flexion, and a positive impingement test. 3,[10][11][12] Hip range of motion (ROM) is affected by many parameters such as age, pain, degenerative changes, and hip morphology. [12][13][14] Skiing, both Mogul and Alpine, is a sport that exposes the body to great forces (high speed and G-forces). [15][16][17] The hips and spine act as important dampers for these forces and are placed in vulnerable positions in both flexion and extension. There is a constant shift in the motion of the hips, from an extended to an almost maximally flexed position. In Mogul skiing, acrobatic jumps also lead to high forces that affect the hips and spine on landing. Force transfer is dependent on adequate ROM, where the joints of adjacent segments interact and their positions affect each other.
Ross et al demonstrated in patients with FAIS, using three-dimensional models of the hips from computed tomographic scans, that an increase in anterior pelvic tilt (PT) resulted in a significant decrease in internal hip rotation and an increase in posterior PT resulted in a significant increase in internal hip rotation. 18 No previous studies, that we are aware of, describe how hip ROM is clinically affected by posture; this study therefore focuses on how hip ROM is affected by posture assessed by clinical measurements. The purpose of the present study is to 1) investigate how different postural positions and PT affect hip ROM, 2) investigate whether there is a difference in response to these dynamic changes between hips with a magnetic resonance imaging (MRI)-verified α-angle ≥55° (cam morphology) and hips with an α-angle <55° (no-cam morphology), and 3) validate the study method of MRI and goniometer examinations. Materials and methods All the students attending the Åre Ski Academy (grades 1-4, n=76), elite skiers between 16 and 20 years of age of both genders, were invited, both orally and in writing, to participate in this cross-sectional study. To recruit nonathletes, two of the authors visited several high schools and presented the project orally in class. The nonathletes were included in the study to cover different aspects of society. Written information was also handed out. The invited nonathletes were all first-year high-school pupils, of both genders, and lived in the same geographical area as the skiers. Seventy-five skiers and 27 nonathletes agreed to participate in the present study. The exclusion criteria for both groups were previously diagnosed hip, spine, or pelvic disease, anomalies, and previous surgery on the hips, spine, or pelvis ( Figure 2). The MRI examinations were performed at the Radiographic Department, Östersund Hospital, Sweden, and clinical testing was carried out at the Åre Ski Academy and at the Orthopedic Department, Östersund Hospital, Sweden. Participation was completely voluntary and the participants could withdraw at any time. Written informed consent was given by all the individuals and, for participants younger than 18 years, written informed consent was also obtained from their parents. The present study was approved by the Regional Ethical Review Board at Sahlgrenska Academy, Gothenburg University, Gothenburg, Sweden (ID number: 692-13). The subjects in the images in the present study are identifiable and therefore written informed consent for publishing the images was obtained prior to publication. MRI examination All subjects underwent MRI on both hips without contrast. The MRI equipment was a GE Optima 450 Wide 1.5T (GE Healthcare Bio-Sciences Corp, Piscataway, NJ, USA), at Östersund Hospital, Sweden. Cor T2 Fat Sat and Ax 3D Cube sequences were obtained at an angle on the femoral neck using a coil surface of HD 8 Channel Cardiac Array (GE Healthcare Bio-Sciences Corp). The α-angle was measured on the superior half of the femoral head. Seven measurements, from 9 o'clock to 3 o'clock (180°), were used to determine the morphological features of the femoral head-neck junction (Figures 3 and 4). 4 Measurement of the α-angle was performed according to Nötzli et al. 19 The α-angle was measured between the femoral neck axis and a line from the center of the femoral head to a point at which the contour of the femoral head-neck junction exceeds the radius of the femoral head. 
In the present study, cam morphology was considered to be present when the α-angle was ≥55°. 6-9
Figure 1. The α-angle is used to define the presence of cam morphology; in this study, a threshold of >55° has been considered relevant.
Reliability of α-angle measurements A resident radiologist, under the guidance of a senior consultant radiologist, measured the α-angle. The radiologists had no prior information about the subjects. The subjects' names and social security numbers were removed from the MRI examinations and replaced by a number. The images were measured according to a standardized protocol, including standardized assessments of the α-angle, as previously described. To test the interobserver reliability, MRI images from 10 randomly selected participants were reexamined by a consultant radiologist. Clinical examination All the examinations were performed by two of the authors (CA and ASA) following a standardized schedule to optimize the accuracy of the measurements. Both CA and ASA performed the intra-observer tests. Four months passed between the first and second examination. The subjects included in the intra-observer test confirmed that there were no clinically relevant changes in their health between the two examinations; otherwise, they would have been excluded. Interobserver tests were performed comparing CA and ASA; the tests were performed on the same day, and the examiners were blinded to each other's measurements. Both the intra- and interobserver tests included 10 of the skiers. The sitting position was selected because it made it possible to investigate, in a standardized manner, the relationship between the position of the pelvis and lumbar spine and hip ROM. To increase the reliability of the sitting examination, according to Reichenbach et al, a special chair was constructed to allow participants to sit freely with their legs hanging over the edge (Figure 5). 20 Sitting with both hips and knees placed at a 90° angle, the thighs were held in position by four wooden bolsters to prevent hip abduction/adduction translation. Due to the anatomical differences in thigh circumference distally, a 1-cm thick pad was placed under the distal femurs to ensure that the femurs were in a horizontal position. To standardize the sitting position, the subjects were instructed to focus on a point straight ahead on the wall and fold their arms across their chest, with their hands on opposite shoulders. Sitting hip joint internal and external rotation ROM testing A goniometer is widely used for evaluating patient ROM and has been used in previous studies of cam morphology in athletes. 7,21,22 In the present study, a digital goniometer (DG) was used (HALO Medical Devices, Australia). 23 Measurements of the internal and external rotation of the hip joints were made using the DG, calibrated, zeroed, and handheld, along a marked reference line along the tibial border (Figure 6). The reference line made it possible to align the goniometer laser beams during the measurement, to optimize accuracy. Measurements of the lumbar spinal sagittal position, using the Debrunner kyphometer (Protek AG, Bern, Switzerland), were carried out as described by Todd et al. [24][25][26] PT was measured clinically using the PALM palpation meter (Performance Attainment Associates, St Paul, MN, USA), as previously described by Todd et al and Azevedo et al.
26,27 The anterior superior iliac spine (ASIS) was palpated anteriorly to the most superior prominent protrusion of the iliac crest, and the posterior superior iliac spine (PSIS) was palpated posteriorly to the most prominent protrusion of the iliac crest. The caliper tips of the PALM palpation meter were placed on the ASIS and PSIS and firmly compressed as suggested by. When measuring the internal and external rotation of the hips, the lumbar spine position was reevaluated, using the kyphometer, before changing sides, to ensure the same lumbar position when measuring both hips. One examiner stabilized the thigh and pelvis on the examined side, and passive rotation (internal or external) was then performed, to the point of initial resistance, by the other examiner. The examiner stabilizing the thigh and pelvis also observed the initial movement of the pelvis, which matched the end point of hip rotation palpated by the other examiner. In this way, the accuracy of the hip rotation measurement was double-checked. The rotation was recorded in degrees. Internal and external rotation was measured in three different sitting postures, as follows. Neutral posture In the neutral posture, the subject was instructed to sit in an upright position, thus creating a vertical line from his/her shoulder to the hip. The lumbar spine position and the angle of PT were measured in degrees. Internal and external rotation of the hips was examined as described above and recorded in degrees (Figure 7). Extended posture The subjects were instructed to tilt their pelvis forward maximally and arch their lumbar spine in order to increase pelvic anterior tilt and lumbar lordosis. The lumbar spine position and the angle of PT were measured in degrees. In this position, passive internal and external hip rotation was measured in degrees. The participants were instructed to adopt a neutral posture between each test for a short rest, before measuring the other hip (Figure 8). Flexed posture (slump position) The participants were instructed to tilt their pelvis backwards maximally and flex their lumbar spine, essentially increasing pelvic posterior tilt and lumbar kyphosis. The lumbar spine position and angle of PT were measured as previously described. In this position, passive internal and external hip rotation was measured in degrees. The participants were instructed to adopt a neutral posture between each test for a short rest, and the slump position was remeasured before measuring the other hip (Figure 9). Statistical analysis Data were analyzed using IBM SPSS Statistics for Windows, version 24.0 (IBM Corporation, Armonk, NY, USA). Data were described in terms of the mean and standard deviation. The normal distribution of the data was tested with a Kolmogorov-Smirnov test. The intra- and inter-rater reliability of the measurements was determined with the intraclass correlation coefficient (ICC, 2,1) (two-way random model, absolute agreement, single measurements). To categorize the level of agreement between ICC values, we used the classification system proposed by Shrout and Fleiss. 29 ICC values <0.40 represent poor reliability, values between 0.4 and 0.75 represent fair to good reliability, and values >0.75 represent excellent reliability. The standard error of measurement (SEM), a reliability statistic that quantifies measurement error in the same units as the original measurement, was calculated as SEM = SD × √(1 − ICC), where SD is the standard deviation of the difference between observations.
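As a quick illustration of the reliability statistic defined above, the snippet below computes SEM from an ICC value and the standard deviation of the between-observation differences; the numbers are placeholders for illustration only, not data from this study.

```python
import math

def sem(sd_diff, icc):
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
    return sd_diff * math.sqrt(1.0 - icc)

# Placeholder values: an ICC of 0.75 and an assumed SD of 3.6 degrees
# give an SEM of 1.8 degrees.
print(round(sem(sd_diff=3.6, icc=0.75), 2))
```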
All the tests were two-sided, and significance was set at p<0.05 for each test. An independent two-sample t-test was used to compare hip ROM and pelvic and lumbar positions between the hips with cam morphology and those without. A dependent t-test for paired samples was used to compare hip ROM depending on the position of the pelvis and lumbar spine. Pearson's chi-square test was used to evaluate the distribution of cam morphology between the genders. Results The interobserver test (ICC) analysis for the α-angle indicated a good level of agreement (α-angle ICC 0.75 [SEM 1.8]). Table 1 presents the baseline characteristics of the entire study population. Seventy-five skiers, 35 females and 40 males, agreed to participate. Twenty-seven nonathletes, 18 females and 9 males, agreed to participate in the present study. Three subjects had to withdraw from the study due to the exclusion criteria. Failure to attend investigations meant that MRI data from only 89 participants and physical examination data from 87 participants were available for the final analysis (Figure 2). In the 87 participants, a total of 174 hips were analyzed. According to the MRI results, a total of 53 hips had cam morphology, with no difference in distribution between right and left. Thirty-five subjects had cam morphology, and 21% had bilateral cam. There was a significant difference (p=0.001) in the prevalence of cam morphology between females (22%) and males (61%) (Table 1). Compared with the neutral posture, there was a significant decrease in internal hip rotation when the subjects sat with an extended posture with maximum anterior PT (p<0.001). On the other hand, there was a significant increase in internal hip rotation when the subjects sat with a flexed posture with maximum posterior PT (p<0.001, Table 2). There was a significant decrease in external hip rotation when the subjects sat with an extended posture with maximum anterior PT, compared with the neutral posture (p<0.001), but no significant difference was observed between the neutral posture and the flexed posture with maximum posterior PT (Table 2). With an 11.5° increase in anterior tilt of the pelvis, there was a 10.8° decrease in internal hip rotation compared with internal rotation when sitting with a neutral posture. When the posterior tilt was increased by 10.5°, compared with a neutral posture, there was a 4.1° increase in internal hip rotation. No significant differences were found between the hips with or without cam when analyzing how the posture of the pelvis and lumbar spine affects hip ROM (Table 3). Hips with cam morphology had reduced internal hip rotation (but not external hip rotation) in all three positions, but they responded to the changes in position in a similar manner to hips without cam morphology. There was a significant correlation between the α-angle and internal rotation of the hip. Hips with large α-angles demonstrated significantly reduced internal rotation; that is, the larger the α-angle, the lower the internal rotation. This was found among both the skiers and the nonathletes. Discussion The most important finding of the present study is that there is a correlation between hip ROM and the position of the pelvis and the lumbar spine (flexed, neutral, or extended posture).
Hips with cam morphology had reduced internal hip rotation (but not external hip rotation) in all three positions, but they responded to the changes in position in a similar manner to hips without cam. Moreover, the study method displayed good to excellent reliability. Using three-dimensional models of the hip, Ross et al demonstrated that, in patients with FAIS, an increase of 10° in anterior PT resulted in a significant decrease in internal hip rotation in 90° of flexion. 18 They also demonstrated that a 10° increase in posterior PT resulted in a significant increase in internal hip rotation in 90° of flexion. This correlates well with the clinical findings in the present study. Moreover, the present study showed that there was no difference in the response to the position changes between hips with MRI-verified cam morphology (α-angle >55°) and those without. However, the hips with cam had reduced internal hip rotation in all three positions, ranging between 20.3°and 32.5°, compared with the hips without cam, ranging between 25.5° and 40.9°. Agnvall et al previously described this in detail in the same group of asymptomatic subjects. 12 Clinical relevance The results of this study contribute to the increasing knowledge on cam morphology in young athletes. The effect of the pelvic and lumbar position on hip ROM and vice versa are clinically relevant not only for preventing injuries on these anatomic areas among young athletes but also for a general understanding of the function of the hip and spine. It is of importance to identify a decreased hip ROM, caused by a cam morphology, as early as possible not only so that the athletes can be guided when training to prevent overload injuries on surrounding structures, but also to avoid damaging and painful collisions of the cam morphology with the acetabulum. Strengths and limitations The method included both athletes and nonathletes of both genders living in the same geographical area with a relatively large cohort size. A larger sample group might, however, have revealed greater differences between the hips with and without cam. The inclusion criteria in the present study selected only a healthy population; however, this may have limited the ability to distinguish greater differences in ROM in the presence of cam morphology, compared with a symptomatic group. Other limitations include accuracy and interpretation of the radiological measurements. Clinical examination is always dependent on the examiner, but we attempted to increase the accuracy by limiting the number of examiners to just two and using a standardized method. The study method was validated with good results for both the MRI and goniometer methods and this observation strengthens the results in the present study. It is believed that the development of cam morphology does not occur once the growth plate is closed and the skeleton is mature. 9,31 All the subjects in the present study had closed growth plates in the hip and were in this way comparable. The results of this present study indicated that cam morphology results in potential premature contact between the proximal femur and the acetabulum, thereby reducing hip ROM, but the hip joint is affected by changes in posture in the same way, independent of cam morphology. In a laboratory study of cadaveric human pelvises, Birmingham et al showed that, when a hip with cam morphology was internally rotated, the motion at the pubic symphysis increased significantly more compared with a hip without cam morphology. 
This implies that the loss of hip ROM imposes higher demands on surrounding structures, increasing the risk of overload injuries. For this group of young elite athletes, this may be highly relevant. In young male football players, Agricola et al showed that cam morphology develops gradually during growth, but, after growth plate closure, there is no significant increase in the prevalence of cam. 31 Baranto et al showed that the weakest part of the growing porcine lumbar spine, when compressed into flexion and extension, was the growth zone. 32 The ring apophysis might fuse to the vertebrae as late as at the age of 24-25 years, which is several years later than the development of cam morphology. 33,34 Therefore, it may be possible that a reduction in hip ROM, caused by cam morphology, forces the lumbar spine to increase kyphosis to meet the demands of elite skiing; perhaps this could increase the anterior load on the open ring apophysis, causing overload injuries and growth disturbances. Thoreson et al showed that elite Mogul skiers have significantly greater spinal radiological abnormalities compared with nonathletes. 35 Witwit et al showed that Alpine and Mogul skiers have significantly more degenerative disc changes compared with nonathletes. 36 The correlation between back problems and hip ROM has been recognized among patients undergoing total hip arthroplasty, where it has been shown that patients with multilevel degenerative disc disease (DDD) sit with significantly more hip flexion than spine flexion compared with patients without DDD. 37
Conclusion
Changes in PT and posture (flexed, neutral, or extended) significantly influence hip ROM in hips with or without cam morphology. The hips with cam morphology had reduced internal hip rotation in general, but the effect of PT and posture on hip ROM was the same in hips both with and without cam. The intimate relationship between hip ROM, PT, and lumbar spine posture is important and must be taken into consideration during a clinical examination.
Role of Vitamin D3 in Modulation of ΔNp63α Expression during UVB Induced Tumor Formation in SKH-1 Mice ΔNp63α, a proto-oncogene, is up-regulated in non-melanoma skin cancers and directly regulates the expression of both Vitamin D receptor (VDR) and phosphatase and tensin homologue deleted on chromosome ten (PTEN). Since ΔNp63α has been shown to inhibit cell invasion via regulation of VDR, we wanted to determine whether dietary Vitamin D3 protected against UVB induced tumor formation in SKH-1 mice, a model for squamous cell carcinoma development. We examined whether there was a correlation between dietary Vitamin D3 and ΔNp63α, VDR or PTEN expression in vivo in SKH-1 mice chronically exposed to UVB radiation and fed chow containing increasing concentrations of dietary Vitamin D3. Although we observed differential effects of the Vitamin D3 diet on ΔNp63α and VDR expression in chronically irradiated normal mouse skin as well as UVB induced tumors, Vitamin D3 had little effect on PTEN expression in vivo. While low-grade papillomas in mice exposed to UV and fed normal chow displayed increased levels of ΔNp63α, expression of both ΔNp63α and VDR was reduced in invasive tumors. Interestingly, in mice fed high Vitamin D3 chow, elevated levels of ΔNp63α were observed in both local and invasive tumors but not in normal skin suggesting that oral supplementation with Vitamin D3 may increase the proliferative potential of skin tumors by increasing ΔNp63α levels. Introduction 1a,25-dihydroxyvitamin D 3 (1,25(OH) 2 D 3 ) has been investigated as an adjuvant to anti-cancer therapies. Upon binding to Vitamin D Receptor (VDR), 1,25(OH) 2 D 3 induces expression of genes involved in apoptosis, differentiation and growth suppression while down regulating expression of genes that are involved in proliferation (reviewed in [1]). Keratinocytes synthesize 7-dehydrocholestrol, which is then converted to cholecalciferol by exposure to ultraviolet B (UVB) light between 280-320 nm. Intriguingly, these wavelengths of UVB are also the primary cause of skin cancer. Unlike keratinocytes, no other cell types can produce 1,25(OH) 2 D 3 from 7-dehydrocholestrol and must rely on the sequential transport of cholecalciferol to the liver and kidneys to produce 25-hydroxyvitamin D 3 and 1,25(OH) 2 D 3 , respectively. Due to the relative instability of 1,25(OH) 2 D 3 , dietary supplements commonly consist of cholecalciferol, also referred to as Vitamin D 3 and rely on the conversion to 1,25(OH) 2 D 3 by the liver and kidneys. Severe Vitamin D 3 deficiency, measured by serum 25-hydroxyvitamin D levels, or deletion of the VDR gene is associated with increased cancer risk [2,3]. Although topical application of 1,25(OH) 2 D 3 reduced UVB-induced tumor burden in the SKH-1 mouse model of squamous cell carcinoma [4], protective effects of dietary Vitamin D 3 against the development of skin cancer has not been examined. This is an important study due to recent reports highlighting the frequency of Vitamin D 3 deficiency, and its association with a myriad of disease states which has led to an increase in Vitamin D 3 supplement intake by the general public [5]. On a cellular level, 1,25(OH) 2 D 3 , a downstream metabolite of Vitamin D 3 , exerts its biological function by binding the transcription factor VDR to control the expression of target genes. We have previously demonstrated that p63 inhibits cell invasion by directly regulating VDR and that both VDR and p63 are needed to inhibit cell invasion [6,7]. 
The transcription factor p63 is essential for normal epidermal stratification and the proliferative potential of the epithelial stem cells [8,9]. The Tp63 gene can form several isoforms with contrasting functions, using alternate promoters and 39 splicing. The TA isoforms (TAp63a, TAp63b and TAp63c) have a full-length N-Terminal transactivation domain, whereas the DN isoforms (DNp63a, DNp63b and DNp63c) have a unique truncated transactivation domain [10]. Our laboratory as well other researchers have previously shown that DNp63a is the only detectable p63 isoform expressed in the epidermis, specifically found in the proliferative basal layer [7,[11][12][13][14][15]. DNp63a is overexpressed in squamous cell carcinomas (SCC) and basal cell carcinomas (BCC) [11][12][13]16,17]. Contrary to its known roles in promoting epidermal differentiation, VDR levels, much like DNp63a, are also elevated in BCC and SCC [18,19]. Through its ability to induce VDR, DNp63a could enhance 1,25(OH) 2 D 3 signaling in non-melanoma skin cancers. In cell culture systems, 1,25(OH) 2 D 3 seems to have paradoxical pro-growth and pro-apoptotic functions. 1,25(OH) 2 D 3 can prevent apoptosis of UV-irradiated keratinocytes in culture through the stabilization of DNp63a [20], or promote apoptosis through increased expression of the tumor suppressor phosphatase and tensin homolog deleted on chromosome 10 (PTEN) [21]. We have demonstrated that DNp63a negatively regulates PTEN expression and localization in keratinocytes to maintain normal growth rates. Moreover, the ratio of DNp63a to PTEN expression is significantly perturbed in human non-melanoma skin cancers [15]. In this study, we sought to delineate whether dietary Vitamin D 3 offered any protection against UVB induced tumor formation and whether it preferentially induced expression of DNp63a, VDR, or PTEN in vivo. We fed SKH-1 hairless mice chow containing increasing concentrations of Vitamin D 3 (cholecalciferol) and chronically exposed them to UVB light modeling the process of UV induced skin carcinogenesis in humans [14]. It has been shown that development of skin tumors in the SKH-1 hairless mice resemble UV induced squamous cell carcinomas in humans both morphologically as well at the molecular level [22]. Our results demonstrated that dietary Vitamin D 3 offered no protection from UVB induced tumor formation and in fact increased tumor size at the highest dose tested. We observed differential effects of Vitamin D 3 diet on DNp63a and VDR but not PTEN expression in chronically irradiated, but otherwise normal skin and in UVB induced tumors. Effects of dietary Vitamin D 3 on epidermal structure To investigate the effect of increasing dietary Vitamin D 3 on epidermal biology we first measured the skin thickness in SKH-1 hairless mice exposed to chronic UVB irradiation. Dietary Vitamin D 3 alone did not alter the epidermal thickness of unirradiated mice at any dose tested indicating that dietary Vitamin D 3 alone is insufficient to change epidermal proliferation. Chronic UVB exposure significantly increased epidermal thickness in all mice (Figure 1a &b). Interestingly, animals fed chow with higher concentrations of dietary Vitamin D 3 displayed increased epidermal thickness in response to chronic UVB as compared to a standard (3 IU) diet (Figure 1a & b). To assess changes in proliferation, non-tumor dorsal skin sections were stained for Ki67. 
As shown is Figure 1c regardless of Vitamin D 3 diet there was an increase in Ki67-positive cells in irradiated skin when compared to un-irradiated skin from SKH-1 mice. Epidermal thickness is mediated by changes in keratinocyte proliferation and differentiation, both of which are regulated by VDR and DNp63a. 1,25(OH) 2 D 3 has also been shown to stabilize both VDR and DNp63a [20,23]. To determine whether the increase in epidermal thickness caused by increased dietary Vitamin D 3 was the result of enhanced VDR or DNp63a expression, we stained skin tissues from UVB irradiated or control SKH-1 mice fed varying doses of dietary Vitamin D 3 for VDR, DNp63a and their common transcriptional target PTEN. Since, DNp63a is the only detectable p63 isoform found in the epidermis we used a pan p63 antibody to detect DNp63a expression levels in the skin tissues [7,[11][12][13][14][15]. In unirradiated skin, increasing concentrations of dietary Vitamin D 3 had little effect on the expression of VDR (Figure 2a, quantitated in lower panel). Lower doses of dietary Vitamin D 3 significantly increased VDR expression in chronically UVB irradiated skin as compared to unirradiated skin ( Figure 2a). Interestingly, the increase in VDR was not observed with higher concentrations of dietary Vitamin D 3 in irradiated skin and in fact VDR was significantly down regulated in mice fed 1000 IU of Vitamin D 3 diet compared to irradiated mice fed the standard (3 IU) diet (Figure 2a). Similarly, Vitamin D 3 diet did not drastically alter DNp63a expression in unirradiated skin (Figure 2b, quantitated in lower panel). In mice fed a standard diet of Vitamin D 3 , chronic exposure to UVB led to a significant increase in DNp63a expression in the epidermis as compared to unirradiated mice ( Figure 2b). Contrary to previous reports in cultured keratinocytes treated with calcitriol and exposed to acute UV radiation [20], increasing concentrations of dietary Vitamin D 3 led to a reduction in the DNp63a expression in response to chronic UVB exposure ( Figure 2b). Epidermal growth is also regulated by the tumor suppressor PTEN, which inhibits cell proliferation [24,25]. Interestingly, increasing concentrations of dietary Vitamin D 3 (25 and 1000 IU) significantly decreased PTEN expression in the epidermis of unirradiated mice as compared to mice fed a standard 3 IU Vitamin D 3 diet (Figure 2c, quantitated in lower panel). Chronic exposure to UVB significantly reduced the expression of PTEN in the epidermis compared to unirradiated mice ( Figure 2c). Increasing dietary Vitamin D 3 in UVB irradiated mice did not further reduce PTEN levels. Dietary Vitamin D 3 trends toward increased UVB-induced tumor development We next wanted to determine whether dietary Vitamin D 3 affects tumor formation, specifically tumor size and grade, in response to chronic UVB exposure. Representative images of the histology of the normal skin, papilloma, micro-invasive squamous cell carcinoma (MiSCC) and SCC are shown in Figure S1, as described previously [22]. Although increasing the amount of Vitamin D 3 in the diet trended toward an increase in the average tumor area ( Figure S2a) it was not statistically significant. Moreover, mice fed higher doses of dietary Vitamin D 3 displayed a higher frequency of fully invasive squamous cell carcinomas (SCC) as compared to mice fed a standard diet ( Figure S2b), but again this trend was not statistically significant. 
The increase in SCC in mice fed 1000 IU VD 3 did not alter the frequency of papillomas, but rather correlated with a decrease in MiSCC as compared to the mice fed standard diet, suggesting that higher dietary Vitamin D 3 may enhance tumor progression rather than tumor initiation ( Figure S2b). Dietary Vitamin D 3 differentially affects proteins involved in epidermal maintenance during tumor progression VDR has been shown to inhibit cell invasion [7], a hallmark of tumor progression, and yet it has also been reported to be elevated in BCC and SCC [18,19]. To determine whether there is a correlation between VDR expression, Vitamin D 3 diet, and tumor grade, we determined VDR intensity in tumors of each grade from mice fed increasing doses of dietary Vitamin D 3 . VDR expression was significantly reduced in papillomas when compared to normal epidermal tissue regardless of dietary levels of Vitamin D 3 ( Figure 3). VDR levels were also significantly reduced in MiSCC and SCC as compared to normal epidermal tissue for all doses of dietary Vitamin D 3 tested. Interestingly, VDR expression was significantly reduced in SCCs formed in mice fed a 1000 IU Vitamin D 3 diet when compared to SCCs formed in mice fed a standard diet. The lack of VDR, which has tumor suppressive functions [3], in SCCs from mice fed 1000 IU Vitamin D 3 diet ( Figure 3b) may explain the trend toward increased frequency of SCC in animals on this diet ( Figure S1b). DNp63a, known to increase the proliferation of epidermal keratinocytes, was significantly down regulated in normal epider- mal tissue at all doses of dietary Vitamin D 3 when compared to mice fed a standard diet (Figure 4). Similar to VDR, DNp63a expression was also increased in a dose dependent manner in papillomas fed increasing doses of vitamin D 3 chow. However, unlike VDR, DNp63a expression levels were also increased in both MiSCCs and SCCs (Figure 4b) with increasing doses of Vitamin D 3 diet. Interestingly, papillomas and MiSCC from mice on the higher dietary Vitamin D 3 (150 IU and 1000 IU) expressed significantly more DNp63a than normal epidermal tissue from mice of the same diet (Figure 5b). Loss of p63 has been associated with increased cell invasion in urothelial and bladder cancers [26,27]. Our results also demonstrated a significant reduction in DNp63a expression in SCCs compared to MiSCC and normal epidermal tissues from mice fed a standard diet (Figure 4b). However, SCCs from mice fed increasing concentrations of Vitamin D 3 diet exhibited a dose dependent increase in DNp63a expression levels suggesting that dietary Vitamin D 3 enhances the proliferative nature of SCC by preventing the down regulation of DNp63a (Figure 4b). To investigate if dietary Vitamin D 3 leads to a reduction in the expression of tumor suppressor PTEN, we measured the expression of PTEN by immunofluorescence in normal skin and tumors from UVB irradiated mice fed each of the Vitamin D 3 diets. Increasing the concentration of Vitamin D 3 in the diet did not have consistent trends on the expression of PTEN between tumor types ( Figure 5). Consistent with previous reports [28], PTEN was significantly reduced in UVB induced SCC compared to normal skin independent of the Vitamin D 3 diet ( Figure 5), suggesting that dietary Vitamin D 3 does not increase the tumor size or burden by augmenting UVB mediated degradation of PTEN. 
We have previously demonstrated that the ratio of DNp63a to PTEN is critical for mediating keratinocyte proliferation and that this ratio is significantly perturbed in human BCC and SCC [15]. To determine if perturbation of the balance between DNp63a and PTEN by dietary Vitamin D 3 was contributing to the increase in tumor size and SCC frequency, we calculated the ratio of DNp63a to PTEN fluorescence intensity in normal skin and tumors from UVB irradiated mice fed each of the Vitamin D 3 diets. Mice fed a diet of 1000 IU Vitamin D 3 displayed consistently higher ratios of DNp63a to PTEN, indicative of an increased proliferation potential, in all tumor types as compared to normal skin ( Figure 6). Taken together, these studies suggest that increased dietary Vitamin D 3 may enhance UVB induced tumor formation and progression, at least at supra-physiologic doses, by decreasing the expression of VDR while increasing the DNp63a to PTEN ratio. Discussion 1,25(OH) 2 D 3 has been investigated as an adjuvant to anticancer therapies because of its growth suppressive and prodifferentiation properties. Although the association of Vitamin D 3 consumption and serum 25-hydroxyvitamin D with the prevention of a wide range of cancers has been widely studied [29], evidence supporting the role of 1,25(OH) 2 D 3 in protecting against skin cancer is often conflicting [30][31][32]. In this study we demonstrate that increased consumption of dietary Vitamin D 3 in the SKH-1 mouse model of squamous cell carcinoma does not protect against UVB-induced tumor formation ( Figure S1). Moreover, supraphysiologic levels (1000 IU) of dietary Vitamin D 3 may actually promote epidermal proliferation and tumor formation as evidenced by increased epidermal thickness and Ki67 staining ( Figure 1) and dose-dependent trends toward larger, more aggressive tumor development ( Figure S2). The enhanced proliferation and tumor development in UVB irradiated mice fed 1000 IU Vitamin D 3 may be related to the stabilization of the DNp63a (Figure 4), which is often overexpressed in human non-melanoma skin cancers [11][12][13]16,17]. Numerous models of acute UVB irradiation have demonstrated that DNp63a must be down regulated to allow for apoptosis in the epidermis [33][34][35]. It has been previously shown that ablation of the basal layer cells of the interfollicular epidermis comprising of mutant p53 and p63-positive cells led to a significant delay in the onset of tumor formation in SKH-1 mice, suggesting that DNp63a likely contributed to tumor formation [36]. Our studies show that, unlike acute UVB exposure, DNp63a levels were significantly higher in chronically UVB irradiated skin (Figure 2b) potentially predisposing skin to tumor development. While we did not observe an increase in DNp63a levels in response to increased dietary Vitamin D 3 in normal skin, we found that dietary Vitamin D 3 was able to limit the down regulation of DNp63a during tumor progression (Figure 4). The sustained expression of DNp63a by dietary Vitamin D 3 could contribute to the proliferation and expansion of UVB induced tumors. Interestingly, the increase in DNp63a expression did not correlate with increased expression of VDR, a direct transcriptional target of p63 (Figures 3-4) [6]. This suggests that dietary Vitamin D 3 , at least in the context of concomitant UVB irradiation, may enhance the oncogenic properties of DNp63a by increasing the ratio of DNp63a to PTEN (Figure 6), rather than altering its tumor suppressive attributes, namely induction of VDR. 
Unlike previous studies conducted in 1,25(OH) 2 D 3 deficient rats, we did not observe an increase in epidermal VDR expression in response to increased dietary Vitamin D 3 (Figures 2a and 3) [37]. This could be attributed to the inherent differences between rats and SKH-1 mice and/or the differences in experimental approach. In the studies conducted by Zineb et al., VDR expression was measured in Wistar rats that were kept in the dark, preventing the cutaneous production of 1,25(OH) 2 D 3 , and fed a diet lacking Vitamin D 3 to induce 1,25(OH) 2 D 3 deficiency before re-supplementation of dietary Vitamin D 3 [37]. To better mimic the environmental conditions experienced by humans, our studies utilized a hairless mouse strain chronically exposed to UVB without inducing 1,25(OH) 2 D 3 deficiency prior to dietary Vitamin D 3 supplementation. It is important to note that while UVB is the most common cause of non-melanoma skin cancers and its use as a carcinogen is most physiologically relevant, the ability of keratinocytes in the epidermis to generate 1,25(OH) 2 D 3 in response to UVB can confound the interpretation of how dietary Vitamin D 3 affects tumor formation. Our results suggest that increased dietary Vitamin D 3 may enhance UVB induced tumor formation and progression ( Figure S2) by decreasing the expression of VDR in the epidermis ( Figure 3) while increasing DNp63a (Figure 4). The deleterious effects of dietary Vitamin D 3 observed in this study are consistent with previous epidemiological studies showing that the risk for non-melanoma skin cancers was positively correlated with increasing serum 25-hydroxyvitamin D levels [30]. The U.S. Preventive Services Task Force has reported that there is insufficient data to support Vitamin D 3 supplementation as a cancer prevention method [38]. However, more efficient delivery of 1,25(OH) 2 D 3 to keratinocytes may also be critical to generating protective rather than deleterious effects with regard to UVB induced skin cancer. A study by Dixon et al. demonstrated that topical application of 1,25(OH) 2 D 3 led to a reduction in the development and size of UV-induced tumors in the SKH-1 mouse model of squamous cell carcinoma [4]. In contrast to our data obtained with dietary Vitamin D 3 ( Figure S2), topical 1,25(OH) 2 D 3 led to a reduction in the incidence and progression of UV induced tumors [4]. Aside from choice and route of delivery of vitamin D, there were differences in the light source, UV exposure protocol, and sex of mice used in our study compared to the topical calcitriol study. Exposure of keratinocytes to UVB compared to solar simulated light can alter signaling pathways in the skin [39,40]. Additionally, our lab has demonstrated significant differences in the response to UV light between the sexes [41] and also in response to treatment [42]. Topical application of the active Vitamin D 3 metabolite 1,25(OH) 2 D 3 allows for direct activation of VDR and its downstream effects in the skin. In contrast, the dietary Vitamin D 3 used in our study, must be absorbed by the intestines, converted by liver and the kidney to 1,25(OH) 2 D 3 and shuttled back through the blood stream to the tumor site where it has to reach critical levels to inhibit tumor progression. Xenograft mice models of breast cancer have shown that dietary vitamin D 3 inhibited tumor formation in breast fat pad, metastases to the lungs and reduced tumor size [43]. 
In this study, they observed that mice fed diets of up to 5000 IU/kg dietary vitamin D 3 had elevated 25(OH)D 3 serum levels but no hypercalcemia, as evidenced by the lack of increased calcium levels in serum [43]. Moreover, mice fed 5000 IU/kg of dietary vitamin D 3 showed a reduction in the number and size of breast tumors. Differences in the effects of dietary Vitamin D 3 supplementation in the two studies may be attributed to the 5-fold higher dose used in the breast cancer xenograft model when compared to the 1000 IU/kg used in our study, as well as the tumor type being studied. The current studies did not specifically examine the role of interfollicular vs follicular cells and Vitamin D 3 supplementation in SCC formation. However, it has previously been shown that while removal of the interfollicular epidermis by abrasion in CD-1 haired mice decreased the number of papillomas developed by half, it did not delay or stop the development of papillomas [44]. Similarly, CO 2 laser ablation of the interfollicular epidermis of hairless mice did not delay or stop the development of tumors, suggesting that a pool of cells deep in the hair follicle might be responsible for the SCC development [45]. UV-induced ablation of the epidermal basal layer in hairless mice further showed that SCC originated from the interfollicular epidermis, which was being repopulated from the hair follicle [36]. These studies suggest that the decrease in hair follicles in our hairless mice, observed as they age, did not impact tumor development in our study. These studies demonstrate the complexity of Vitamin D 3 supplementation and suggest the necessity for additional studies to determine whether dietary Vitamin D 3 or topical 1,25(OH) 2 D 3 are viable therapeutic options, since the application of 1,25(OH) 2 D 3 to un-irradiated normal hairless mouse skin results in dose- and time-dependent increases in mitosis and hyperplasia [46]. Taken together, these studies demonstrate that Vitamin D 3 may have differing effects depending on the target organ and mode of delivery. In the case of non-melanoma skin cancers it may be detrimental at high levels because of its ability to stabilize DNp63a levels and increase, rather than prevent, UVB induced tumors.
Materials and Methods
Diet concentrations were selected with reference to work demonstrating that the dietary Vitamin D 3 concentrations needed for modeling human borderline deficiency (25-40 nmol/L), average (50-60 nmol/L) and optimal (80-100 nmol/L) serum 25-hydroxyvitamin D concentrations, as defined by the NRC, are 25-50, 100, and 400 IU Vitamin D 3 /kg diet in growing rodents [47]. Twenty-five mice were assigned to each diet. Fifteen mice per diet were dorsally exposed to 2240 J/m 2 UVB, previously determined to be one minimal erythemic dose, 3 times weekly for a total of 25 weeks. UVB dose was calculated using a UVX radiometer and UVB sensor (UVP, Upland, CA) and delivered using Philips TL 40W/12 RS SLV UVB broadband bulbs emitting 290-315 nm UVB light (American Ultraviolet Company, Lebanon), as previously described [48]. Ten mice per diet served as age-matched, unirradiated controls. All mice were sacrificed by CO 2 inhalation.
Quantitation of epidermal thickness
Epidermal morphology was analyzed using the Accustain trichrome stain (Masson) kit according to the manufacturer's instructions (Sigma-Aldrich, St. Louis, MO). Epidermal thickness was measured using ImageJ software at a magnification of 10x in all tissue samples.
Dorsal skin morphology was examined using H&E staining and visualized/imaged using a Leica CTR 6000 Microscope (Leica Microsystems, Wetzlar, Germany) and ImagePro 6.2 software (Media Cybernetics, Bethesda, MD).
Tumor development and grade
Neoplastic lesions located on the dorsal skin measuring greater than 1 mm in size were counted and measured (length × width). Tumors were measured using digital calipers throughout the duration of the study. Tumor grade was determined from hematoxylin and eosin (H&E)-stained sections of tumors isolated from UVB irradiated mice, graded in a blinded manner by a board certified veterinary pathologist as previously described [48]. Briefly, papillomas were exophytic tumors (tumors that grow outward from the originating epithelium) that showed no invasion of the stroma [22]. MiSCCs were distinguished by the depth of penetration into the dermis [22]. Only tumors that invaded the panniculus carnosus were classified as fully invasive SCCs [22]. Average tumor percentages were calculated using the total number of graded tumors per treatment group.
Immunofluorescence
Tumors excised from dorsal skin as well as non-tumor dorsal skin were formalin fixed, paraffin-embedded and stained for p63, VDR and PTEN as previously described [7,15]. Ki67 staining was performed analogously to the previously described staining of p63 [7,15]. For detection of VDR, paraffin was removed by four 10 minute washes in Histo-Clear (National Diagnostics, Atlanta, GA) and sections were rehydrated in a graded series of alcohols with a final wash in distilled water. After rehydration, slides were incubated at 37°C for 20 minutes at 60°C in 2 N HCl. Slides were neutralized with 3 washes of 0.1 M sodium borate buffer (pH 8.5), followed by three washes in PBS. Tissues were blocked for 3 hours with 5% normal goat serum followed by overnight incubation with anti-VDR at 4°C (clone 9-A7, Thermo-Scientific, Fremont, CA). Excess primary antibody was removed with three consecutive washes in PBS followed by incubation with AlexaFluor 568 goat anti-rat antibody for 1 hour at room temperature. Excess secondary antibody was removed with three consecutive 5 min washes in PBS prior to mounting with VectaShield plus DAPI Mounting Media (Vector Laboratories, Burlingame, CA). Cells were visualized and imaged using a Leica CTR 6000 Microscope (Leica Microsystems, Wetzlar, Germany) and ImagePro 6.2 software (Media Cybernetics, Bethesda, MD). Mean fluorescence intensity for each tissue sample was calculated using ImagePro 6.2 software after normalization for background intensity. Multiple measurements (at least 5), all of the same size, were taken of the epidermal tissue for each tissue sample. Average mean fluorescence intensity was calculated as previously described [15].
Statistics
Differences in mean fluorescence intensities were analyzed by one-way ANOVA followed by pairwise multiple comparison testing (Tukey test method, SigmaPlot 12, Dundas Software).
Supporting Information
Figure S1. SKH-1 mouse skin following UVB induced tumor development. SKH-1 mice fed chow with increasing concentrations of Vitamin D 3 were irradiated thrice weekly for 25 weeks with UVB. Tumors excised from dorsal skin as well as non-tumor (normal) dorsal skin were formalin fixed, paraffin embedded and subjected to H&E staining. Representative images of normal skin, papilloma, MiSCC and SCC were taken at 20x magnification. Scale bar = 20 µm.
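The quantification and statistics just described (background-normalized mean fluorescence per sample, then one-way ANOVA with Tukey's pairwise comparisons across diet groups) can be expressed compactly in code. The sketch below only illustrates that workflow under assumed inputs: the group labels, intensity values, and background level are hypothetical placeholders, and the actual analysis in the study was performed in ImagePro and SigmaPlot rather than Python.

```python
# Minimal sketch of the intensity analysis described above: background-normalized
# mean fluorescence per tissue sample, then one-way ANOVA with Tukey's pairwise
# comparisons across diet groups. All numbers and labels are hypothetical.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def mean_fluorescence(roi_measurements, background):
    """Average of several same-sized ROI measurements after background subtraction."""
    return float(np.mean(np.asarray(roi_measurements) - background))

# Example: one normalized value for a single tissue sample (5 ROIs, shared background)
sample = mean_fluorescence([52.1, 49.8, 50.6, 48.9, 51.3], background=12.0)

# One normalized value per tissue sample, grouped by dietary Vitamin D3 dose
intensity = {
    "3 IU":    [41.2, 38.7, 44.0, 40.1],
    "25 IU":   [35.6, 33.9, 37.2, 36.8],
    "1000 IU": [28.4, 30.1, 27.5, 29.9],
}

f_stat, p_anova = stats.f_oneway(*intensity.values())

values = np.concatenate([np.asarray(v, dtype=float) for v in intensity.values()])
groups = np.repeat(list(intensity.keys()), [len(v) for v in intensity.values()])
tukey = pairwise_tukeyhsd(values, groups, alpha=0.05)

print(f"example normalized sample intensity: {sample:.1f}")
print(f"one-way ANOVA p = {p_anova:.4f}")
print(tukey.summary())
```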
Successful treatment with tislelizumab plus chemotherapy for SMARCA4-deficient undifferentiated tumor: a case report
SMARCA4-deficient undifferentiated tumor (SMARCA4-dUT) is a devastating subtype of thoracic tumor with SMARCA4 inactivation and is characterized by rapid progression, poor prognosis, and high risk of postoperative recurrence. However, effective treatments for SMARCA4-dUT are lacking. Herein, we describe a patient with SMARCA4-dUT who exhibited an impressive response to the anti-programmed cell death protein-1 (PD-1) antibody (tislelizumab) in combination with conventional chemotherapy (etoposide and cisplatin). To the best of our knowledge, this is the first case of SMARCA4-dUT treated with chemotherapy, comprising etoposide and cisplatin, combined with anti-PD-1 inhibitors. Immunotherapy combined with etoposide and cisplatin may be a promising strategy to treat SMARCA4-dUT.
Introduction
The new 2021 World Health Organization (1) Classification of Tumors of the Lung, Pleura, Thymus, and Heart has renamed the entity previously described as SMARCA4-deficient thoracic sarcoma (SMARCA4-dTS) to SMARCA4-deficient undifferentiated tumor (SMARCA4-dUT) (2). SMARCA4-dUT is a devastating subtype of lung cancer with a poor prognosis and a median survival time of < 6 months, despite surgical intervention (3). The optimal treatment for SMARCA4-dUT has not been determined due to a lack of evidence. Immunotherapy seems to be a feasible strategy for SMARCA4-dUT. Potential treatment regimens include mono pembrolizumab (4,5), mono nivolumab (6), mono tislelizumab (7), ipilimumab combined with pembrolizumab (8), and ABCP (atezolizumab in combination with bevacizumab, paclitaxel, and carboplatin) (9,10). Although chemotherapy with etoposide and cisplatin (EP) is widely employed for thoracic malignancies, especially to treat tumors with poor prognosis, such as small cell lung cancer, neuroendocrine carcinoma, and mediastinal vitelline cyst tumors, the efficacy of the EP regimen for SMARCA4-dUT remains unclear. Herein, we present the first case of successful treatment of SMARCA4-dUT with tislelizumab, an anti-programmed cell death protein-1 (PD-1) inhibitor, in combination with conventional cytotoxic agents (such as EP).
Case presentation
A 56-year-old heavy smoker (Brinkman index: 600) was referred to The First People's Hospital of Changde City with sudden onset coughing lasting approximately three weeks. The patient expressed fear regarding inadequate tumor control and the potential for a life-threatening illness. The patient had no previous history of tumors, and there were no issues with the axillary lymph nodes at the time of surgery. Apart from the aforementioned symptoms, physical examination and patient history afforded unremarkable findings. No risk factors or familial background for neoplasms were identified. Previously, the patient underwent a treatment regimen comprising chemotherapy and immunotherapy, followed by maintenance immunotherapy. Computed tomography (CT) revealed a thoracic mass in the left lower lobe (Figure 1). Preoperative CT imaging revealed nodular thickening in the medial and combined branches of the left adrenal gland, with mild enhancement observed during the enhancement scan. Conversely, the right adrenal gland exhibited no obvious abnormalities. Additionally, cystic hypodense shadows were observed in both kidneys, with no significant enhancement noted during the enhancement scan. Moreover, no enlarged lymph nodes were detected in the retroperitoneum (Supplementary Figure S1).
Three months after surgery, the patient rapidly developed disease recurrence, including left lung, left mediastinal lymph node, left axillary lymph node, and left pleural metastases. Symptoms of compression in the axillary lymph nodes included discomfort and a sensation of a foreign body. The patient was able to palpate the lymph nodes, which were hard, approximately 2 cm × 2 cm in size, with unclear borders, poor mobility, and no tenderness. The postoperative MRI scan of the head revealed speckled long T1T2 signal shadows surrounding the ventricles bilaterally, with a high signal observed on TIR. No evident abnormal enhancement foci were detected upon enhancement. The ventricular morphology and signal appeared intact, with a centered midline, and no apparent abnormalities were noted in the sulcal fissures. Additionally, mucosal thickening was observed in the paranasal sinuses. Moreover, a mound-like short T1 signal was observed in the subcutis of the left parietal region (Supplementary Figure S2). Six cycles of tislelizumab plus EP induction therapy were administered as first-line chemotherapy, with cisplatin (40 mg/m2 on days 1-3), etoposide (100 mg/m2, days 1-3), and the immune checkpoint inhibitor (ICI) tislelizumab (200 mg, day 1), and treatment was then switched to the maintenance phase (continued tislelizumab 200 mg, day 1) without any special side effects (only mild nausea).
Figure 1. Axial computed tomography images of the lung (A) and mediastinal (B) windows show a lobular soft tissue mass in the left lower lobe.
Chest CT images (Figure 3) were obtained after completing six courses of tislelizumab plus EP, revealing the disappearance of the left lung metastasis, shrinkage of all lymph nodes, and left pleural dissemination (best overall response: partial response, according to RECIST version 1.1).
Figure 3. Chest computed tomography images showing the disappearance of the left lung metastasis (A), the shrinkage of the left axillary lymph nodes (B), and the shrinkage of the left pleural dissemination (C) after completing six cycles of tislelizumab plus etoposide and cisplatin induction therapy.
Postoperative CT imaging of the lungs showed that the bronchial tubes in the dorsal segment of the lower lobe of the left lung were not shown, and a metallic dense shadow was seen, which was a postoperative change; the vascular bronchial bundles of the two lungs were increased, thickened and fuzzy, the translucency of both lungs was increased, and cystic translucent shadows of varying sizes were seen; a nodular shadow of about 12 mm × 11 mm was seen in the upper lingual segment, with clear borders and mild enhancement; a mass-like hyperdense shadow of about 28 mm × 14 mm was seen in the posterior segment of the upper lobe of the left lung, and it was moderately and unevenly intensified. In the remaining lungs, there were scattered small nodules and flocculent shadows, and enlarged lymph nodes were seen in the mediastinum and the left hilum, with blurred borders and uneven enhancement, and irregular fluid density shadows were seen in the left thoracic cavity and nodular thickening of the pleura, with uneven enhancement. Multiple enlarged lymph nodes were seen in the left axilla, partially fused and with blurred borders. The liver parenchyma is shown to be hypodense. Nodular non-enhancing foci are seen in both kidneys. The thoracic spine bone is unevenly hypodense (Supplementary Figure S3). The patient continued treatment for 9 months without disease progression (Figure 4).
Chest computed tomography images demonstrating a durable response after completing ten and fourteen doses of the tislelizumab maintenance phase (partial axillary lymph nodes disappeared after ten doses of tislelizumab).
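For orientation, the per-cycle drug amounts in the regimen above scale with body surface area (BSA), while tislelizumab is given as a flat dose. The short sketch below is purely illustrative: the patient's height, weight, and BSA are not reported in this case, so the values used are hypothetical, and the Mosteller formula is just one common way to estimate BSA.

```python
# Worked example of per-day drug amounts for the regimen described above.
# Height, weight, and BSA are NOT reported in the case report; the values below
# are hypothetical, and the Mosteller formula is one common BSA approximation.
from math import sqrt

height_cm, weight_kg = 170.0, 65.0                 # hypothetical patient
bsa_m2 = sqrt(height_cm * weight_kg / 3600.0)      # Mosteller approximation, ~1.75 m^2

cisplatin_per_day = 40.0 * bsa_m2    # mg, days 1-3 of each cycle
etoposide_per_day = 100.0 * bsa_m2   # mg, days 1-3 of each cycle
tislelizumab = 200.0                 # mg, flat dose on day 1

print(f"BSA ~ {bsa_m2:.2f} m^2")
print(f"cisplatin ~ {cisplatin_per_day:.0f} mg/day x 3 days")
print(f"etoposide ~ {etoposide_per_day:.0f} mg/day x 3 days")
print(f"tislelizumab {tislelizumab:.0f} mg on day 1")
```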
Discussion
SMARCA4-dUT was first defined in the 5th edition of the WHO Classification of Thoracic Tumors, published in 2021 (1). SMARCA4-dUT was initially defined as SMARCA4-dTS, originally discovered in 2015 by Le Loarer et al. (11) by RNA sequencing of unclassified sarcomas, and subsequently classified as a new type of undifferentiated lung malignancy of pulmonary epithelial origin. Thoracic SMARCA4-dUT exhibits consistent immunohistochemical loss of both SMARCA4 and SMARCA2 (12). In addition, this tumor frequently displays the robust expression of one or more stem cell markers, such as SOX2, CD34, and SALL4, which can facilitate the differential diagnosis from SMARCA4-deficient non-small cell lung cancer (NSCLC) (SMARCA4-dNSCLC), which is typically negative for these markers (13,14). SMARCA4-dNSCLC usually expresses more diffuse and stronger keratin than thoracic SMARCA4-dUT and is often negative for thyroid transcription factor (TTF)-1 (15). Morphologic features of SMARCA4-dUT can be either undifferentiated small-cell or large-cell malignancies (16,17). All areas of the tumor are undifferentiated, with slightly discohesive cells arranged in sheets and nests and abundant geographic necrosis. Cytologic features included many areas with rhabdoid cells and a high mitotic rate. In this case, the tumor was mainly composed of undifferentiated carcinoma of large round cells, with cytologic features (undifferentiated tumor cells with slightly discohesive cells arranged in sheets and nests, abundant geographic necrosis) and immunohistochemical staining results (complete loss of both SMARCA4 and SMARCA2, reactivity for SALL-4) matching the histopathological characteristics of SMARCA4-dUT. SMARCA4-dUT is common in heavy male smokers and has a poor prognosis, with early postoperative recurrence in operable patients (10). Typically, patients present with large, infiltrative, and compressive thoracic masses frequently associated with necrotic lymphadenopathy (13,14). Although mediastinal involvement may be prominent, most cases present with focal parenchymal continuity, often with substantial emphysema. Metastatic involvement is common, involving the lymph nodes, bones, lungs, brain, and adrenal glands, a pattern notably similar to that observed in NSCLC (18). SMARCA4-dUT can also present as central or peripheral lung lesions, with involvement of the parietal pleura and chest wall. In general, the clinical presentation and radiological features of SMARCA4-dUT are nonspecific (19). In this case, the patient presented with a peripheral lung lesion and involvement of lymph nodes. Thoracic SMARCA4-dUT is a new type of devastating neoplasm, and most cases of SMARCA4-dUT exhibit an advanced stage at presentation with considerably poor survival outcomes (20). SMARCA4-dUT of the thorax is a rare but aggressive neoplasm primarily occurring in heavy-smoking adults aged 30-59 years (21). The median overall survival (OS) ranges from five to seven months, with a 2-year OS of only 12.5% with traditional therapies (11,13,22,23). SMARCA4-dUT carries a high risk of recurrence after surgical resection, even among stage I disease (23). In this case, the patient had a relapse (intrathoracic recurrence and axillary lymph node metastasis) less than 10 weeks after surgery.
Common mutations in SMARCA4-dUT include TP53, KRAS, STK11, KEAP1, ARID1A, and NF1 (24). Currently, guidelines for treating SMARCA4-dUT are yet to be established. Few reports have explored the feasibility of ICI combined with chemotherapy for thoracic SMARCA4-dUT, demonstrating that immunotherapy might be a promising strategy for treating thoracic SMARCA4-dUT (4-10). The marked response to ICI therapy may be mediated via dysfunction of polybromo- and Brahma-related gene 1 (BRG1)-associated factor (PBAF) associated with the loss of SMARCA4 (24). SMARCA4 is a subunit of the switch/sucrose non-fermentable (SWI/SNF) chromatin-remodeling complex. Reisman et al. (25,26) have demonstrated that loss of SMARCA4 can occasionally be observed in otherwise conventional NSCLC and is associated with a markedly aggressive clinical course. Loss of PBAF function may induce the upregulated expression of interferon (IFN)-γ-responsive genes and the secretion of T cell chemoattractants to recruit effector T cells to tumors (24). Therefore, dysfunction of such chromatin-remodeling complexes can lead to increased efficacy of anti-PD-1 blockade against malignancies. Accumulating evidence has revealed that the clinical benefits of ICI can be associated with the loss of function of PBAF complexes in a subset of neoplasms (27). ICI is associated with largely improved survival among SMARCA4-dUT (28). There have been few reports on the effectiveness of ICI in SMARCA4-dUT, which might be a new promising therapy option. Among immunotherapy-based strategies, mono-immunotherapies [such as pembrolizumab (4), tislelizumab (7) and nivolumab (29)], combined immunotherapy [ipilimumab and pembrolizumab (8)], and combined immunotherapy with chemotherapy (ABCP) (9,10) have been explored. ABCP is the most popular of these therapies owing to its durable response. According to reported cases, ABCP, when used as the first-line therapy, could provide progression-free survival (PFS) ranging from 6 to >17 months; however, the OS was only several months without immunotherapy (30). Chemotherapy with EP can induce sufficient antitumor activity with acceptable toxicity in thoracic cancer, especially in tumors with a poor prognosis, including small cell lung cancers, mediastinum vitelline cyst tumors (31), and neuroendocrine carcinomas (32). SMARCA4-dUT is a novel thoracic malignancy with poor survival outcomes. Herein, we attempted to treat a patient with SMARCA4-dUT using tislelizumab (an anti-PD-1 antibody) in combination with EP. The patient experienced a durable response (more than 9 months). TEP (tislelizumab in combination with etoposide and cisplatin) dramatically suppressed the growth of malignancies. The objective response rate (ORR) for SMARCA4-dUT patients treated with ICI was only 50% (23). The favorable ABCP can provide PFS ranging from 6 to >17 months, while other immunotherapies can provide 5 to >14 months (4-10). Without immunotherapy, the median OS for SMARCA4-dUT is only 5.2 months (14). In our case, a CT scan confirmed an impressive partial response, and PFS is more than 9 months with TEP. The >9 months PFS is considerably excellent compared with other immunotherapies, including ABCP. The prognosis for SMARCA4-dUT cannot always be reliably predicted based solely on PD-L1 expression. Even cases with negative or low PD-L1 expression might exhibit a lasting response to immune therapies. At least 3 reported cases with negative PD-L1 expression exhibited more than 10 months of PFS. TEP also provided >9 months PFS for the patient with
a PD-L1 TPS of 1% in this case. Case reports of SMARCA4-dUT treated with ICI are summarized in Table 1. Although there is still a lack of evidence from large-scale clinical trials of chemotherapy combined with immunotherapy in SMARCA4-dUT, tislelizumab plus EP is likely to become one of the most popular strategies for SMARCA4-dUT for economic reasons. EP is a widely used chemotherapy with a lower cost than ABCP, and the cost of tislelizumab plus EP therapy is less than one-tenth that of ABCP therapy in China (33). Furthermore, bevacizumab is unsuitable for patients with symptoms of hemoptysis, a markedly common symptom in patients with thoracic malignancies (34). Therefore, tislelizumab plus EP is a promising strategy for treating SMARCA4-dUT. Given the limitations of single case reports, the efficacy of anti-PD-1 antibodies combined with EP chemotherapy needs to be further validated in a larger patient cohort with SMARCA4-dUT.
Figure 4. Time since treatment initiation.
Table 1. Case reports of SMARCA4-dUT treated with immunotherapy.
Use of Extra-label Drugs in Commercial Aquaculture
The present study focused on the use of extra-label drugs in commercial aquaculture. Data were collected through questionnaire interviews with 30 drug retailers and 30 commercial aqua farmers in Mymensingh Sadar and Trishal upazila of Mymensingh district. Altogether, 94 extra-label drugs of different groups were identified, which included antibiotics, disinfectants, nutritional supplements, probiotics, gas removers and saline. Six groups of antibiotics having 10 different active compounds with 46 trade names were found in the drug retailer shops. These drugs were primarily prepared for veterinary or poultry use but were found to be used indiscriminately in aquaculture. All these drugs were marketed by 18 companies in the study areas. It was observed that 83% of the drugs were not labeled for aquaculture purposes. The majority (77%) of the commercial aqua farmers used extra-label drugs on their farms, and 73% of them never received any prescription from qualified personnel before use. Most of the farmers were unable to calculate appropriate doses and had no idea about the risks, safety issues and toxicity reactions of using extra-label drugs. Farmers generally got suggestions from drug retailers regarding the application of drugs. The results also revealed that extra-label use of veterinary and poultry drugs in aquaculture is a common practice among commercial aqua farmers. Thus, the use of drugs in aquaculture should have a sufficient regulatory system in place. It is important to produce and use appropriately labeled drugs under a sufficient regulatory system for safe fish production in the aquaculture of Bangladesh.
Article history: Received: 28 Nov 2020; Accepted: 21 Jan 2021; Published: 30 Mar 2021
Introduction
Bangladesh has made a tremendous breakthrough in sustainable aquaculture, securing a 56% contribution to the total fish production (DoF, 2018). The recent and rapid development has boosted Bangladesh to 5th in world aquaculture production (DoF, 2018). The shift of aquaculture towards the use of intensive culture systems during the last decades has led to several serious problems, particularly the misuse of extra-label veterinary medicines and chemicals used for the treatment of disease outbreaks, consequently raising concern with regard to the safety of aquaculture products (Baoprasertkul et al., 2012). A wide variety of drugs, chemicals and biologicals have been used in different aquaculture activities for various purposes. The vast majority of the drugs are used for health management and the treatment of fish diseases. Other major uses include water quality management, hatchery management and feed formulation. However, due to the indiscriminate use of drugs, aqua farmers are not getting the expected results. Also, antimicrobials used in aquaculture have a negative impact on the aquatic environment, contribute to the development of antimicrobial resistance and result in the presence of residues in aquaculture products (WHO, 2006; Jaime et al., 2012). Extra-label drug use describes the use of a drug in a manner for which it was not approved. It occurs when a drug approved for one species of animal is used in another animal, or when a drug is used to treat a condition for which it was not approved. Extra-label use of veterinary or poultry drugs has now become a common practice in the commercial aquaculture of Bangladesh.
When an approved human or animal drug is used in a manner other than what is stated on the drug's label, it is called an extra-label use, because the drug is used in a way that is "off the label" (FDA, 2020). In most countries, government agencies exert some controlling actions on the use of drugs. In Bangladesh, the use of drugs is rising tremendously and concern is now growing over the use of unapproved drugs in aquaculture. There are a number of reports on the use of extra-label veterinary drugs in aquaculture in other countries (Breton, 2009; Yang and Zheng, 2007; Bravo, 2012; Zarza, 2012; Love et al., 2020). However, although a few works have been carried out in Bangladesh on the use of aqua-drugs (Faruk et al., 2008; Ahmed et al., 2014; Ahmed et al., 2015; Hassan, 2016), there is hardly any study on the use of extra-label drugs in the aquaculture of the country. The objective of the present study was therefore to identify the type, source and status of use of extra-label drugs in commercial aquaculture.
Materials and Methods
The study was carried out in Trishal and Mymensingh Sadar upazila of Mymensingh district of Bangladesh for six months, from February to July 2017. Data were collected from 30 drug retailers and 30 commercial fish farmers through questionnaire interviews. The major topics of the questionnaire included the types, constituents, sources, uses, prices and labels of drugs. In addition, ideas about extra-label use, receipt of prescriptions, dose calculation ability and proper application methods were also included. Data were analyzed using descriptive statistics.
Extra-label drugs
In this study, 94 extra-label drugs were found in the drug shops and were used by the commercial aqua farmers. The drugs could be grouped as antibiotics, disinfectants, ammonia reducers, probiotics, nutritional supplements, saline and pesticides. All these drugs were provided by 18 companies. Drugs from ACI, Eon, Acme, Square, Renata, SK+F and Novartis were found prominently at drug shops. Group-wise descriptions of the identified extra-label drugs are given below.
Antibiotics
Six groups of antibiotics having 10 different compounds with 46 trade names were found in drug shops. The groups included beta-lactams, macrolides, fluoroquinolones, tetracyclines, sulfonamides and quinolones. The major compounds and trade names of extra-label antibiotics under the different groups are summarized in Table 1. Seven antibiotics with various trade names were found under the group of beta-lactams. The major constituent of this group was amoxicillin, and their price varied from Tk. 110-270 per 100 g. Major companies that provided these products included Acme, ACI, Navana, Square and Renata (Table 1). In poultry and veterinary medicine, this broad-spectrum antibiotic is used for preventing and treating different bacterial and mycoplasmal diseases. In the dairy industry, they are effectively used against Mycoplasmosis, Streptococcus and Staphylococcus infections. They are also found to retard the growth of bacteria that are sensitive to penicillin. In aquaculture, beta-lactams have been used against red mouth, septicemia infection and epizootic ulcerative syndrome in fish. Under the group of macrolides, twelve antibiotics with various trade names were found. The major constituents of this group were erythromycin, azithromycin and tylosin. Generally, macrolide antibiotics are used for the treatment of Mycoplasmosis associated with secondary bacterial infections in poultry.
They are also effective against bacteria that are responsible for respiratory and genetic disorders. In aquaculture, they have been used to control mortality associated with enteric septicemia in catfish and for the treatment of secondary bacterial infections. Ten extra-label antibiotics with different trade names under the fluoroquinolone group were also identified in the drug shops. These were found effective against Streptococcus infection in tilapia and carps. Under the tetracycline group, 11 antibiotics with different trade names were found. The major constituents were oxytetracycline, doxycycline and chlortetracycline. In poultry, they were generally used against Colibacillosis, Cholera, Streptococcosis and Staphylococcosis and were also effective against bacterial infections like respiratory and genetic disorders, Clostridium, CRD, Mycoplasma and Enteritis infection. According to the drug sellers and information leaflets, these antibiotics were effective against fin rot, Edwardsiellosis, Columnaris and some other bacterial diseases found in the freshwater fishes of Bangladesh. Moreover, five sulfonamides with different trade names, with sulfadiazine and trimethoprim as their major active ingredients, were also found in the shops. According to the packaging, these extra-label drugs could be useful against all kinds of bacterial, viral and fungal infections. Finally, under the group of quinolones, one antibiotic, Peflox Vet, was found. The price of this antibiotic was 140 Tk. per 100 ml. In aquaculture, it could be effective against Columnaris, Vibriosis and bacterial gill disease.
Ammonia remover
Four ammonia-removing substances were found in the drug shops (Table 3). Their major constituents were glycocomponents, saponins and extract of Yucca, and the price varied from 315-350 Tk. per 100 ml with different doses.
Probiotics
Probiotics of 3 different trade names, such as FRAC 12, Protexin and Protimin, were found (Table 4). According to the information leaflets and drug retailer information, the major constituents are a well-balanced mixture of 1-monolaurin and essential oils, amino acids and minerals. Major companies that provided these products were ACI, SK+F and Novartis.
Saline
A number of extra-label salines were found with different trade names (Table 5). Their major constituents were dextrose anhydrous, ascorbic acid, sodium bicarbonate, sodium chloride, potassium chloride and vitamin A acetate. Doses of saline also varied during fry transportation. Major companies that provided these products were ACI, Eon, Acme and AV Agro Ltd. In aquaculture, they are used particularly for giving instant energy to fish fry during transportation.
Nutritional supplements
A great variety of nutritional supplements with various trade names were found. According to their constituents, nutritional supplements with vitamin premix are summarized in Table 6. About 17 trade names of such products were found in the drug shops.
Other micronutrients
The components of the calcium, zinc and phosphorus supplements found in the drug shops are listed in Table 7. In aquaculture, they are used for the prevention and treatment of deficiencies of calcium, phosphorus and vitamin D3. They also help with sexual maturation and bone development in fish. Moreover, the information provided on the labels indicates that they enhance the neurological system and hormonal production and reduce fat deposition in fish.
Pesticides
Extra-label pesticides including Sumithion, Energy and Rise 10 EC were available in the drug shops (Table 8).
Major companies that provided these pesticides were Eon and Setu Corporation Ltd. In agriculture, pesticides are used to prevent or control pests. In aquaculture, they are used to control noxious organisms like Hash poka in the pond. Sumithion is used particularly to control Argulus sp. in fish ponds.
Labeling of the packets and information leaflets
Most of the packets and information leaflets were not well labeled for use in aquaculture. Most of the information provided on the labels focused on either veterinary or poultry use. About 83% of the drugs were unlabeled for aquaculture, having no information on use in fish, though they were found to be used by commercial fish farmers; only 7% were well labeled and about 10% had little labeling (Fig. 2).
Extra-label drugs used by commercial fish farmers
All 30 interviewed commercial fish farmers practiced a polyculture system for growing fish.
Most frequently used extra-label drugs by the farmers
The majority (77%) of the commercial fish farmers used a number of extra-label drugs on their farms. They used these drugs because of their availability and effectiveness in their culture systems. The most frequently used extra-label drugs by farmers were antibiotics (66.66%), nutritional supplements (50%) and disinfectants (40%), followed by saline (26.66%), ammonia removers (23.33%), probiotics (10%) and pesticides (6.66%) (Fig. 3). Individually, Renamycin and Timsen were the most used drugs by the farmers, followed by Pondkleen, Acmezyme and Oralyte.
Misuse of drugs
It was found that about 73% of farmers used drugs without having any prescription from a qualified person, while 27% sometimes accepted counseling in case of disease or other environmental problems on their farms. Farmers usually received prescriptions from different sources, including technical personnel of drug companies and upazila fisheries officers. Farmers were questioned about whether they could calculate the amount of drugs needed before use, and it was found that 87% of farmers were unable to calculate whereas 13% were quite able to calculate the dose. Overdosing was found to be a very common phenomenon in commercial fish farming. Most of the time, farmers overdosed intentionally without following any guidelines. It was noticed that, overall, 90% of farmers did not follow the guidelines on doses and dosages, while 10% did. The majority (88%) of farmers mentioned that they had no idea about the risks, safety issues and toxicity reactions of using these drugs, and only 7% of farmers had a little idea about them. When farmers were unsure how to use a drug, they took suggestions from drug sellers, and it was reported that most (60%) of the farmers took suggestions from drug retailers, although the retailers had no right to advise. About 72% of farmers in Mymensingh Sadar received suggestions from drug sellers, and 45% in Trishal did the same.
Discussion
Aquaculture activities in Bangladesh are being influenced by a huge number of drugs and chemicals. The present study was conducted in Trishal and Sadar upazilas of Mymensingh district to assess the present status of the use of extra-label drugs in commercial aquaculture. It was found that a wide range of extra-label drugs were marketed by various companies for use in aquaculture. Most of the drugs were used mainly for disease treatment, disinfecting aquaculture facilities, as nutrient supplements, during pond preparation and water management, and as probiotics.
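Because inability to calculate doses was one of the most common gaps reported among the interviewed farmers, a minimal sketch of the arithmetic behind a water-borne pond treatment may be useful. All figures below (pond size, depth, and the labeled application rate) are hypothetical illustrations and are not taken from any product or table in this study; the conversion assumes the local land unit of one decimal is approximately 40.46 square metres.

```python
# Minimal sketch of the dose arithmetic a correctly labeled product would require.
# Pond dimensions and the labeled application rate below are hypothetical examples,
# not values taken from any product listed in this study.
pond_area_decimal = 50          # pond size in decimals (1 decimal ~ 40.46 m^2)
avg_depth_m = 1.2               # average water depth in metres
label_rate_g_per_m3 = 2.5       # hypothetical labeled rate, grams per cubic metre

area_m2 = pond_area_decimal * 40.46
volume_m3 = area_m2 * avg_depth_m
required_g = volume_m3 * label_rate_g_per_m3

print(f"Water volume ~ {volume_m3:.0f} m^3")
print(f"Product required ~ {required_g / 1000:.1f} kg for the whole pond")
```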
Fish health management and disease treatment were the major areas where farmers were seen to use a lot of such extra-label drugs. Only a few approved and conditionally approved drugs are available for use in aquaculture. Therefore, a number of extra-label drugs are allowed to be applied to fish under specified conditions with the supervision of a qualified veterinarian (FDA, 2020). It is evident that extra-label drugs have commonly been used in different countries in different aquaculture systems (Zarza, 2012; Baoprasertkul et al., 2012). The primary benefit of the use of extra-label veterinary drugs in aquaculture is that their prudent and responsible use supports the development of intensive, industrial-scale food production systems. In addition, they are indispensable for the treatment of epizootic disease outbreaks that have the potential to cause mass mortalities, the failure of individual aquaculture enterprises and the occasional collapse of entire industries (FAO, 2019). In the present study, 6 groups of antibiotics with 46 different trade names were recorded in the drug shops of the study areas. Fish farmers usually bought these antibiotics from drug shops without any prescription from a qualified person. Among the antibiotics, the following active compounds were found: tetracyclines (oxytetracycline), amoxicillin, erythromycin, ciprofloxacin, sulfadiazine and chlortetracycline. Occasional use of tylosin, azithromycin, pefloxacin, enrofloxacin, etc. was also reported. Antibiotics like oxytetracycline and sulfonamides (sulfadiazine and trimethoprim) had been used widely to treat several diseases such as Vibriosis and ulcerative diseases in aquaculture (Nogueira-Lima et al., 2006), although other broad-spectrum antibiotics such as oxolinic acid and flumequine were also used (Sapkota et al., 2008). Yang and Zheng (2007) reported that the extra-label veterinary medicines in China were classified by their functions and ingredients. Five different kinds of veterinary medicine were reported to be used in aquaculture, which included disinfectants, antiparasitics, water-quality treatments, antimicrobial agents and herbal treatments. Bravo (2012) also reported the use of extra-label veterinary medicines at the beginning of the salmon industry to control Piscirickettsia salmonis in Chile. Antibacterial substances are utilized in aquaculture production to combat bacterial diseases. They are mainly applied through medicated feed and enter the environment as a result of leaching from feces and uneaten treated feed (Lalumera et al., 2004). Among the chemicals used in aquaculture, special oversight should be given to veterinary drugs used to prevent and treat bacterial diseases. Only three fish-grade antibiotics, oxytetracycline, sulfadimethoxine and sulfamerazine, are approved by the FDA for use in aquaculture (FDA, 2020). In the present study, almost all the identified antibiotics were made for use in poultry and veterinary medicine but were being used indiscriminately in aquaculture for disease treatment of fish. These antibiotics were not approved by the appropriate authority for use in aquaculture. In a recent study, Hassan (2016) also found some poultry and veterinary drugs being used extensively at the farm level to produce fish. Hazards due to the use of unapproved or banned antibiotics differ depending on the type of antibiotic, dose level and national regulations, and there are as yet no harmonised regulations to deal properly with this situation at an international level.
Unapproved antibiotics or extra-label uses of antibiotics occur in two main situations: extra-label use of an approved antibiotic in aquaculture, and extra-label use of an antibiotic not specifically approved for use in aquaculture (Subasinghe, 2009). Use of unapproved extra-label drugs in commercial fish farming can create potential human health hazards. These substances may be toxic, allergenic and carcinogenic. The presence of drug residues in fish products because of drug abuse may create a negative impact on the export business, and products may even be rejected by foreign buyers. Also, farmers have to pay additional money for the extra drugs and thus lose financially (Okocha et al., 2018; Beyene, 2016). According to the Aqua Medicinal Products (AMPs) guideline of the Department of Fisheries, certain information has to be recorded during drug application, including the name of the drug, the amount of purchased drugs, the name and address of the drug retailer, the date of drug application, the amount of applied drugs, the harvesting date after drug application, the batch number, the expiry date, the withdrawal period followed and a description of the aquatic animal (DoF, 2015). Also, records have to be stored for two years. Record keeping by farmers and retailers was found to be very poor. The present study also revealed a lack of information on the leaflets of particular drugs regarding dose, dosage, withdrawal period and method of application. Consequently, most of the farmers were found to be unable to calculate the proper dose and dosage before use and, hence, drug abuse happened. In some instances there was even wrong information on the leaflet; for example, a disease mentioned there might not exist in Bangladesh aquaculture at all. As per the guideline, it is mandatory to receive a prescription from an authorized person before using antibiotics and other drugs. The majority of commercial farmers used drugs without receiving any prescription from qualified personnel. There was a lack of qualified personnel providing prescriptions in Bangladesh as well as in the study area. As a result, the majority of the farmers could not receive proper prescriptions and used drugs indiscriminately, and thus drug abuse happened regularly. Conclusion The practice of using drugs and chemicals in aquaculture operations in Bangladesh is not fully regulated and controlled by competent authorities. Hence, the extra-label use of veterinary and poultry drugs in aquaculture is a common practice of commercial fish farmers. The uncontrolled and indiscriminate use of drugs in aquaculture may lead to the emergence of antimicrobial-resistant organisms, damage the aquatic ecosystem and have a negative impact on human health. Thus, the use of drugs in aquaculture should be governed by a sufficient regulatory system. It is also important to produce and use appropriately labeled drugs in aquaculture for safe fish production. Further in-depth research is needed to understand the impact of extra-label drugs on fish, the aquatic ecosystem and human health.
2021-05-11T00:06:18.564Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "b66bce39e2d864d7578d8cd64999ea81736ca241", "oa_license": "CCBY", "oa_url": "https://doi.org/10.5455/jbau.14479", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "f24ab8eeaef65bc673dbc66f2587b54ee48e3ac3", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Business" ] }
223040649
pes2o/s2orc
v3-fos-license
Dental Anxiety and Its Possible Effects on Caries Prevalence Among Group of Dental Students In Kathmandu Medical College Introduction: The prevalence of high dental anxiety varies from 2% to 30% worldwide depending on the study population, the methods applied, and the cut-off scores used. There is strong evidence that dental anxiety is associated with dental attendance; it has been reported that individuals with higher dental anxiety tend to visit the dentist irregularly, which in turn may lead to deterioration in oral health. Studies have demonstrated that dental anxiety is associated with poor self-reported and clinically assigned oral health, more decayed and missing teeth, fewer filled teeth and worse periodontal health. Dental students are the future dental doctors who will be dealing with fearful patients in future. Knowing the facts on dental anxiety will have positive impact while treating and dealing such patients. Objectives: The overall objectives of the study were to assess level of anxiety and its possible effect on prevalence of caries among dental students studying at Kathmandu medical college and Dental hospital. Specific: To access the level of anxiety among dental students of different years (from first year to final year) along it was further focused to analyse the level of anxiety among male and female dental students. Methodology: A cross sectional study was conducted to choose a random convenient sample. The data were collected from dental students of first year to final year studying at Kathmandu medical college dental hospital–KMCDH. A structured questionnaire based on modified dental anxiety scale was used to collect the data. Patients were examined for dental caries prevalence using decay, missing and filled teeth (DMFT) index according to World Health Organisation guidelines. Results: The highest MDAS was seen among the younger batches and the mean values for MDAS declined with higher batch of dental students. The mean dental anxiety score for males was 8.9 and 15.5 for females. The difference was statistically significant the most fearful stimulus in dental clinic for both genders was local anesthetic injection, followed by drilling of teeth. Conclusion: Dental anxiety remains a significant problem for many patients of both gender and different age groups of examined students. Dental anxiety has a negative effect on oral health status by increasing the prevalence of decayed teeth. Further studies should be carried out using large random samples before generalizing this conclusion.   INTRODUCTION Oral health is one of the most important integral parts of general well-being and a significant public health issue. Despite increased awareness among dentists and patients of a preventive approach to oral diseases, and innovations in dental equipment and pain reduction, dental anxiety always persists 1,2 . Dental anxiety (DA) is described as a state of excessive and unreasonable apprehension that "something dreadful is going to happen in relation to dental treatment, and it is coupled with a sense of losing control". Dental fear is related to dental anxiety and is described as a normal unpleasant emotional reaction to perceived threat or danger in a dental situation 3,4,5 . The concepts of dental fear and dental anxiety are frequently used interchangeably in dental studies, implying "strong negative feelings associated with dental treatment". Several psychometric tests have been developed to differentiate people with and without dental anxiety. 
Along with single-item questions, Corah's Dental Anxiety Scale (DAS), the Modified Dental Anxiety Scale (MDAS), and Kleinknecht's Dental Fear Survey are the most commonly used tools in epidemiological studies to measure dental anxiety in adults, although none of the existing instruments are regarded as a gold standard 6,7 . The prevalence of high dental anxiety varies from 2% to 30% worldwide depending on the study population, the methods applied, and the cut-off scores used 8 . There is strong evidence that dental anxiety is associated with dental attendance; it has been reported that individuals with higher dental anxiety tend to visit the dentist irregularly, which in turn may lead to deterioration in oral health 9,10 . Studies have demonstrated that dental anxiety is associated with poor self-reported and clinically assigned oral health, more decayed and missing teeth, fewer filled teeth and worse periodontal health [11][12][13][14] . Dental students are future dental doctors who will be dealing with fearful patients in the future. Knowing the facts on dental anxiety will have a positive impact when treating and dealing with such patients. Considering those facts, this study was intended to assess the level of dental anxiety using the MDAS and the prevalence of caries following the WHO guidelines for the DMFT (decayed, missing and filled teeth) index among the undergraduate dental students at Kathmandu Medical College. This study reports the level of anxiety and caries prevalence among dental students of different years studying at Kathmandu Medical College and Dental Hospital, Duwakot, Bhaktapur, Nepal. METHODOLOGY A cross-sectional study was done with a sample size of 246 adults. The study consisted of 57 males and 189 females studying from the first year to the final year at Kathmandu Medical College Dental Hospital (KMCDH). This dental college provides dental education as well as health care facilities for a large population. A well-structured questionnaire was used for the collection of the data. The questionnaire was composed of two parts: the first part included the demographic details of the participants. The second part consisted of the five-item Modified Dental Anxiety Scale (MDAS) of Humphris et al., which is an improvement over the four-item Corah Dental Anxiety Scale (DAS), with an important item regarding local anesthetic injection added. The Modified Dental Anxiety Scale (MDAS) is a questionnaire that contains five multiple-choice questions related to dental anxiety. Each question has five possible answers; the answers for each item range from "not anxious" with a score of 1 to "extremely anxious" with a score of 5. The scores are summed together, with a minimum score of 5 and a maximum of 25. The MDAS is reliable and valid, has good psychometric properties, and requires just 2-3 minutes to complete. The MDAS has been validated in the United Kingdom and a number of other countries. A cut-off value of 19 and above on the MDAS was used to indicate highly dentally anxious students studying at Kathmandu Medical College and Dental Hospital who may require special attention in the college. Inclusion and Exclusion Criteria Only those students who were willing to participate in the study and who consented to be examined were included; none of the respondents declined to participate, and all provided assent to participate in the study. Thus, all the students from the first year to the final year were included in the study.
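To make the scoring procedure described above concrete, the following minimal sketch (in Python; illustrative only, not part of the original study) sums the five MDAS item scores and applies the cut-off of 19 used here. The function and variable names are assumptions introduced purely for demonstration.

```python
def mdas_total(item_scores):
    """Sum the five MDAS items, each scored 1 ('not anxious') to 5 ('extremely anxious')."""
    if len(item_scores) != 5 or not all(1 <= s <= 5 for s in item_scores):
        raise ValueError("MDAS needs exactly five item scores in the range 1-5")
    return sum(item_scores)  # possible totals range from 5 to 25


def is_highly_anxious(item_scores, cutoff=19):
    """Apply the cut-off used in this study: a total of 19 or above flags high dental anxiety."""
    return mdas_total(item_scores) >= cutoff


# Example: a respondent choosing 4, 4, 3, 5, 4 scores 20 and is flagged as highly anxious.
print(mdas_total([4, 4, 3, 5, 4]), is_highly_anxious([4, 4, 3, 5, 4]))
```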
Clinical Examination After written consent was obtained from each individual, the participants filled in the questionnaire. The dental caries status was evaluated by using the decayed (D), missing (M) and filled (F) teeth (DMFT) index according to WHO guidelines, using a plane mouth mirror and an explorer under good light. Dental radiographs were not included and third molars were excluded. Decayed teeth (D) were defined as the number of teeth with primary or secondary caries; missing teeth (M) were defined as the number of missing teeth, irrespective of the reason; and filled teeth (F) were defined as the number of filled teeth, including all types of filling materials and crowns. For each subject, the sum of the individual D, M and F values gave the corresponding DMFT score, which is an indicator of dental disease and of previous dental treatment experience up to the time of examination. Ethical approval for the study was obtained from the Institutional Review Committee of Kathmandu Medical College. Statistical Analysis All the data collected through the questionnaires were entered in MS-Excel, coded and imported into SPSS version 18 for further statistical analysis. RESULT The study sample consisted of 246 dental students, 23.1% males and 76.9% females. Only 44 students (8.8%) showed high dental anxiety. Table I shows the distribution of student respondents according to gender and academic year. The year-wise student distribution was from the first year to the final year. The involvement of female students in the study was greater than that of males. The highest MDAS scores were seen among the younger batches and the mean MDAS values declined in the higher batches of dental students, with a statistically significant difference between batches of students. Regarding the association between dental anxiety and gender, dental anxiety was higher among females compared to males. The mean dental anxiety score was 8.9 for males and 15.5 for females. The difference was statistically significant. The most fearful stimulus for both genders was the local anesthetic injection, followed by drilling of teeth (Table II). The least fearful situation for both genders was making an appointment for a dental procedure on the following day. Individuals with high dental anxiety had a statistically significantly higher number of decayed teeth; however, there were no statistically significant differences in missing (M) or filled (F) teeth or in total DMFT index scores between the high and low dental anxiety groups (Table III). DISCUSSION The prevalence of dental anxiety among the students was 8.8%. This result is within the range of 5-20% reported from other countries such as Saudi Arabia (8.5%) 13 , the Netherlands (17.9%) 11 , the United Kingdom (11.6%) 15 , the USA (12.2%) 16,17 , Australia (9.5%) 18 , China (8.7%) 19 and Denmark (10.2%) 20 . A relatively low level of dental anxiety was seen in the present study. This may be due to the fact that this study was carried out among dental students studying and working at the same dental college. Since many individuals with extremely high dental anxiety would not voluntarily attend or choose to study dentistry, this may have resulted in an underestimation of the prevalence of dental anxiety. This study population does not reflect dental anxiety in the entire Nepali adult student population, and further studies that include more representative samples are required.
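As a worked illustration of the index described above, the short sketch below (hypothetical Python code, not from the paper) derives a per-subject DMFT score from tooth-level counts and summarizes a group mean; the record format is an assumption for demonstration.

```python
def dmft_score(decayed, missing, filled):
    """DMFT is simply the sum of decayed (D), missing (M) and filled (F) teeth for one subject."""
    return decayed + missing + filled


def mean_dmft(records):
    """Average DMFT over a list of (D, M, F) tuples, e.g. one tuple per examined student."""
    scores = [dmft_score(d, m, f) for d, m, f in records]
    return sum(scores) / len(scores)


# Example with three hypothetical students: DMFT scores 3, 1 and 4, mean approximately 2.67.
print(mean_dmft([(2, 0, 1), (1, 0, 0), (3, 1, 0)]))
```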
It has been found out that the dental anxiety scale among the dental students significantly decreased with the higher batch students. Various cross-sectional studies have reported that the prevalence of dental anxiety decreases with age 15,21 . This study revealed that females were significantly more anxious than males. The result has an agreement with many studies that assessed dental anxiety between both genders and reported that the prevalence of dental anxiety was higher in females than in males 15,16 . However, some studies failed to find a significant difference in dental anxiety between genders 17 . The explanation for this gender difference may be due to actual differences in anxiety levels between both genders, a greater readiness among females to acknowledge feelings of anxiety, or minimum level to cope with the dental situation, or may simply reveal gender differences in self-reporting dental anxiety with male's denial or maybe a combination of multiple factors. On the basis of MDAS, needle injection during dental treatment was found to be the most common anxietyproducing stimulus for both genders. This is consistent with other studies 3 . The authors explained these situations as a four-dimension problem. The dimensions were in terms of fear of pain, fear of local anesthetic solution, fear from acquired diseases and physical injury. Avoidance of necessary dental treatment is said to be related to dental anxiety, furthermore, if anxious dental patients attend for an emergency dental visits, they will likely avoid necessarily follow up appointments to complete dental treatment properly 22 . This dental avoidance behavior will lead mostly to more extensive development of carious lesions, which ultimately requires more invasive and painful treatment, that will augment the level of dental anxiety and the patient will be in the zone of "vicious cycle of fear". The effect of dental anxiety on caries prevalence was discussed by many researchers and found that avoidance of dental treatment was highly correlated with anxiety scores and with increased caries morbidity. The present study supports these findings and was related to the fact that the one with high dental anxiety had a statistically significant higher number of decayed teeth (D), compared with low dental anxiety patients. It is found that individuals with high dental fear, had a statistically significantly higher number of decayed and missing teeth, but statistically significant lower number of filled teeth. The present study supports its finding regarding the differences in DMFT between both groups and precisely explained that the dentally anxious patients had significantly more missing and fewer filled teeth compared to low dental fear subjects. In general, dental anxiety had a negative effect on the utilization of dental services and oral health status. So, breaking this "vicious cycle" is important to improve the oral health status of those fearful individuals. This needs efforts from both the dentists and patients. The dentists should have more understanding, patience, higher communication skills, and behavioral management procedures for better treatment outcomes. From the patient perspective, they should be able to recognize and control their fears from dental treatment and improve dental utilization behaviors to improve their oral health status. If this approach fails, pharmacological means may be used to solve this problem. 
CONCLUSION Dental anxiety has a negative effect on oral health status by increasing the prevalence of decayed teeth. Dental anxiety scores among dental students decreased significantly in the higher batches. In order to generalize the results of such studies to adults, future studies should be carried out using larger random samples.
2020-10-16T18:05:45.926Z
2019-12-31T00:00:00.000
{ "year": 2019, "sha1": "4e56b099f6db8d813e8dc0941e8445a3fc473c0f", "oa_license": null, "oa_url": "https://doi.org/10.3126/jngmc.v17i2.31661", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "4e56b099f6db8d813e8dc0941e8445a3fc473c0f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
236610132
pes2o/s2orc
v3-fos-license
A semianalytical solution of the modified two‐dimensional diffusive root growth model Accurate estimation of the temporal and spatial root water uptake patterns in root zone is needed for an improved understanding of water and chemical transport dynamics in vadose zone. Rooting system is of great importance to describe the plant root water uptake directly. In this study, the diffusive root growth model was coupled with the Environmental Policy Integrated Climate (EPIC) crop growth model through changing the boundary condition, and a new semianalytical solution of the modified root diffusive model was derived considering the dynamic root length density distribution simulated by the EPIC crop growth model. To test the modified root growth model, the field‐measured data of maize (Zea mays L.) and tomato (Solanum lycopersicum L.) root length density distribution were used for parameter optimization. A MATLAB program was developed by coupling the modified root diffusive model with the genetic algorithm to facilitate the parameters optimization. Results showed that the simulated root length density distribution at different times was in a good agreement with the observed values. The RMSE and bias values ranged from 0.22 to 0.25 cm cm–3 and from −3.0 to 24.5%, respectively. The modified diffusive root growth model can therefore be used to simulate the two‐dimensional root growth during the crop growing period. management strategy (Wöhling & Schmitz, 2007). It is well known that the interaction between soil water dynamics and crop growth is very complicated; however, both soil water process and crop growth process can be quantified with mathematical models. The interaction between the two processes can be described with the term "root water uptake." Therefore, accurate estimation of the temporal and spatial root water uptake patterns is needed for an improved understanding of the magnitude of soil water transport to atmosphere and groundwater (Vrugt et al., 2001). Actually, root water uptake in field condition is a threedimensional (3D) problem. However, many crops (e.g., maize [Zea mays L.], tomato [Solanum lycopersicum L.], and pepper [Capsicum annuum L.]) are planted in rows. The distance between two plants is quite small, whereas the distance between two rows is much larger than the former one. Plant roots develop and connect each other along the row direction, whereas they develop but do not connect along the direction perpendicular to the row. Thus, for simplicity, soil water flow and root water uptake are treated as two-dimensional (2D) (Li et al., 2015;Skaggs et al., 2004;Wang et al., 2014;Wöhling & Schmitz, 2007). Many models and software packages (i.e., SWMS_2D [Šimůnek, Vogel, & van Genuchten, 1994], CHAIN_2D , Nitrogen-2D [Lu et al., 2004], and HYDRUS-2D [Šimůnek et al., 1999]) have been developed to quantify the 2D soil water flow. However, the root water uptake term in most models does not account for the root dynamics. For example, the HYDRUS model (Šimůnek et al., 2006) used the root water uptake term of Vrugt et al. (2001) without considering the plant root developing process. Some other models have included the root distribution pattern and root development in the root water uptake terms (Roose & Flowler, 2004;Wang et al., 2014). For example, Wang et al. 
(2014) developed a model to simulate the root growth and root water uptake, and in turn to simulate soil water flow and crop yield on the basis of coupling CHAIN_2D with crop growth model EPIC (Environmental Policy Integrated Climate). In this developed model, the root depth growth model was included in the Vrugt root water uptake term (Vrugt et al., 2001) with simplified assumption that the horizontal root length is a proportion of rooting depth. Furthermore, Wöhling and Schmitz (2007) developed a physically based coupled model for simulating one-dimensional (1D) surface-2D subsurface flow and plant water uptake. Within this model, an empirical exponential function with uniform root density distribution was applied to describe the root water uptake. However, these simplifications cannot quite well describe the root growth in the horizontal direction. Therefore, a multidimensional root growth model for describing the dynamic root distribution in vertical and horizontal directions is of great importance to accurately model soil water movement in vadose zone. Core Ideas • A novel diffusive root growth model was developed by coupling the EPIC crop growth model. • A semianalytical solution of the modified diffusive root growth model was derived. • The modified root growth model was tested using field experimental data of maize and tomato. The root system architecture model and diffusive-type root growth model are the two main types of models capable of describing the 2D/3D dynamic root distribution in vertical and horizontal directions. The architecture model can explicitly describe and visualize the root system and analyze plantenvironment interactions, especially in 3D cases (Dunbabin et al., 2002;Dupuy et al., 2010;Pagès et al., 2004). However, the root system architecture model is complex with incorporating the space and considering the biophysical interactions between the root and its environment. Consequently, it requires a large number of input parameters that are difficult to obtain (Dupuy et al., 2010). Furthermore, the root architecture model requires sophisticated algorithms as coupling with soil process models (Dupuy & Vignes, 2012), whereas the diffusive-type root growth model is a simple model and provides good spatial description of root growth with fewer parameters (Dupuy et al., 2010). Page and Gerwitz (1974) first developed the diffusion root growth model in analogy to the equations for describing diffusion or heat flow. They assumed that the root proliferation was driven by the gradient of root density, that the root growth rate was a constant, and that the downward proliferation was independent of soil state. With the assumptions of variable diffusivity and root growth rate, Hayhoe (1981) developed a more general diffusion model. Acock and Pachevsky (1996) proposed a generic 2D convective-dispersive root growth model. They added the model with a convection term including the convection-like downward propagation caused by geotropism. In addition, they also assumed that the roots were classified as two types: young roots and mature roots. Elsewhere, de Willigen et al. (2002) considered the anisotropic coefficients in the diffusion model, which can better characterize the root development in different directions. 
Although it is difficult to attribute explicit biological meaning to the parameters, the diffusive root growth model has been proven to be powerful in describing the root growth and proliferation, as well as water distribution and uptake (Acock & Pachevsky, 1996;Hayhoe, 1981;Heinen et al., 2003;Reddy & Pachepsky, 2001). As a plant develops, its biomass accumulates and is partitioned into the root system and shoot system; all these biological processes can be described by a crop growth model. However, plant biomass accumulation was not counted in the diffusive root growth model. Thus, it is necessary to integrate the diffusive root growth model and crop growth model to provide better physical insight into the root proliferation process and obtain more accurate estimation of root distribution. Several have tried to integrate the diffusive root growth model with the crop growth model (Dathe et al., 2014;Mollier et al., 2008;Pronk et al., 2005). For example, Dathe et al. (2014) described potato (Solanum tuberosum L.) root growth using a diffusive root growth model that was coupled with a specific crop growth model (SPUDSIM). However, the root depth growth dynamics were not considered in the above coupled crop growth models. At present, there are many simple and universal crop growth models including the root depth growth process, such as CropSyst (Stöckle et al., 2003), WOFOST (Spitters et al., 1989), the Daisy crop production model (Abrahamsen & Hansen, 2000), and the EPIC crop growth model (Williams et al., 1989). Among these models, the EPIC crop growth model with less demanding data input, would be preferred to be coupled with the diffusive root growth model. In general, the diffusive model can be solved by using either a numerical method or an analytical method. With the assumptions of variable diffusivity and root growth rate, the numerical solution was obtained using the finite element method (Acock & Pachevsky, 1996;Hayhoe, 1981;Heinen et al., 2003;Pronk et al., 2005). The analytical solution can also be derived in more limiting situations, such as those with a constant diffusivity or a constant root growth rate (de Page & Gerwitz, 1974). As the diffusive model was coupled with the crop growth model, it will be rather difficult to obtain its analytical solution similar to that of de . A semianalytical solution is an option for the coupled model. Therefore, the objectives of this study were (a) to develop a modified diffusive root growth model integrating with the EPIC crop growth model; (b) to derive a semianalytical solution of the modified diffusive root growth model with considering the constrained rooting depth; and (c) to test the modified root growth model using field experimental data of maize and tomato. Diffusive root growth model Root growth can be considered as a diffusion process driven by the gradient of root density, and the diffusive root growth model was developed in analogy to the equations for describing diffusion or heat flow (Page & Gerwitz, 1974). With the assumptions of neglecting the geotropic velocity, the impacts of water stress and overwetting on root growth and soil heterogeneity, and so on, de developed a 2D diffusive root growth model. In this model, the root decay was treated as a first-order sink term. The root biomass accumulation at soil surface was treated as a flux boundary condition at a rate of Q. The diffusivity coefficient (D) was used to reflect the effect of soil variables on root growth. D was assumed as a constant. 
To describe the root growth in a row planting system, the root zone was conceptualized as a 2D rectangular geometry. In the horizontal direction (X direction), perpendicular to the row, the domain is finite. In the vertical direction (Z direction), the region is considered to be infinite. The governing diffusive equation for root length density in 2D Cartesian coordinates can then be written as ∂C/∂T = D_x ∂²C/∂X² + D_z ∂²C/∂Z² − λ′C (Equation 1), subject to boundary conditions (Equation 2) in which the root biomass input Q enters as a flux across the soil surface, and to an initial condition (Equation 3), where C is the root length density (cm cm−3), X and Z are the spatial coordinates in the horizontal and vertical directions, respectively, T is time (d), λ′ is the decay rate constant (d−1), X1 and L are the half distance between two crops and two rows (cm), respectively, D_x and D_z are the diffusivity coefficients (cm2 d−1) in the X and Z directions, respectively, and Q is the fine root growth rate located at the surface of the root system (cm cm−2 d−1). Modification of boundary conditions In this study, to better predict the root distribution, the diffusive root model was coupled with the EPIC crop growth model (Williams et al., 1989). The EPIC model needs less data input and uses a unified approach to simulate the growth of >80 types of crops (Williams et al., 2006). The EPIC model includes crop phenological development based on daily accumulated heat units, a harvest index for partitioning grain yield, Monteith's approach for potential biomass accumulation, and water and temperature stress adjustments. With the coupled model, the rooting depth estimated by the diffusive model and by the EPIC crop growth model should be the same at any given time. Therefore, the bottom and lateral boundary conditions of the diffusive root growth model should be changed accordingly (Equations 6-9), where RD is the root depth, which was considered a constant value on a given day; it can be derived from the EPIC crop growth model (Williams et al., 1989), where RD_max is the maximum root depth, RD_i is the root depth of the ith day, ΔRD is the daily change of root depth, and HUI_i is the heat unit index, given as HUI_i = (Σ_{k=1}^{i} HU_k)/PHU_j (Williams et al., 1989), where PHU_j is the potential heat unit required for maturity of crop j. PHU may be set as a default value or calculated by the model from planting to harvest. HU_k is the daily heat unit, specified by HU_k = (T_mx,k + T_mn,k)/2 − T_b,j (Williams et al., 1989), where T_mx,k and T_mn,k are the maximum and minimum air temperatures for the kth day (°C), and T_b,j is the crop-specific base temperature of crop j (°C). In this study, the parameter T_b,j for maize and tomato was set to default values from the manual of the EPIC model (Williams et al., 2006). Semianalytical solution of modified diffusive root growth model For simplicity, we define dimensionless terms (Equation 14) to transform Equations 1-3 and Equations 6-9 into dimensionless form, where C_ref is the reference root length density, taken as 1 cm cm−3 in this study. Substituting Equation 14 into Equations 1-3 and Equations 6-9 leads to the dimensionless problem (Equation 15). Applying the Laplace transform to Equation 15, one obtains the transformed equation, where the Laplace transform of c is denoted by u and the Laplace parameter is s. The boundary conditions retain their original forms except for Equation 16, whose transformed counterpart involves φ(s), the Laplace transform of the corresponding time-dependent boundary function.
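As an illustration of the heat-unit bookkeeping described above, the sketch below (Python, illustrative only) accumulates the daily heat units HU_k = (T_max + T_min)/2 − T_b and the heat unit index HUI_i = Σ HU_k / PHU. The flooring of negative heat units at zero, the capping of HUI at 1, and the final function that scales root depth linearly with HUI up to RD_max are assumptions added for demonstration, since the paper's exact ΔRD expression is not reproduced in the text.

```python
def daily_heat_unit(t_max, t_min, t_base):
    """HU_k = (T_max,k + T_min,k)/2 - T_b, floored at zero when the daily mean is below the base temperature (assumption)."""
    return max(0.0, (t_max + t_min) / 2.0 - t_base)


def heat_unit_index(daily_temps, t_base, phu):
    """HUI_i = cumulative sum of daily heat units up to day i, divided by the potential heat units PHU (capped at 1, assumption)."""
    hui, total = [], 0.0
    for t_max, t_min in daily_temps:
        total += daily_heat_unit(t_max, t_min, t_base)
        hui.append(min(total / phu, 1.0))
    return hui


def root_depth(hui, rd_max):
    """Assumed illustrative scaling: root depth grows in proportion to HUI and is capped at RD_max."""
    return min(rd_max * hui, rd_max)


# Example: three warm days for a crop with T_b = 8 C and PHU = 1500 degree-days.
hui = heat_unit_index([(28, 16), (30, 18), (27, 15)], t_base=8.0, phu=1500.0)
print([round(h, 4) for h in hui], root_depth(hui[-1], rd_max=120.0))
```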
Then, the finite cosine transform with respect to x is taken; this transformation is defined following Churchill (1972). For the boundary condition of Equation 18, the general solution of Equation 25 is a cosine series, and the suitable solution of Equation 25 is obtained by retaining the admissible terms ū_n for n = 1, 2, 3, … As the diffusive root growth model was coupled with a crop growth model, the time-dependent boundary function can be considered constant over a short interval (i.e., 1 d). Eventually, the coefficients A_0, ū_0, A_n and ū_n take the forms given in Equations 30-33. Furthermore, the solution for u is given by the inversion formula of the finite cosine transform (Churchill, 1972, p. 355). The dimensionless root length density c can then be obtained by applying the inverse Laplace transform, denoted L−1. In Equation 39, 200 terms of the series were used to obtain the analytical solution. However, there are no analytic expressions for the inverse Laplace transforms of u_0 and u_n. Therefore, the root length density in the real-time domain was calculated with a numerical inverse Laplace transform algorithm, that of Stehfest (1970), in which n must be a positive even integer and P is the Laplace-domain function. The coefficients V_i were calculated by using the Fortran program of Zhan and Zlotnik (2002). Field description and measurements The modified diffusive root growth model was tested by using the observed maize and tomato root length density distribution data obtained from field experiments (Gao et al., 2010; Zotarelli et al., 2009). Maize was sown on 16 April and harvested on 18 August in 2007 and 20 August in 2008. Root samples were collected from the soil profile by pressing sharp-edged iron boxes vertically into the soil surface. The soil cores were collected from the top layer at 10-cm intervals along the north-south axis and the east-west axis. Root length was measured with the modified Newman line-intersect method (Tennant, 1975). Only a part of the roots was measured directly, and the rest of the roots were estimated by using the regression of root length on root weight. In this study, the sole-maize treatment was selected to test the modified diffusive root growth model (Gao et al., 2010). Parameter optimization and model test A genetic algorithm (GA) was applied to obtain the optimized parameters of the modified diffusive root growth model based on the observed root length density of the soil core samples. For a relatively fast convergence to the global optimum, we used a relatively high crossover fraction of 0.85 (Vrugt et al., 2001). In addition, the population size and the number of generations were 1,000 and 30, respectively. The fitness function for the GA was defined, following Vrugt et al. (2001), as the sum over all measurement points of the squared differences between measured and predicted root length densities, OF(s) = Σ [b′(x, z) − b(x, z; s)]², where OF(s) is the objective function, n is the number of measurements, b′(x, z) and b(x, z; s) are the measured and predicted root length densities at the point (x, z) according to the semianalytical solution of the modified model, respectively, and s is the parameter vector representing the fitting parameters. Smaller values of the objective function OF(s) imply that a better fit is obtained. The allowable ranges of the parameters included in s are shown in Table 1, following Heinen et al. (2003). The GA optimization was carried out using MATLAB (version 7.12.0 [R2011a], The MathWorks). We developed a MATLAB program coupling the modified root diffusive model with the GA to facilitate the computation.
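Since the paper inverts the Laplace-domain solution numerically with the Stehfest (1970) algorithm, a compact, self-contained version of that algorithm is sketched below in Python (the paper itself uses MATLAB and a Fortran routine for the V_i coefficients; the test function at the end is illustrative and unrelated to the root model).

```python
from math import factorial, log, exp


def stehfest_coefficients(n):
    """Stehfest weights V_i for an even number of terms n."""
    assert n % 2 == 0, "n must be a positive even integer"
    half = n // 2
    v = []
    for i in range(1, n + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, half) + 1):
            s += (k ** half) * factorial(2 * k) / (
                factorial(half - k) * factorial(k) * factorial(k - 1)
                * factorial(i - k) * factorial(2 * k - i))
        v.append(((-1) ** (half + i)) * s)
    return v


def stehfest_invert(laplace_f, t, n=12):
    """Approximate f(t) = L^{-1}{F(s)}(t) as (ln 2 / t) * sum_i V_i * F(i * ln 2 / t)."""
    ln2_t = log(2.0) / t
    v = stehfest_coefficients(n)
    return ln2_t * sum(v[i - 1] * laplace_f(i * ln2_t) for i in range(1, n + 1))


# Sanity check on a transform pair with a known inverse: F(s) = 1/(s + 1)  <->  f(t) = exp(-t).
print(stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0), exp(-1.0))
```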
A GA is a powerful tool for parameter identification when the number of fitted parameters is large (Vrugt et al., 2001). Although a GA is an effective way to determine the global minimum region, it is not the most efficient way to find the exact optimum location (Vrugt et al., 2001). In order to obtain the optimized parameters, three steps were conducted in this study. Firstly, the GA was used to determine the global minimum region. Then, 100 series of the parameters of the modified diffusive root growth model were tested to search for the exact optimal location. Finally, the fitted parameters were determined as those with the smallest RMSE among the 100 tested series. To calibrate the diffusive root growth model, the observed maize and tomato root length densities were used to optimize the parameters. The observed root length densities were compared with the simulated ones. Then, the fitted value, the median value, and the 95% confidence interval for each parameter were analyzed. The RMSE was used to evaluate the model performance for predicting root length density according to the criteria proposed by Willmott (1982). It represents the discrepancy between observations and predictions; the closer the RMSE is to 0, the more accurate the model is. The RMSE was calculated as RMSE = [(1/n) Σ_{i=1}^{n} (O_i − P_i)²]^{1/2}, where O_i and P_i are the observed and simulated values, respectively, and n is the number of value pairs. In addition, bias was used to identify whether the model over- or underpredicted root length density; it becomes positive when the model overpredicts and negative when the model underpredicts the observed values. Statistical analysis of the optimized parameter series was performed with the descriptive statistics of SPSS 16.0 software (SPSS). RESULTS AND DISCUSSIONS To calibrate the modified root growth model, the observed root length density values were compared with the simulated ones based on the fitted parameters. As shown in Figure 1, a good agreement can be found between the simulated and measured maize root length densities. However, a large discrepancy appeared when the root length density was high. In addition, statistical test results showed that the bias was 24.5%, which was lower than the result of Dathe et al. (2014), in which the bias ranged from −50.7 to 226.3% for potato root density at harvest. In addition, the bias between the predicted tomato root length densities and the observed ones was −2.8% (Figure 2). This indicated that the predictions were in good agreement with the observations.
FIGURE 1 Simulated maize root length density vs. observed values
FIGURE 2 Simulated tomato root length density vs. observed values
The fitted parameters are shown in Tables 2 and 3 for the maize and tomato root growth models. To show the stability of the optimization, the predicted median value and 95% confidence interval for each parameter of maize in 2007 and 2008 are presented in Table 2. As shown in Table 2, the decay rate constant λ′ has a relatively higher uncertainty as compared with the other parameters. This may be because the decay rate is a sensitive parameter. If the observed decay rate value were used as an input value, the accuracy of predicting root growth and distribution could be improved. However, the lowest CV value was observed for PHU. The fitted values of the parameters D_x, D_z, λ′, Q, and PHU were in the ranges of 0.4320-0.6073 cm2 d−1, 3.2086-3.6974 cm2 d−1, 0.0001 d−1, 0.1682-0.225 cm cm−2 d−1, and 3,447-3,460 °C, respectively.
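The two goodness-of-fit measures can be written compactly. In the sketch below (Python, illustrative only), RMSE follows the standard root-mean-square definition given above, while the percent bias is computed as the sum of (predicted − observed) relative to the sum of observations; that percent-bias formulation is an assumption, since the paper's exact bias equation did not survive extraction.

```python
from math import sqrt


def rmse(observed, predicted):
    """Root mean square error between paired observed and simulated values."""
    n = len(observed)
    return sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)


def percent_bias(observed, predicted):
    """Assumed percent-bias definition: positive when the model overpredicts, negative when it underpredicts."""
    return 100.0 * sum(p - o for o, p in zip(observed, predicted)) / sum(observed)


obs = [0.8, 1.2, 0.5, 0.9]   # hypothetical root length densities (cm cm^-3)
sim = [0.9, 1.1, 0.7, 1.0]
print(round(rmse(obs, sim), 3), round(percent_bias(obs, sim), 1))
```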
Similar results were obtained by de Willigen et al. (2002), who estimated D_z = 5.6 cm2 d−1 when deriving the analytical solution of the diffusive root growth model for maize. Heinen et al. (2003) obtained a numerical solution, and the diffusivity coefficients in the horizontal and vertical directions were 2.234 and 4.552 cm2 d−1 for maize-row, respectively. For tomato, the predicted median value and 95% confidence interval as well as the fitted values for each parameter are shown in Table 3. The parameter λ′ had the greatest CV value. The fitted values of the parameters D_x, D_z, λ′, Q, and PHU for tomato are also listed in Table 3.
TABLE 2 The fitted parameters, the 95% confidence intervals and the standard deviation of the parameters of the diffusive root growth model for maize in optimization with and without considering potential heat unit (PHU). Note: D_x, diffusivity coefficient in the x direction; D_z, diffusivity coefficient in the z direction; λ′, decay rate constant; Q, fine root growth rate located at the surface of the root system.
TABLE 3 The fitted parameters, the 95% confidence intervals and the standard deviation of the parameters of the diffusive root growth model for tomato in optimization with and without considering potential heat unit (PHU).
When the diffusive root growth model was coupled with the crop growth model, the parameter PHU could be calibrated against the leaf area index, crop biomass, and yield. To reduce the number of optimized parameters and increase the optimization accuracy and efficiency, the parameter PHU was considered a constant value calibrated by the crop growth model in this study. Based on the above optimized parameters, the values of PHU were set as the medians of the optimized values (i.e., 3,244 °C in 2007 and 3,183 °C in 2008 for maize, and 1,020 °C for tomato). The predicted median values, 95% confidence intervals and fitted values for each parameter of maize are also presented in Table 2. The fitted values of the parameters D_x, D_z, λ′, and Q were in the ranges of 0.4320-0.6073 cm2 d−1, 3.2086-3.6974 cm2 d−1, 0.0001 d−1, and 0.1682-0.225 cm cm−2 d−1, respectively. There was little discrepancy between the fitted parameter values obtained with or without consideration of PHU in the optimization. This indicated that the three-step optimization scheme is a powerful tool for parameter calibration. Similar results were obtained for tomato (Table 3). The fitted values of the tomato root growth parameters D_x, D_z, λ′, and Q were 1.0448 cm2 d−1, 5.6928 cm2 d−1, 0.0001 d−1, and 0.4526 cm cm−2 d−1, respectively. With these fitted parameters, the bias values between simulated and observed root length densities for maize and tomato were 17.8 and −3.0%, respectively. The simulated root length density distributions with fitted parameters at different times during the growing period were compared with the observed ones. As shown in Figure 3, a good agreement was obtained between the simulated and the observed root length density distributions on 5, 17, and 28 June 2007. However, the greatest discrepancy was found on 25 May 2007. The simulated root length density was higher than the measured values, especially in the topsoil of 0-10 cm. A similar result was obtained for maize in 2008 (Figure 4) and for tomato (Figure 5). As shown in Figure 5, the greatest difference between the predicted and the measured root length densities was found in the topsoil (0-20 cm) on 24 DAT. Thereby, it can be concluded that the predicted root length density of the upper root zone was greater than the observed values during the early growing period.
This may be attributed to the constant value of Q (Heinen et al., 2003). In this study, Q was treated as an optimized parameter, similar to Pronk et al. (2002) and Heinen et al. (2003). However, as a linkage between the diffusive root growth model and the crop growth model, Q can be transformed from the rate of root biomass input Q_M (g cm−2 d−1), which is estimated by the crop growth model, using the mean root radius R (cm), the bulk density of root ρ (g cm−3), the dry matter content of root d (g dry g−1 fresh), the surface length occupied by a single plant L_surface (cm), and A, the sum of the area where root input occurs (cm2). Mollier et al. (2008) integrated the diffusive root growth model with a crop growth model, and the input rate of root length density Q was calculated from the root biomass input rate Q_M predicted by the crop growth model. Furthermore, Heinen et al. (2003) indicated that the root biomass input Q_M changed during the growth stages: it is very small during the early growth stage and then increases rapidly after the middle growth stage. Therefore, if the variation of root biomass input during the growing period were considered in the modified diffusive root growth model, the predicted root length distribution would fit the observed values better. The RMSE values for maize and tomato root length density during the 100 runs are presented in Figure 6a. The RMSE values ranged from 0.22 to 0.39 cm cm−3, and the highest value was obtained for tomato root length density.
FIGURE 6 The RMSE of the root length density for maize and tomato in optimization (a) with and (b) without considering potential heat unit (PHU)
This may be because fewer root samples were available for the tomato optimization; the number of samples for maize and tomato was 84 and 36, respectively. Without considering PHU, the RMSE values for maize in 2007 and 2008 were in the ranges of 0.22-0.27 cm cm−3 and 0.22-0.28 cm cm−3, and that for tomato was 0.25-0.39 cm cm−3 (Figure 6b). Similar results for tomato were obtained by Heinen et al. (2003), who indicated that the RMSE values for numerical simulation of tomato root length density were in the range of 0.21-4.49 cm cm−3, whereas they were about 0.10 cm cm−3 for maize. The bias of maize roots ranged from 17.7 to 24.5% in this study. The greater RMSE and bias for maize predicted by the semianalytical solution of the modified root growth model may be attributed to two reasons. One is measurement error. Dathe et al. (2014) indicated that several factors influence experimentally obtained root distributions: when roots are washed from the surrounding soil, medium and fine roots are lost, yielding a lower total root mass than that actually present in the soil, especially in deeper soils. The other reason may be the limitations of the modified diffusive root growth model. In this study, the modified diffusive root growth model was developed with several assumptions, without considering regulating or controlling factors including soil temperature, soil water content, soil resistance to penetration, soil structure, soil chemical transport and advection conditions, and soil microbiological conditions. In addition, Dathe et al. (2014) simulated the effect of water stress on potato root growth in the diffusion process by using the root biomass input rate. A similar approach was adopted by Mollier et al. (2008).
Nevertheless, although the semianalytical solution of the root diffusive model was easily used to describe the rooting pattern and distribution, it cannot handle complex problems such as the situa-tion that root input occurs inside the rooting medium (Heinen et al., 2003). If abovementioned factors were all integrated into the proposed root distribution function, the numerical model developed by coupling the crop growth to soil water and solute transport dynamics could be improved. Then, the modified diffusive root growth model is expected to perform better in simulating root length density distribution under a 2D domain. CONCLUSIONS To establish the process-based simulation tools for modeling water flow in the surface-soil-crop system, rooting system is of great importance to accurately estimate the temporal and spatial root water uptake patterns in root zone. In this study, integration of the diffusive root growth model and EPIC crop growth model was carried out by changing the boundary conditions. In the modified model, the diffusive root growth was constrained by the root depth simulated with the EPIC crop growth model. Then, the semianalytical solution of the modified diffusive root growth model was derived. Moreover, the modified root growth model was tested by using field experiment data of root length density for maize and tomato. A MATLAB program was developed to integrate the modified root diffusive model and the GA for facilitating the parameters optimization. With the optimized parameters, the modified root growth model performed well in predicting root length density, with RMSE and bias values ranging from 0.22 to 0.25 cm cm −3 and from −3.0 to 24.5%, respectively. It can be concluded that the modified root growth diffusive model can be used to simulate 2D root growth conditions during the crop growing period.
2021-08-02T00:05:20.872Z
2021-05-17T00:00:00.000
{ "year": 2021, "sha1": "7bbaec6bb682974c6b5199936fd97777dab26a3e", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/vzj2.20132", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "2ffec032a0438e026d2e96c9ac4bd1a68f06be39", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Mathematics" ] }
12758011
pes2o/s2orc
v3-fos-license
Topological frames in sign-based grammars The paper presents some ideas on how topological frames can be integrated in HPSG-like grammatical descriptions and be used for parsing. Phrase structure is taken to be purely hierarchical and is represented by the special feature DTRS. The topological frames account for basic word order constraints of major categories, while linear precedence rules account for word order constraints within the positions of a topological frame. Introduction In a context-free phrase structure grammar, whether augmented with features or not, a rule expresses simultaneous constraints on hierarchical and sequential relationships. Gazdar et al. (1985) showed how general rules of word order (LP-rules) could be formulated independently of hierarchical relations and, together with a set of unordered phrase structure rules (ID-rules), define a phrase structure grammar of a special form. The local tree in (1) is licenced either by the rewriting rule (2) or by the ID- and LP-rules of (3a,b). (1) VP[V NP PP] (2) VP → V NP PP (3) (a) VP → V, PP, NP; (b) V < NP, V < PP, NP < PP Pollard & Sag (1987) developed these ideas by showing how general rules of (unordered) phrase structure can be stated within a formalism employing typed feature structures. Sequential relationships are still handled by LP-rules, but have a different domain; they no longer order dominated constituents directly, but apply to values of the special attribute PHON. The phonological expression associated with a mother must then be some permutation of the phonological expressions associated with the daughters that respects all LP-rules. An HPSG-like grammatical representation of (1) is shown in (4), where the value of PHON is determined by analogs of the LP-rules in (3b). There are problems, however, for grammars relying on LP-rules as the sole means for stating word order constraints. Languages with discontinuous constituents, such as the Scandinavian languages, and especially German, pose difficulties. There have accordingly been many proposals to augment LP-rules in various ways. Reape (1989) proposes a more complex combinatoric operation, sequence union, which allows access to non-immediate daughters of a constituent, while Engelkamp et al. (1992) propose to widen the domain of LP-rules to what they call head-domains, i.e. sets of constituents consisting of a lexical head with all its complements and adjuncts. In this paper I propose instead to restrict the use of LP-rules to smaller domains, called clusters, while augmenting the grammar with another device to handle word order regularities: the topological frame. The frames encode word order regularities that are valid for a class of constituents. They can basically be thought of as formalizations of the topological schemas used by Diderichsen (1962) and several other linguists working in his tradition. A cluster can similarly be seen as a sequence of constituents occurring within a specific position (or field) of a frame.
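To make the ID/LP division described above concrete, the small sketch below (Python, purely illustrative and not from the paper) checks whether a proposed linearization of a local tree's daughters respects a set of LP constraints such as V < NP, V < PP, NP < PP; it assumes each category occurs at most once in the sequence.

```python
def satisfies_lp(order, lp_rules):
    """Return True if the category sequence `order` respects every precedence pair (a, b), meaning a precedes b."""
    position = {cat: i for i, cat in enumerate(order)}
    return all(position[a] < position[b]
               for a, b in lp_rules
               if a in position and b in position)  # rules about absent categories are vacuously satisfied


lp = [("V", "NP"), ("V", "PP"), ("NP", "PP")]
print(satisfies_lp(["V", "NP", "PP"], lp))   # True: the only admissible order under (3b)
print(satisfies_lp(["NP", "V", "PP"], lp))   # False: violates V < NP
```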
For reasons of space the full motivations and implications of this proposal cannot be dealt with here, though see Ahrenberg (1990) for some of the motivations. Instead I will develop a small, illustrative grammar fragment to make the proposal more tangible. Elements of the grammar The language fragment used is small and simplified in many respects. What I propose is quite compatible with the general assumptions of HPSG, however, apart from the account of word order regularities; I assume that it is necessary to restrict the domain of word order rules in languages like Swedish and German to types. This is after all quite a natural assumption to make in a theory assuming grammars to be organized as type hierarchies. In particular, topological frames apply to phrase types while LP-rules apply to clusters. The basic elements of the grammar are signs and clusters. While both elements have overt expressions, indicated by the attribute STRING, only signs carry substantial linguistic information, indicated by the attribute FEATS. A cluster is basically a sequence of signs, indicated by the attribute ITEMS, which is connected and contracts specific sequential relations w.r.t. other signs and clusters. It is often, though not always, the case that the items of a cluster have a common grammatical status. Some putative examples of clusters are: • The complements of a head, e.g. put / the books on the table; • A sequence of adjacent modifiers, e.g. a / big black / building. Signs are either phrasal or lexical (i.e. words). A phrase is distinguished from a word by having a constituent structure indicated by the attribute DTRS. The value of DTRS is a feature structure where attributes such as HEAD, SUBJ (for subject), CDTRS (complements other than subjects) and ADTRS (adverbials and adjuncts) appear. A phrase also has a structure imposed on its expression, which is registered under the attribute PATTERN. The value of PATTERN is a topological frame, i.e. a finite list of elements constructed out of strings and dominated patterns. The value of the attribute STRING is a list of strings with no embedded lists (cf. PHON of Pollard & Sag, 1987). The value of FEATS is a feature structure where we find attributes representing morphosyntactic properties such as MOOD and SUBCAT (subcategorization). A partial description of the sentence Johan lade väskan på bordet (John put the bag on the table) can be found in (5). It should be observed that the phrase structure shows more branching than the topological structure. Although a verb phrase (a predicate) is part of the phrase structure, there is no distinct topological frame for it. Instead, its topology is identified with that of the clause, as the two paths, PATTERN and DTRS:HEAD:PATTERN, share the same frame. x, y, z, ... are variables indicating structure sharing. Numbers 1, 2, 3, ... are also variables but are always used for strings or patterns. Type names are written at the very beginning of a node. The types clause, np, vp and pp are all assumed to be subtypes of 'phrase', while v is a subtype of 'word'. The clause frame is assumed to have five positions. Its structure is further explained below.
Every phrase structure rule expresses a relation between values of the attributes STRING, PATTERN, FEATS and DTRS for a local phrase, comprising a dominating item (a mother) and one or more items that it dominates (the daughters). The string of the unit can actually be computed from the pattern by a simple function. The relation between the string and the pattern of a phrase thus need not be specified for each individual rule. However, if the grammar is supposed to be used by a parser, we need to go in the opposite direction, which is not as simple. There are many patterns that yield the same string; e.g. the patterns <np v e <pp> e>, <np V <pp> e e>, <np v e e pp>, <np v e e <pp>>, where 'e' represents the empty string, all yield the string <np v pp>. Moreover, to filter out hypotheses we also need access to information about features and constituent structure. For this reason it is probably a good idea to compile the grammar into a form which allows efficient parsing. In the end we would like an automatic compiler, of course, but here I can only illustrate how the topological frames can be taken as the basis for an augmented contextfree grammar, using a PATR-style notation. Thus, I will simultaneously develop two sets of rules. The first set, the base grammar, applies to items which are daughters of the same node in phrase structure, while the second set, the string grammar, applies to units which are adjacent in the string. A string grammar of the chosen format can be parsed in different ways. As will be evident there is a close relationship between the string grammar and ATNs with sub-networks corresponding to positions. Our current implementation, however, uses a bidirectional chart-parser, with a mixed strategy. Predictions are made bottom-up when heads are encountered. From there, parsing continues top-down and inside-out with material appearing to the left of the head being consumed before material to the right. In this way the information associated with the head can be exploited to full advantage. As the parser is still being developed, it is too early to report any results on its behaviour. C o m b in a to rics Although the phrase structure rules cannot be stated with the same level of generality as in HPSG, they are far more general than an ordinary phrase structure grammar. Moreover, principles such as the Head Feature Principle and the Subcategorization Principle can still apply. An assumption we will make is that lexical heads have fixed positions within the frames. In our example grammar the frame for the Swedish main clause will have five positions, where the second position is occupied by a (finite) verb and nothing else. Its structure, with type constraints associated with positions, is displayed in (6). (6) The main c lau se schema (S): <phrase, verb, c lu s t e r , c lu s t e r , c lu ste r> For ease of reference the positions will be called the foundation (F), the V2-position, the nexus field (N), the complement field (C) and the adverbial field (A), respectively. For parsing purposes the lexical head is a good predictor for the occurrence of a projection. Given a finite verb it is a good chance that it is part of a main clause. In the string grammar we merge the positions appearing on either side of the lexical head and use (upper case) labels for sequences of clusters, as in (7). NCAi Here s and v represent strings of the indicated sign types, while F represents the contents of the foundation, and NCAi represents the joint contents of the last three positions. 
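Returning briefly to the relation between PATTERN and STRING mentioned above: the "simple function" that computes the string from the pattern can be pictured in Python roughly as follows (an illustrative toy version, assuming patterns are nested lists and 'e' marks an empty position):

    E = "e"                                   # 'e' marks an empty (unfilled) position

    def pattern_to_string(pattern):
        """Flatten a topological pattern into the STRING it yields."""
        out = []
        for position in pattern:
            if isinstance(position, list):    # a dominated (embedded) pattern
                out.extend(pattern_to_string(position))
            elif position != E:               # skip empty positions
                out.append(position)
        return out

    # All of these patterns yield the same string <np v pp>:
    print(pattern_to_string(["np", "v", E, ["pp"], E]))   # ['np', 'v', 'pp']
    print(pattern_to_string(["np", "v", ["pp"], E, E]))   # ['np', 'v', 'pp']
    print(pattern_to_string(["np", "v", E, E, "pp"]))     # ['np', 'v', 'pp']

Going in this direction is trivial; as noted above, the parsing direction is harder precisely because many patterns collapse onto the same string.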
We can think of the upper case labels as representing a state of a top-down parser. This state is given by a current position (here indicated by the first letter of the label) and a state associated with parsing that position (indicated by the number attached to the label, if any). As illustrated in (5), a constituent corresponding to a traditional predicate, is assumed, i.e. a VP consisting of a verb and all of its complement except the subject. This constituent is formed according to the following rule: The rule should be interpreted basically in the same way as an HPSG grammar rule, it states one way in which a phrase can be formed, in this case one option for the expression of finite VPs in Swedish, with the lexical head linked to the V2-position and the complements linked to the C-position. Thus, the relation between phrase structure and topology is accounted for by a specific mapping between the daughters of the phrase and the positions of the frame. The relation between phrase structure and subcategorization information follows the Subcategorization Principle (Pollard & Sag, 1987: 71). If a verb is subcategorized for a subject, an object and a prepositional object, as the verb lägga (put), we can augment (8) When we look at this rule from the point of view of the string grammar, we see that it involves non-adjacent positions. The part of the rule concerned with the V2-position is already covered by (7), but the role of the verb and the complement position must also be accounted for. Moreover, we need to do this in a way that ensures that the dependencies between verb and complements are maintained. To accomplish this we first extend (7) with some equations: The first pair of equations links the cluster categories to the clause via the attribute SOURCE. Through the third equation they are also linked to the head. The third equation states that the lexical head is two levels below its resulting projection. This is not necessarily always the case, but we make this simplifying assumption here. The source will be inherited by all other concerned cluster categories. For instance we have a rule admitting an empty nexus position: (9) S tr in g grammar: Empty nexus r u le NCAI ^ CAi 0 : SOURCE = 1 :SOURCE For clusters having complements as initial parts, we will have rules of the following form: These rules are actually schemas that cover a number of rules which together describe the possibilities for complementation in the language. They should be interpreted as follows: in position C of the clause schema, in state i, a category xp is possible, provided no more complements follow, or only complements allowed in state j of position C. The exact number of rules will depend on how we use the LP-rules. If the LP-rules are taken as a separate component of the string grammar, there will be a relatively small number of rules, but if we want the string grammar to respect the LP-constraints we can encode their effect in the states of the cluster categories. When a finite VP combines with a subject a complete clause is generated. The position of the subject depends on the type of clause. In the case of unmarked declarative clauses (and the corresponding wh-clauses) it is placed in the first position, while in other clauses, including interrogatives and topicdized clauses, it is placed in the third position. In (11) the subject string is identified with the string of the first position, as it is a unary position. 
In (12) on the other hand, the subject is merely included among the elements forming the third position cluster and its sequential order will be determined by LP-rules. For the application of these rules a language-specific principle is supposed to be at work, the Frame Unification Principle, which says that a non-maximal projection must share its topological frame (and hence basic rules for linearization) with a maximal projection.¹ (13) The Frame Unification Principle: [phrase; DTRS = [headed-phrase;]] => [PATTERN = DTRS:HEAD:PATTERN] Thus, the complement rule (7) and the subject rules combine to fill one and the same schema with orthographic material; a schematic illustration of this frame sharing is given below. The corresponding rules of the string grammar are as in (14) and (15). There are similar rules placing adjuncts in the first and fifth positions of a main clause. ¹In addition to unification of complete frames there is also the possibility of unifying positions of two frames with one another. There seems to be little use for this in a Swedish grammar, but for the scrambling phenomena of German it could turn out to be useful: in such sentences, all complements of verbs in a chain of verbs dominating each other turn up in the same position, the Mittelfeld. As for the string grammar we have the following corresponding rules, saying that a sentence adverb can be accepted in any state associated with the nexus position and be followed by anything accepted in that state, including nothing.
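Finally, a schematic Python illustration (my own simplification, not the paper's notation) of the five-position clause frame in (6) and of the effect of the Frame Unification Principle in (13): because the VP's PATTERN is the very same object as the clause's PATTERN, the VP rule and the subject rule fill one shared frame:

    F, V2, N, C, A = range(5)        # foundation, V2, nexus, complement, adverbial field
    clause_frame = [None, None, None, None, None]   # (6): <phrase, verb, cluster, cluster, cluster>

    vp_pattern = clause_frame        # Frame Unification Principle: shared, not copied

    # the finite-VP rule: lexical head -> V2-position, complements -> C-position
    vp_pattern[V2] = "lade"
    vp_pattern[C] = ["väskan", "på bordet"]

    # the subject rule for an unmarked declarative clause: subject -> foundation
    clause_frame[F] = "Johan"

    print(clause_frame)              # ['Johan', 'lade', None, ['väskan', 'på bordet'], None]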
2017-01-07T08:35:44.032Z
1993-01-01T00:00:00.000
{ "year": 1994, "sha1": "6ccf5dcc7c72417204d93dbf0d1c50a3482df9cf", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "6ccf5dcc7c72417204d93dbf0d1c50a3482df9cf", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [ "Computer Science" ] }
56574312
pes2o/s2orc
v3-fos-license
Hypoxia Response Elements Can Cause the Overexpression of the BAX mRNA Under Hypoxic Condition Background: Suicide gene therapy is one of the modern methods of cancer treatment. However, transmission for tumor cells is one of the main challenges to overcome. Hypoxia is a common phenomenon in solid tumors that lead to changes in tumors microenvironment. Hypoxia-responsive element sequences are regulatory sequences that lead to activation of their upstream and downstream genes in hypoxic time. Bax is a strong proapoptotic gene that causes apoptosis in the time of over expression in cells. Objectives: The aim of this study is to use this sequence in order to specify suicide gene therapy by the help of a gene producing Bax protein under control of CMV promoter. Methods: The gene of BAX, BAX3HRE and 3HRE were cloned into interested vectors. In the next step, the function of HRE sequence on over expression of upstream gene under hypoxic condition was evaluated through western blot, MTT assay and real time PCR. Results: The results of this study indicate that cells transected by pcDNA3.1/BAX 3HRE. The rate of apoptosis in them significantly increased in comparison with pcDNA3.1/BAX in hypoxic conditions. Conclusions: Regarding the role of HREs in increasing the expression of its upstream genes, it can be used to specify suicide gene therapy in treatment of solid tumors. Background Suicide gene therapy is one of the interesting methods of gene therapy that recently has drawn lots of interest to itself in research.By definition suicide gene therapy is a method in which vector containing killer gene is transferred into the target cell and after the gene expression in the given cell, the cell is moved toward apoptosis or necrosis (1).Of different types of killer genes identified up to now, the genes encoding proteins such as P53, E1A, P202, PEA3, BAX, Bik and enzymes metabolizing drugs such as thymidine kinas and cytosine deaminase (2,3). Bax is a proapoptotic gene that causes apoptosis when over-expressed in cell lines such as prostate, colon, cervical, and ovarian cancers (4). Although lots of research has been done related to the suicide gene therapy, there are different challenges to overcome.The limitation of expression of killer genes in tumor cells is one of these challenges.One of the applied solutions is the use of specific promoters of tissue and tumor like human telomerase reverse transcriptase (5) and Survivin (6).However, the main problem of these promoters is their low efficiency.Some researchers are working on virus promoters like SV40 and CMV that benefits of high efficiency (7).However, these promoters do not have any specific targeting for tumor cell.It is activated in all tissues to some extent.To have a specific promoter, Hypoxiaresponsive element (HRE) can be attached to virus promotores.HRE sequences are sequences that are located upstream or downstream of different genes and function as enhancers in hypoxia conditions on downstream and upstream promoters (8). Binley et al. could achieved a targeted and specific expression of human cytochrome p450 in breast tumors in hypoxia conditions through the combination of HRE element and SV40 promoters (9). 
Hypoxia is a common phenomenon in solid tumors that lead to changes in tumors Microenvironment (10), when the size of tumor reaches to 44 mm in tumor cells this phenomenon occurs (11).Factor HIF-1 provides molecular basis for cancer cells' adaptation under hypoxia conditions that are necessary for tumor progression in becoming malignant (12,13). The balance between poroapoptotic molecules like BAX and anti-apoptotic ones like BCL-2 play an important role in forming apoptosis in cancer cells.In order to enhance specifically the expression of Bax gene in MCF-7 cell line in hypoxic condition.we used 3HRE sequence (GTCGT-GCAGGACGTGACATCTAGT) (18) for the first time to specify CMV promoter action to express BAX gene in cancer cells that have suffered from hypoxia. Plasmid Construction BAX gene (Accession Number NM138764.4)with 3HRE from PGK (Phosphoglycerate kinase) gene in the 3' end was synthesized and cloned in the treading vector.This synthesized gene was sub cloned into the pcDNA 3.1 vector using NheI and ApaI restriction enzyme as a result making PcDNA3.1/BAX3HRE.For construction of pcDNA3.1/Bax,PcDNA3.1/BAX3HRE was digested with HindIII restriction enzyme; therefore, 3HRE was eliminated and pcDNA3.1/BAXwas made. HRE primers were synthesized (Table 1).3HRE was amplified with PCR and PCR products were recovered with gel extraction kit subsequent at 2% agarose gel electrophoresis.The fragments were ligated in the downstream of GFP gene in the pEGFP-N1 vector.pEGFP-N1 vector was used as control the same as pcDNA3.1 vector (20). Transfection Cells were plated into 24-well dishes and grown for 24 hours to reach 70% -90% confluence.Two sets of wells in duplicate were prepared for each plasmid sample.Cells were transfected with lipofectamin 2000 (Invitrogen, USA) according to manufacturer's instructions. MTT Assay Cell viability was determined with [3-(4, 5dimethylthiazol-2-yl)-2, 5-diphenyltetrazolium bromide] MTT.Briefly, cells were incubated for 36 hours, after that MTT solution (1mg/mL) was added on to each well and incubated for 2 hours at 37°C.The crystals were dissolved in 200 µL DMSO and the extent of reduction of MTT was determined as the absorbance at 470 nm.Wells with complete medium and MTT but without cells were used as blanks (21). Western Blot Incubated cells were harvested after 36 hours and lysed with a lysis buffer.The lysate was sonicated and boiled at 80°C for 5 minutes.Total proteins were separated by 15% SDS-PAGE and transferred to PVDF membrane.Following this transfer, membrane was blocked with 5% skim milk in Tris-buffered saline with tween 20 (TBST).The membrane was incubated with mouse anti-GFP monoclonal antibody (Roche) for 2 hours and washed with TBST for 3 times.It was then incubated with HRP-conjugated secondary antibody for 2 hours.After washing with TBST for 3 times the immune complex was visualized with the NBT/BCIP.β-actin was used as a control. Real Time PCR Total cellular RNA was isolated with Kit (Fermantas,Lithuania) and cDNA was synthesized with Kit (Bioneer,Republic of Korea).Real Time PCR was performed in a reaction mixture containing cDNA.Specific primer (Table 1) and SYBRGREN master mix (Bioneer,Republic of Korea) was used in reaction.The amplification was performed in a Corbet research rotor gene.The PCR reaction was used as 94 for 10 minutes, 94 for 30 seconds and 60 for 30 seconds for 35 cycles.GAPDH was used as a control. 
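As an aside on how real-time PCR data of this kind are usually quantified: the paper normalizes BAX expression to GAPDH but does not spell out the calculation, so the following Python sketch assumes the standard 2^(-ΔΔCt) (Livak) method; all Ct values in it are invented for illustration and are not the study's data.

    def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
        d_ct_treated = ct_target_treated - ct_ref_treated   # ΔCt of the treated sample
        d_ct_control = ct_target_control - ct_ref_control   # ΔCt of the control sample
        dd_ct = d_ct_treated - d_ct_control                 # ΔΔCt
        return 2 ** (-dd_ct)                                # relative expression (fold change)

    # hypothetical Ct values for BAX (target) normalized to GAPDH (reference):
    print(fold_change(ct_target_treated=22.0, ct_ref_treated=18.0,    # e.g. pcDNA3.1/BAX3HRE
                      ct_target_control=24.3, ct_ref_control=18.0))   # e.g. pcDNA3.1/BAX
    # ≈ 4.9-fold, i.e. the order of magnitude of the increase reported below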
Construction of Vectors BAX gene containing 3 copies of HRE sequence was cloned into expression vector pcDNA3.1.In the next step, HRE was removed from synthesized fragment cloned into pcDNA3.1 using HindIII enzyme.The cloning of two segments was verified by PCR and sequencing. To make pEGFP N1 / 3HRE, at first, HRE was amplified by PCR reaction using primers (Table 1) that have enzyme identification site in their 5' sides and after that, this sequence was cloned into downstream of GFP gene in vector pEGFP-N1 in NotI region and its proper presence and direction was confirmed by the help of PCR and sequencing (Figure 1). HRE Can Enhance Transcription of Genes That Located in Its Upstream In this study it was proved with Real time PCR that HRE sequence can increase the expression of its upstream genes.As it is shown in Figure 2, there was a significant increase in the rate of mRNA expression of BAX gene in MCF-7 cell line transfected with pcDNA3.1/BAX3HRE in comparison with cells transfected with pcDNA3.1 / BAX under hypoxic conditions for 36h.The same results were obtained when PEGFP-N1 and PEGFP-N1 / 3HRE were compared (Figure 2). In order to prove the increase of GFP gene expression, western blot was used.As it is depicted in Figure 3 the results obtained by western blot verifies real time results again (Figure 3). Investigation of Apoptosis in MCF-7 Cells That Were Transfected with pcDNA3.1/BAX and pcDNA3.1/BAX3HRE Under Hypoxia Conditions MTT assay was used to assess transfected cells viability with vectors pcDNA3.1 / BAX and pcDNA3.1/BAX3HRE.VectorpcDNA3.1 was used as a negative control. Although cell death has occurred in transfected cells with pcDNA3.1 /BAX under hypoxic conditions, there was a rate of more than 95 % cell death in cells that transfected with pcDNA3.1 /BAX 3HRE vector (Figure 4). Discussion At first the effect of HRE sequences on increase of mRNA expression of BAX and GFP genes was investigated in MCF-7 cell line using Real Time PCR.The increase of expression of BAX gene along with high increase of apoptosis in MCF7 cell line transfected with pcDNA3.1/BAX3HREunder hypoxic condition can be related to the effect of 3HRE sequence located in downstream of gene. Hypoxia is a common phenomenon of solid tumors.Solid tumors that are in low oxygen conditions are adopted to hypoxic condition by the help of transcription activation of more than 50 genes.These genes are the main regulator of various aspects, including an angiogenesis, tumorigenicity, metabolism, proliferation, invasion and metastasis (22).Genes that are more active under hypoxic conditions contain regulatory regions called HRE.These regions are HIF-1 binding site, after binding this molecule to HRE regions expression of the gene containing it was increased (23)(24)(25).This element (HRE) can be located in upstream or downstream of genes (18). HIF-1α expression increase has a direct relationship with tumor progression and malignancy, so that re- searchers have been able to prevent tumor growth by expression decrease in HIF-1α.This result can be shown important of HIF for progression of tumor (23)(24)(25).Many studies have been conducted using HRE sequences to improve chemotherapy performance of solid cancers (26,27).For example, a virus using 6 copies of HRE sequence with Mini CMV promoter increased the expression in E1A gene which helped the virus to continue replicating under hypoxic conditions and solved the problem of reducing the replication of the virus in hypoxic conditions (10). Harada et al. 
showed that promoter miniCMV containing 5HRE sequence copies increase GFP gene expression up to 500 times under controlled conditions of the amount of HIF-1 in the laboratory (28).Fukui et al. could increase expression of IFNα2b in RCC cells and prevented tumor progression through HRE sequence (29).In another study Koshikawa et al. showed increased expression of luciferase gene up to 3 times under hypoxic conditions by the help of HRE sequence (30).In the present study, the results verify the increase of expression of BAX gene about 5 times in MCF7 cell line under hypoxic conditions. Ingram and Porter have shown that 3 copies of HRE sequence taken from PGK gene lead to increasein gene expression under hypoxic conditions while it had no effect on expression increase under normal oxygen conditions.It worth mentioning HRE sequences were located downstream of gene (18). One of the main problems of suicide gene therapy is its toxicity caused by using drugs such as Ganciclovir, when the genes producing enzymes metabolizing these drugs 4 Iran J Cancer Prev.2016; 9(5):e4554. are used for gene therapy, liver and spleen toxicity is inevitable.However, researchers were able to decrease the toxicity to 1000 times by the help of HRE sequences (9).Various studies have proved that during cancer the amount of anti-apoptotic proteins are increased in cancer cells and cause the resistance of cancer cells to apoptosis during chemotherapy and radiotherapy.One of the main proapoptotic proteins is BAX protein of bcl-2 family (2).Many researchers have shown that over expression of BAX gene can overcome anti-apoptotic proteins and cause cancer cells' apoptosis (3,4,31,32).In this study it was shown that BAX over expression can lead to apoptosis in cancer cells. Generally, the results of this study show that HRE sequences plays an important role in expression of its upstream genes in hypoxic conditions, which is a common phenomenon of solid tumors (33).Based on this condition, HRE sequence can be considered as a desirable way in targeting of cancer cells for gene therapy. Figure 1 . Figure 1.Making of consract.1Aschematic diagram showing the constracts.1B/verification of cloning BAX3HRE and BAX in the pcDNA3.1 vector with universal primer.1C/verification of cloning 3 repetitive of HRE in downstream of GFP gene in the pEGFP-N1 vector with universal primer. Figure 3 . Figure 3.Western Blot Analysis of the Expression of GFP in MCF7 Cells Figure 4 . Figure 4.The effect of BAX on cell viability in MCF7 cells.MCF7 cells were treated with pcDNA3.1/BAXand pcDNA3.1/BAX3HREand incubator in a hypoxia incubator for 36 hours the cell viability was measured with MTT assay.pcDNA3.1 was used as a negative control.Viability is significantly decrease (P = 0.001) in the pcDNA3.1/BAX3HREcompared with pcDNA3.1/BAX(n = 3).Student's t-test was used for analysis.Data shown is mean values ± standard deviation (SD). Table 1 . Primers Were Used in This Study
2018-12-18T11:12:24.145Z
2016-10-05T00:00:00.000
{ "year": 2016, "sha1": "571b67cd718182ab1ed8f451ec2f515e1c08ca4a", "oa_license": "CCBYNC", "oa_url": "http://cdn.neoscriber.org/cdn/serve/57/1b/571b67cd718182ab1ed8f451ec2f515e1c08ca4a/ijcp-09-05-4554.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "571b67cd718182ab1ed8f451ec2f515e1c08ca4a", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
216161607
pes2o/s2orc
v3-fos-license
Histopathological Spectrum of Thyroid Lesions- A Two Years Study BACKGROUND In clinical practice, thyroid nodules are very common, with wide disparity in incidence and histopathological pattern related to age, sex, dietary and environmental factors and are usually associated with a wide spectrum of diseases extending from functionally and immunologically mediated enlargement to neoplastic lesions. Thyroid cancer is the commonest endocrine cancer accounting for 92% of all the endocrine malignancies even though it is a relatively rare malignancy. The aim of this study was to estimate the frequency, age group, sex distribution, and various histopathological spectrum of lesions in the thyroid. The present study is a hospital based retrospective two-year study and was conducted in the Department of Pathology, Azeezia Medical College, Meeyannor, Kollam, Kerala. Tissue samples for H&E sections were fixed in 10% formalin and subjected to routine paraffin embedded processing after which this was then stained with Haematoxylin and Eosin. Various histopathological spectrum of lesions in the thyroid were observed and classified as benign and malignant on the basis of World Health Organization histological classification of the thyroid tumours. Out total cases of 476 thyroid lesions, maximum number of lesions were seen in patients in the age group of 41-50 years. Most common clinical symptom was midline neck swelling. Out of 476 cases, 419 cases (88.1%) were diagnosed as non-neoplastic and remaining 57 cases (11.9%) as neoplastic. The most common non-neoplastic lesion was multi-nodular goiter (MNG) (55.4%), followed by lymphocytic thyroiditis (17.6%), Hashimoto thyroiditis (9%), and adenomatous goiter (5.6%). The common benign lesion was follicular adenoma seen in 17 (29.8%) cases. Papillary carcinoma was the commonest malignant tumour seen in 33 cases, 66.6% of all malignant lesions which we encountered in our study. In our study, majority of thyroid diseases showed a female predominance with most of them occurring in the age group of 41-50 years and most common thyroid lesions were non-neoplastic. Proper diagnostic tools, including clinical history, ultrasonography and proper pathological examination are required for the identification of thyroid malignancy. Diagnosis by histopathological examination is important for the prompt diagnosis and treatment of neoplastic lesions. B A C K G R O U N D Thyroid gland is one of the important organs in our body and the disorders occurring in thyroid are considered as the most common endocrine disorders seen worldwide. Thyroid gland plays extensive and essential physiological roles in our body. The hormones secreted by the thyroid affect all body organs and are accountable for subsistence of homeostasis and the body integrity. 1 According to statistics by Salami et al around one-third of the population of the world lives in areas where iodine deficiency is prevalent and account for an estimated 200 million cases worldwide and 42 million cases in India itself. 2 Thyroid disorders may present as a disorderliness of thyroid hormone secretion, enlargement of thyroid leading to dyspnoea or pain. Worldwide in clinical practice common anomalies are developmental, inflammatory, hyper plastic and neoplastic diseases of thyroid. 3 Most of the thyroid disorders are benign in nature and thyroid enlargements are seen more common in females than in males. 
4 Thyroid gland disorders usually manifests as enlargement of the thyroid gland (goiters) or as variations in hormone levels or as both. Around 4%-5% of the population present with clinically as externally visible nodules of the thyroid. Majority of the swellings in thyroid are non-neoplastic and around only less than 5% are malignant 5 . Thyroid carcinoma is a mostly rare malignancy which represents only around 1.5% of all cancers, but it comprises the most common endocrine carcinoma accounting for about 92% of all endocrine malignancies. From a clinical point of view, the likelihood of neoplastic disease is of major consternation in those patients who present with nodules in the thyroid. In majority of the thyroid tumours, a diagnosis can be made out by the morphologic evaluation alone; so, the classification of various histomorphological characteristics are important to classify the lesions as benign and malignant tumours. In the evaluation of thyroid lesions the initial screening methods include ultra-sonogram (USG), thyroid function test (TFT), Fine needle aspiration cytology (FNAC), radio nucleotide scan, and among which FNAC is considered to be the best primary diagnostic procedure. 6 To establish a proper diagnosis, start further therapy and assess the prognosis, surgical excision and pathological evaluation are essential. This study was undertaken to describe the spectrum, frequency, age, sex distribution and various histopathological patterns of thyroid lesions. M E T H O D S This is a retrospective study of all patients with thyroid lesions received in Department of Pathology, Azeezia Institute of Medical Science and Research, Meeyannoor, Kollam, Kerala. The duration of study was 2 years from 2017 to 2018 were carried out. Information obtained included age, sex, clinical diagnosis, histological diagnosis were available. The data were presented in frequency tables. A Sample Size of 476 subjects was studied. Sample size was taken based on the convenience of the study. Inclusion Criteria Lobectomy, Hemi thyroidectomy, subtotal thyroidectomy and total thyroidectomy specimens received for histopathological examination which were suspected for inflammatory, nonneoplastic and neoplastic lesions of thyroid. Exclusion Criteria 1. History of any genetic/ congenital thyroid disease. 2. Antenatal cases having thyroid abnormalities. 3. Thyroid disorders caused due to drug intake/side effects. Study Subjects In this study, a total of 476 patients who presented with swelling in thyroid were taken. The detailed clinical details regarding age, gender along with ultra-sonographic (USG) findings, thyroid scan, related investigations (euthyroid, hyperthyroid, and hypothyroid), and operative findings were recorded from the histopathology Performa and were taken into consideration. Fine needle aspiration was the most commonly used pre-operative assessment method for most thyroid swellings information. Gross features of the specimen received were recorded. Representative tissue was taken and after processing the tissue, routine staining was carried out with haematoxylin and eosin (H&E) stain. The disorders of thyroid were classified on histological basis into nonneoplastic and neoplastic lesions which were further subclassified as benign and malignant as per the World Health Organization (WHO) classification of tumours of endocrine organs (fourth edition). Statistical Analysis Data was analyzed using Microsoft Excel and chi-square test. Statistical package for social sciences (SPSS) software was used. 
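For illustration of the kind of tabulation and chi-square testing mentioned above, a short Python sketch follows. The overall totals (419 non-neoplastic and 57 neoplastic of 476 cases, 408 females and 68 males) are taken from the Results, but the joint split of lesion category by sex is not reported in the paper and is invented here purely to show the mechanics of the test (assuming SciPy is available):

    from scipy.stats import chi2_contingency

    total = 476
    non_neoplastic, neoplastic = 419, 57
    print(f"non-neoplastic: {100 * non_neoplastic / total:.1f}%")   # relative frequency
    print(f"neoplastic:     {100 * neoplastic / total:.1f}%")

    # hypothetical 2x2 contingency table: lesion category (rows) by sex (columns: female, male);
    # the marginals match the reported totals (419/57 and 408/68) but the joint split is invented
    table = [[360, 59],
             [48, 9]]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, p = {p:.3f}, dof = {dof}")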
R E S U L T S In the present study, a total of 476 patients with thyroid swellings were taken for the study for a period of two years from 2017-2018. The age of the patients ranged from 10 years to 80 years with a mean age of 37 years. Maximum number of lesions were seen in patients in the age group of 41-50 years (n=139, 29%) followed by 31-40 years (n=109, 21%) and 21-30 years (n=57, 14%). In the present study, females were mostly commonly affected. It was observed that 408 (85.7%) cases were females and 68 (14.3%) cases were male. (Table 1) Male to female ratio was noted to be 6:1. In the present study, most common clinical symptom was swelling in front of the neck seen in almost all cases followed by menstrual irregularity and dyspnoea. In the present study, total thyroidectomies were most common, followed by hemi thyroidectomy specimens, subtotal thyroidectomies and lobectomies. In the present study, out of total 476 cases, 419 cases (88.1%) were diagnosed as non-neoplastic and remaining 57 cases (11.9%) as neoplastic. (Table 2) In the present study, among 419 cases of non-neoplastic lesions, multi nodular goiter (MNG) 264 cases (55.4%) was found to be the most common followed by lymphocytic thyroiditis 84 cases (17.6%), Hashimoto's thyroiditis 43 cases (9%) and adenomatous goiter 27 cases (5.6%)and granulomatous thyroiditis. In the present study, benign tumours were more common than malignant tumours. Out of 57 neoplastic lesions 19 cases (33.3%) were benign tumours and 38 cases (66.6%) were malignant tumours. In the present study, among 57 cases of neoplastic lesions, follicular adenoma comprised of 17 cases -29.8% was found to be the most common followed by papillary carcinoma which comprises of 33 cases -(57.8%), follicular variant of papillary carcinoma, 3 cases, follicular carcinoma and non-Hodgkin's lymphoma one case each. In the present study, out of 33 cases of papillary carcinoma, classic variant was seen in 18 cases, followed by micro papillary carcinoma 15 cases. D I SC U S SI O N Anatomically the thyroid gland is located in the neck which is bounded by the pretracheal fascia which is a portion of the deep cervical fascia. It is seen in front of the 2nd, 3rd, and 4th tracheal rings and weighs around 20-25 gm. 7 The incidence of the thyroid diseases differ in relation to gender, age groups, and racial differences. 8 Of all the endocrine disorders, thyroid disorders are the most common in India. 9 Thyroid lesions may be developmental, inflammatory, hyperplastic and neoplastic. The thyroid gland diseases are common and composed of an array of entities causing systemic disease (grave's disease) or a localized abnormality in the thyroid gland such as nodular enlargement (goiter) or a tumour mass 10 . Both the neoplastic and non-neoplastic diseases of thyroid are common all over the world, with diversity in frequency and incidences depending upon iodine deficiency status. 11 In our present study, the age of the patients ranged from 10 years to 80 years with a mean age of 47 years which was similar with the study of Arvintham, 12 Urmila devi et al, 13 In our present study the commonest age group presenting with thyroid disorders was in the 4 th to 5th decade which was correlating with the study by sreedevi 3 et al while study carried out by Jagadale K et al 16 and Ramesh V L et al 17 was found to be 4th to 6th decades and 3rd to 5th decade respectively. In the present study, it was observed that 408 (85.7%) cases were females and 68 (14.2%) cases were male. 
The female to male ratio found in this study was 6:1, which on comparison with the studies by Nzegwu et al, 18 Abdulkareem et al, 19 Sudha et al, 20 and Nggada et al 21 6:1, 5.7:1, 7:1 and 6.2:1 respectively and was favouring with our study. In women the high frequency of developing thyroid disorders is considered to be due to the physiological demands of puberty, menstruation, pregnancy and lactation. A considerable number of the cases in this study were non-neoplastic thyroid lesions constituting 419 cases (88%) of the cases. This observed significance of non-neoplastic lesions in our study is in accord with findings from sravani et al, 13 Chung et al 22 and Hill et al 23 which was 62.5%, 84.1% and 60.5% respectively. In our study the most predominant thyroid lesion encountered is nodular colloid goiter and was commonly seen in the 4th decade. It constituted 55.4% of all lesions similar to a study by Illorin 24 and Sreedevi et al. 3 Multi nodular goiter (MNG) is the end-stage result of diffuse hyperplastic goiter. Excessive metabolic demands in this condition will lead to the increased enlargement of the thyroid gland and this is one of the important reason for the thyroid enlargement in women during puberty and pregnancy which is considerably common. 24 Constant stimulation by the TSH released from the anterior pituitary results in multi-nodular goiter (MNG). Main reason for colloid goiter is iodine deficiency. The daily iodine requirement is about 100-125 μg. It is treated by iodized salt used for food and also iodine-containing preparations. If the iodine deficiency state sustains for a long period of time, it results in the accumulation of colloid material in the gland and lead to colloid goiter. The puberty goiter, pregnancy goiter, and colloid goiter if left untreated will change to MNG. 25 In our study the lymphocytic thyroiditis constituted 84 cases (17.6%) and it was seen most common in the 3rd decade. Which was in correspondence with Illorin et al. 24 Hashimoto thyroiditis constituted 43 cases (9%) was seen most common in the 4th decade. Hashimoto thyroiditis is an auto immune disease characterized by widespread lymphocytic infiltration, fibrosis and parenchymal atrophy with oxyphilic changes. It is a painless goiter and there are no early symptoms. 26 Among the 57 cases of the neoplastic thyroid lesions in this study, 17 case (29.8%) are follicular adenomas which was correlating with Prabha e al. 28 Follicular adenomas may be inactive or active. Depending on their level of function follicular adenomas can be described as cold, warm, or hot. A thyroid adenoma is differentiated from an MNG in that an adenoma is solitary, encapsulated and arises from a genetic mutation in a single precursor cell. To differentiate a follicular adenoma from follicular carcinoma cautious histopathological examination is necessary. 29 Papillary carcinoma was the most common malignant thyroid lesion and constituted 57.8% of the malignant lesions in our study. This observation was similar with the study of Gupta A et al 30 Chukudebelu et al 31 and Abdulkader et al. 27 Papillary microcarcinoma constituted 7 cases (6.3%) and was correlating with study by Sreedevi et al. 3 Histopathologically papillary carcinoma appears as colloid-filled follicles with papillary projections along with psammoma bodies which may be present in calcified lesions. Young females are commonly affected in the age group of 20-40 years. Frequently lymph nodes in the lower deep cervical region may be involved. 
32 Thus, the present study provides useful knowledge about the epidemiological and demographic variables of the various thyroid disorders on the basis of histopathology. C O N C L U S I O N S In our study, thyroid diseases showed a definite female predominance, with most of them occurring in the age group of 41-50 years. Multinodular goiter was the most common thyroid condition seen clinically, radiologically, and cytologically. In our study follicular adenoma was the most common benign neoplastic disease and papillary carcinoma was the most common malignant lesion. Fine-needle aspiration findings and ultrasonogram findings were in consonance with histopathological findings as far as papillary carcinoma was concerned. Two important observations emerge from this study: first, that the non-neoplastic lesions are much more common than the neoplastic lesions, and second, that among the neoplastic lesions the malignant lesions predominate over the benign ones, with papillary carcinoma of the thyroid being the major constituent of the malignant lesions.
2022-06-05T00:42:59.248Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "ab7baac2eb14c9d69e66304e6585b179bd463c1c", "oa_license": null, "oa_url": "https://doi.org/10.14260/jemds/2020/95", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "ab7baac2eb14c9d69e66304e6585b179bd463c1c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
2517766
pes2o/s2orc
v3-fos-license
Datenbank-DIALOG and the Relevance of Habitability This paper describes certain aspects of Datenbank-DIALOG 1 , a German language interface to relational databases developed at the Austrian Research Institute for Artiicial Intelligence. Besides giving a short overview of the system architecture it emphasizes the issues of portability and habitability and how they are being tackled in the design of Datenbank-DIALOG. To demonstrate how design strategies support the development of a habitable system we take examples from the area of comparisons and measures, both of which are important for many application domains and nontrivial from a linguistic point of view. Datenbank-DIALOG has been fully implemented and is accessible worldwide via email coupled to a database containing information about all Austrian AI-projects. Osterreichisches Forschungsinstitut f ur Artiicial Intelligence during the period of development of Da-tenbank-DIALOG 1 The development of Datenbank-DIALOG was carried out jointly with \Software Management GmbH", Vienna and has been sponsored by the Austrian Government within the \Mikroelektronik FF orderungsprogramm, Schwerpunkt S7". Introduction The paper focusses on the issue of habitability and how it is accounted for in Datenbank-DIALOG 1. Examples from the area of comparisons and measures--both ilnportant for many application domains and non-trivial from a linguistic point of view--demonstrate how design strategies can SUl)port the development of a habitahle system. Datenbank-DIALOG is a German language interface to relational databases. Since the development of a first prototype it has been tested in different enviromnents and continually been improved. Currently, in a large field test, Datenbank-DIALOG interfaces to a database about AI research in Austria. Questions sent by einail 2 are answered automatically. The system consists of four main components. The scanner breaks up the natural language query into tokens for morphological analysis. The parser l)erforms syntactic and semantic analysis creating one or--in case of ambiguities--more caseframes containing the query representation at the domain level. The interpretation of the query is performed in three stages. The mapping from domain-level to database-level predicates results in a DB-Caseframe, then a linearization step produces the Logical Form and finally a syntactic transformation leads to an SQL query. The answer is then given directly by the DBMS as the result of executing the SQL query. 1:181-203, 1987. 2 Email address: aif orsch~ai, univie, ac. at wise, they will either face a continuously high rejection rate or--more likely, since humans adapt much better than computers--formulate their queries ill an unnecessarily simple and inefficient way. Hahitability cannot be judged solely on syntactic coverage. Queries must be correctly interpreted syntactically, semantically and pragmatically. While syntactic coverage depends solely on tile parser, semantic and pragmatic coverage must be considered with respect to the contents of the database to which the NLI connects. The grammar of Datenbank-DIALOG is completely domain-independent, designed to make the accepted suhlanguage as consistent as l)ossihle. Recent advances of linguistic theory were incorporated in its development, thus also facilitating implementation and maintenance. Two examples for this strategy are the treatment of determiners and verb-second (V2). Using results of Generalized Quantifiers Theory for natural language quantifiers (e.g. 
conservativity) a formal correspondence between GQ-formulas (representing the logical form of a query) and SQL-statements (formulas over the relational calculus) was established and implemented. This gives a sound theoretical basis for semantic interpretation and SQL generation. All extensional natural language determiners can be handled--matching the extensional nature of databases. In German, finite verbs occur in second position in main clauses (V2) and in final position in subordinate clauses. Ideas from GB-Theory are used for a uniform treatment. V2 is considered to be the result of a movement from an underlying final position in the verb cluster to clause-initial complementizer position. This movement is implemented as a relation between the "moved" finite verb and its trace. In the case of main clauses, Vfin is "moved back" to the end of the verb cluster, and now the same mechanism applies uniformly. Thus both clause types are subject to the same syntactic and semantic constraints (which thus need only be stated once) and give rise to the same interpretation. Comparisons A central concern in querying databases is the comparison between various kinds of objects. Comparisons involve a relation between values associated with a dimension and units of measure. Values may be given explicitly or implicitly by derivation (thus including superlatives). Linguistic means for expressing comparison vary widely. Usually, comparison is associated with gradable adjectives and adverbs in various syntactic constructions: hat ein höheres Gehalt (Aux+A/NP); verdient mehr (V+Adv); mit einem höheren Gehalt (A/PP). The interpretation of those expressions should be the same. Datenbank-DIALOG uses a compositional semantics and separates the lexical item from the underlying semantic relation, which may be shared by different words. More problems arise when specifying the value for the comparison: ein höheres Gehalt als 20.000,-; ein Gehalt von mehr als 20.000,-; mehr als 20.000,-Gehalt; verdient mehr als 20.000,-. The comparative and the value may be adjacent or not, and show up as PPs, complex determiners or adverbial phrases. All these constructions map onto the same semantic representation: a relation, a value along with a dimension and a unit--thus allowing values with different units to be compared--and a compared object. This assures a uniform semantic treatment. In verdient mehr als Dr. Haid the value is specified only implicitly, by referring to the salary of Dr. Haid. Despite the different structure of the corresponding SQL queries, the user will hardly notice this fundamental difference. For a habitable system it is necessary to provide solutions to both types of comparisons. Datenbank-DIALOG recognizes the different interpretations by the semantic type associated with the value of the phrase to be compared. If the value has the correct dimension, it may safely be inserted as an argument into the comparison relation. Otherwise, Datenbank-DIALOG constructs a subquery giving the value by using the dominating relation and fitting the comparison object into the "subject" slot of the attribute. The resulting structure is processed analogously to a top-level query. As a consequence, anaphora resolution may be applied, enabling Datenbank-DIALOG to give a correct interpretation of Wer_i verdient mehr als sein_i Vorgesetzter? Domain predicates need not uniquely determine the relation and attributes of a corresponding predicate in the database.
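Before turning to the mapping between domain and database level, a minimal Python sketch (my own reconstruction for illustration, not the system's code) may make the two kinds of comparison concrete; the table and column names (DOCTOR, SALARY, NAME) are invented:

    def comparison_to_sql(relation, value):
        """relation: e.g. '>'; value: either an explicit number or another object's name."""
        if isinstance(value, (int, float)):
            # explicit value of the right dimension: insert it directly as an argument
            return f"SELECT NAME FROM DOCTOR WHERE SALARY {relation} {value}"
        # implicit value: a subquery retrieves the comparison object's salary
        return (f"SELECT NAME FROM DOCTOR WHERE SALARY {relation} "
                f"(SELECT SALARY FROM DOCTOR WHERE NAME = '{value}')")

    print(comparison_to_sql(">", 20000))        # ... WHERE SALARY > 20000
    print(comparison_to_sql(">", "Dr. Haid"))   # ... WHERE SALARY > (SELECT SALARY ...)

The branch only illustrates that the implicit case requires an additional query over the same relation; in Datenbank-DIALOG this distinction is derived from the caseframe and the semantic type of the compared phrase, as described above.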
Datenbank-DIALOG splits the interpretation of an utterance into two stages: an interpretation in the domain model, i.e. a caseframe, which is then mapped (using a translation table) to an interpretation in the database model, i.e. a DB-caseframe. This approach allows superficially similar queries to be interpreted as quite different SQL queries. Attributes with the same meaning stored in different tables (nurse-salary vs. doctor-salary) can be treated as well as derived attributes (salary computed from basic + variable salary)--in short, the user should not need to know about the actual encoding of information. An interesting instance of this principle is the interpretation of Wieviele Patienten behandelt Dr. Haid? Whereas in one database model the number of patients is stored explicitly and can be treated analogously to the salary above, other database models contain this "attribute" only implicitly: the number of patients has to be computed (counted) by the SQL query. To obtain these quite different interpretations, only a different mapping of the (contents of the) argument-slot of the predicate Treatment in the translation step between domain and database level is required. A special case (where implicit attributes must be made explicit) is queries involving the comparison of two subqueries. This cannot be expressed in a single SQL query. A temporary table has to be created containing the relevant count-attribute together with information on the object bearing that attribute. The actual comparison can then be made with the now explicit attribute. Most problems with comparatives also occur with superlatives and are dealt with in an analogous way. One interesting phenomenon which has no direct parallel in comparative structures shows up in Welcher Arzt, der in der Ambulanz arbeitet, verdient am meisten? In most cases "Who has the highest salary among the doctors working in the casualty department?" is the most plausible interpretation and should be preferred. To produce this reading, a kind of copying has to be performed: not only must the dominating relation be copied but also the restrictions on the subject slot (i.e., on the bearer of the attribute) have to be inherited. In Datenbank-DIALOG this copying works on the caseframe representation, and thus is able to handle restrictions resulting from different syntactic constructions, such as the lexicon (Ambulanzarzt), APs (in der Ambulanz arbeitende Arzt), PPs (Arzt aus der Ambulanz), NPs (Arzt der Ambulanz) and relative clauses (Arzt, der in der Ambulanz arbeitet). All these constructions end up as modifications in the caseframe due to the compositional nature of our approach. Thus a unified solution for inheritance of modifiers in their various forms is achieved. A correct comparison is only possible if compared values are of the same dimension and share a unit of measure. Differences and incompatibilities may arise in different places: from special formatting conventions (e.g. 20000, 20.000,-, $20), when the user specifies a dimension and unit of measure verbatim (e.g. "10 Meter"), or from the database, where comparable columns may be associated with different units of measure. Datenbank-DIALOG solves this problem--by defining a normalized form associating values with units and transformation rules between measures of different units--at the scanner level (patterns, e.g.
date formats), at the parser level (linguistic information to fill the slots in the normalized value frame), at the interpretation level (procedures to transform constant values from one unit to another), at the database level (transformation functions of the query language). Summary Habitability is a most important feature of NLIs. Using comparison as example we have shown how the design of Datenbank-DIALOG enhances habitability, in particular by: giving a uniform interpretation to semantically equivalent user queries of different syntactic and morphological appearance; enabling users to enter data in the form most convenient to them (formatting, unit conversion); removing the need for users to know about the database representations of the concepts they use (domain concepts vs. database relations and attributes, implicit functions, unit conversion); making ambiguities explicit; and incorporating presuppositions (relation and restriction copying).
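To make the normalized value frame and the unit transformation rules mentioned in the preceding sections concrete, a small Python sketch follows; the dimensions, units and conversion factors are illustrative assumptions, not the system's actual inventory.

    TO_BASE = {                       # conversion factors into a base unit per dimension
        ("length", "m"): 1.0,
        ("length", "cm"): 0.01,
        ("currency", "ATS"): 1.0,     # a currency dimension could be handled the same way
    }

    def normalize(value, dimension, unit):
        """The normalized value frame: a value together with its dimension and unit."""
        return {"dimension": dimension, "unit": unit, "value": value}

    def convert(frame, target_unit):
        """Transform a normalized value into the unit used by the database column."""
        dim = frame["dimension"]
        in_base = frame["value"] * TO_BASE[(dim, frame["unit"])]
        return in_base / TO_BASE[(dim, target_unit)]

    ten_meters = normalize(10, "length", "m")   # the user writes "10 Meter"
    print(convert(ten_meters, "cm"))            # 1000.0 -- if the database column is in cm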
2014-07-01T00:00:00.000Z
1992-03-31T00:00:00.000
{ "year": 1992, "sha1": "1939f0ac083c4c946728b1b47c14f36b914c2aa3", "oa_license": null, "oa_url": "https://dl.acm.org/doi/pdf/10.3115/974499.974547", "oa_status": "BRONZE", "pdf_src": "ACL", "pdf_hash": "b437dce4a2f7001d1a487a69a8fa2ef03aef1483", "s2fieldsofstudy": [ "Computer Science", "Linguistics" ], "extfieldsofstudy": [ "Computer Science" ] }
55253897
pes2o/s2orc
v3-fos-license
Using the Bible in post-apartheid South Africa : Its influence and impact amidst the gay debate 1 The Bible has generated a significant reception history in the first decade of democratic, post-apartheid South Africa. Its reception history testifies to how the Bible was considered to be important across a broad spectrum of society, also contributing to believers’ lives and sense of self amidst the enormous changes in the country. Recent documents and decisions of the Dutch Reformed Church on homosexuality and the ensuing debates, highlight the influence and impact of Bible use in South Africa today. Examining different hermeneutical approaches to the Bible and using insights from reception theory, a number of interesting trends in the ongoing use and influence of the Bible are highlighted and discussed. INTRODUCTION Political leaders found the Bible to be a superbly useful ally in the establishment and maintenance of Apartheid South Africa during the latter part of the first half of the twentieth century, and in fact referenced biblical texts in their attempts to justify racial segregation (e g Dube 2000:6).Even before this time, policies which assumed the superiority of white colonistsracially, religiously, morally, and otherwise -often included references to the Bible, accompanied by claims that their practices did justice to biblical "directives" 2 (e g Dube 2000:5-6).In short, the impact of the Bible on the design and practice of the social engineering of Apartheid South Africa can hardly be overestimated. While antipathy towards the Bible would have been expected, the Bible is -ironically -still found to play an important role informing not only ecclesial practice but also social discernment on a wider scale.Accounting for the continuing influence and impact of the Bible in post-Apartheid South Africa,3 reference can be made to the constituency and demographic make-up of South Africa.It is a country characterised by complexity as much as by divisions, but also a country whose religious fibre have not changed much since democracy, and with communities of faith and Christianity, in particular as the majority religion, contributing variously to the broader society or public domain as well. In this contribution on the use and impact of the Bible4 in post-Apartheid South Africa attention is focused on the current debate about gays (used here in the broad sense of the whole LGBT community) in the Dutch Reformed Church (DRC) in South Africa. 5After a brief evaluation of the use and impact of the Bible and the New Testament in particular in post-Apartheid South Africa,6 a reception studies perspective is introduced.In a third part of the paper and against this background, the use of Scripture in some recent, official documents of the DRC on homosexuality is evaluated.7 STRUGGLING WITH THE BIBLE IN POST-"STRUGGLE" SOUTH AFRICA? Notwithstanding its use and abuse by the architects of Apartheid and its later supporters, the Bible continues to play a positive role in post-Apartheid South Africa, even if its interpretation is often contested.And indeed, apprehension and disillusionment do exist among some regarding the texts' ability to play a constructive public role -in no small way because of the legacy of the past, inter alia colonialism and Apartheid -and remaining difficulties and ambiguities around certain texts, interpretative practices and the tension between academic guild(s) and faith communities should not be slighted. 
8owever, through these communities of faith but also beyond them and their effect on public society, the Bible at times assumes a significant and important influence. 2.1 A framework for analysing changing patterns in biblical hermeneutics A particularly useful way to describe and analyse the ongoing shifts -and impasses -in the use of the Bible in South Africa, is the taxonomy of Schüssler Fiorenza (1999:31-55) since it provides a useful framework 9 for plotting and analysing also local biblical reception. 10The first of four is a doctrinal-fundamentalist paradigm which consists of a conservative approach to the Bible, viewing it as sacred Scripture that was divinely revealed.It employs a literalist reading intolerant of the scrutiny of the text's sociohistorical context or the social location of the contemporary interpreters -at and postmodern interpretation (cf Thiselton 2004:148ff), and here where the focus is on biblical reception, the three worlds of the text (cf Schneiders 1991; etc).On the other hand, whereas the latter methodologies tend to concentrate on the focus of the input (exegetical methodology) to gather interpretative results, the framework I choose to employ here allows for considering the production of the results as well as their impact in real life.This framework focuses less starkly on the different models of reading and criticism of the Bible that characterise the academy, church and society, making it possible to look at the use of the Bible across these publics yet without claiming that they operate exegetically in a similar mode.Schüssler Fiorenza (1999:38-39) holds that while disciplinary paradigms may at times overlap and cross-pollinate each other, "theoretical frameworks of perspectives" such as religious dogmaticism, historical positivism, cultural relativism and emancipatory theoretical commitment cannot be "married promiscuously with each other without losing one's theoretical and practical footing".In the S A context, it was however the religious dogmaticism and historical paradigms which at times intersected with one another; but this is a topic for another discussion. times insisting on the unalterable status of the interpretations too, claiming their approximation to the status of revelation.In a world characterised by complexity and change, it is a potentially attractive paradigm since it delineates exclusivist group boundaries and clear-cut identities with the accompanying allure of emotional stability as well as religious security and certainty of faith."Literalist fundamentalism vehemently rejects modern religious tolerance and pluralism but insists that the biblical message proclaims universal moral values and truth" (Schüssler Fiorenza 1999:40).However, the different and in the South African context, often contrasting ways in which Christian communities and churches appropriate the Bible, are concealed. 
11hile the doctrinal-fundamentalist paradigm reigned supreme also in Apartheid's heyday, both in support of the regime and the struggle against it,12 the beginnings of a shift in the academy towards a scientific paradigm can be detected already since the 1980's.A scientist paradigm started to blossom, rooted in "the individualistic and relativistic discourses of modernity", and sharing "with fundamentalism a positivist and technological ethos"13 (Schüssler Fiorenza 1999:42).In a "scientific" positivist paradigm the emphasis is on value-free inquiry, appeals are made to the notion of a historical gap between past and present, all of which are couched in and focused on the universal applicability of the interpretation underwritten by the notion of a single, correct meaning of a text (Schüssler Fiorenza 1999:41).Rather than espousing a particular theological position, the rhetoric of scientific, disinterested objectivity rejected any recourse to conscious religious, socio-political or theological engagement as unscientific.In South Africa the emphasis shifted to methodology as can be seen in the overwhelming focus on hermeneutical methods (cf Punt 1998a) during the particularly stormy period of the dying years of Apartheid South Africa. It was feminist and liberation theological interpretation in particular that contributed to the emergence of a (post-)modern hermeneutical or cultural paradigm in the last decade in biblical studies.Destabilising (to some extent) the positivist scientific ethos of the field through its rhetorical and practical force, the cultural relativist paradigm "underscores the rhetoricity of historical knowledge, symbolic power, and the multidimensional character of texts" (Schüssler Fiorenza 1999:43).Challenging notions such as that texts represent divine revelation or act as windows on historical realities, and rejecting a correspondence theory of truth, it insists that texts are "perspectival discourses constructing a range of symbolic universes".However, while the postmodern hermeneutics paradigm subverts the scientist approach with claims to certainty, it still exudes its "own scientific value-neutral and a theological character", often relishing "a playful proliferation of textual meaning" and failing to "address the increasing insecurities of globalized inequality" (Schüssler Fiorenza 1999:43).This paradigm has passed the South African landscape largely by without much influence,14 probably because the former two paradigms were so tightly in place. However, a fourth paradigm is slowly emerging in the South African academy also, even if is yet to show its potential in most ecclesial communities and hardly posing a major challenge or destabilising the first two paradigms.Seeking "to redefine the self-understanding of biblical scholarship in ethical, rhetorical, political, cultural, emancipatory terms" and the scholar's role as socially engaged, transformative figure, a rhetorical-emancipatory paradigm views "biblical texts as rhetorical discourses that must be investigated as to their persuasive power and argumentative functions in particular historical and cultural situations" (Schüssler Fiorenza 1999:44).The emphasis on biblical scholarship's public character and socio-political responsibility goes beyond role-location, informing historical (re)constructions as much as contemporary interpretation of texts. 
The emancipatory paradigm requires a critical socio-political interpretation of the Bible, "[s]ince language not only creates a polysemy of meaning but also transmits values and re-inscribes social systems and semantic patterns of behaviour" (Schüssler Fiorenza 1999:46). This useful taxonomy assists in evaluating the broader landscape of engagement with the Bible in South Africa, providing evidence of shifts that have taken place in the academy, if not always to the same extent in the church. The tenacity of the dogmatic-fundamentalist and the scientific positivist paradigms in South African biblical hermeneutics requires some more attention.

2.2 A scientist approach amidst shifts in South African biblical hermeneutics

Until fairly recently, and at least until the 1990s, South African New Testament scholarship largely mirrored the local academy as elitist and populated by white males. The Apartheid system was not only legitimated by missiologists who followed Warneck, but also by systematic theologians (such as the neo-Kuyperians like J D du Toit and F J M Potgieter) and biblical scholars (e g Groenewald and Snyman), keen to provide biblical and theological justification for racially separate churches and a political system of separation (Apartheid) (cf Cloete 2003:276; Naude 2005:12; Vorster 1983:94-111). The doctrinal-fundamentalist paradigm played an important role in these developments, complete with appeals to biblical interpretations assuming the status of divine revelation.

While South African biblical scholarship has mostly proceeded beyond a doctrinal-fundamentalist paradigm during the last decade or two, its entanglement in a scientist, positivist paradigm is evident.16 Evaluating New Testament studies in terms of responsiveness at epistemological (knowledge), social and political levels, the systematic theologian Naude recently concluded that scholars still need to make their implicit epistemological assumptions explicit and to commit themselves to emancipatory cognitive interests (Habermas) enacted through self-reflection and ideology critique.17 He also challenged South African New Testament scholars for what he perceived as their neglect of matters ethical, and especially a deficit regarding work on the ethics of interpretation, which is of particular significance in the South African context. Using three elements of public theology18 as his criteria, Naude found the voice of New Testament scholarship to be silent amidst public concerns about moral identity in a transitional society, retributive justice issues such as affirmative action and land distribution in South Africa, and the HIV/AIDS pandemic (Naude 2005:11).19 The accusation levelled at the (biblical) scholarly community is therefore that it is not yet engaging society to the extent that is necessary, especially in a young democracy such as South Africa.

These observations gain special significance in the African setting, given the involvement of the broader society with the Bible. African biblical hermeneutics are often argued to be characterised by a threefold set of interests, showing a predilection for historical and sociological matters20 (cf Gottwald 2000:91), with some attention reserved for thematic and symbolic dimensions of the text, as well as literary21 interpretative interests (West 2004:166).
However, whether these should be conceived of as neat divisions or rather as focal points in an otherwise messy discourse and practice of biblical interpretation is another question. Claims about the "dominance" of African biblical interpretation23 by "socio-historical interpretative interests" and that its distinctiveness is situated in "the life interests that African interpreters bring to the text and how these life interests interact with their (predominantly socio-historical) interpretative interests" (West 2004:167) might not adequately deal with the hybrid, pastiche-nature of such hermeneutical patterns.24 In the end, the ongoing impact of the Bible on South African society can hardly be denied, even if it may at times be uneasy and varying. Identifying specific hermeneutical or interpretative paradigms is important, but especially in a context such as South Africa, with its rich if not always wholesome history of biblical reception, attention to the role of reception - the world in front of the text - is at least of equal importance.

TEXTUAL RECEPTION: INFLUENCE AND IMPACT

The Bible is not a neutral text, and neither have its various interactions with society been neutral; in fact, the Bible remains a deeply contested document, used in many different social struggles (e g Germond 1997:190). And therefore, "[i]f biblical scholarship is more than history and philology, it must take account of the context of the Bible, not only the original Sitz im Leben, but also its continuing 'contextualisation' in the religious communities that have preserved it and for whom it makes sense" (Sawyer 1990:319). Not only the biblical texts but also the traditions of their interpretation need to be considered when the impact of texts and accompanying hermeneutical practice(s) are investigated. In short, reading is influenced by traditions of interpretation, while also generating and contributing to those reading traditions.

Considering the impact and influence of the Bible from the perspective of the "world in front of the text" implies at least two aspects. Firstly, the traditions lying behind and leading to the formation of the text in its current form, or the tradition of interpretation, has to be investigated. The relationship between the formation of the text and its interpretation by the communities of faith has been summarised by the claim that as much as there cannot be a church without the Bible, there cannot be a Bible without the church:25 "the church … received the apostolic witness, selected the canon, and gave the biblical witness unity by its interpretation" (Froehlich 1991a:7).26 But it is the second aspect of the world in front of the text which is sharply in focus in this paper, and which requires study of the contemporary setting of reception with everything this entails,27 such as social location, ideological stance and various other elements. Accounting for the relationship and interaction between literary and general history, Hans Robert Jauss, a student of Gadamer, described the effect of literature on society, or the "socially formative function of literature". Each successive reading of a text contributes to the shaping of a horizon of expectation which subsequent readers bring to the text, and thereby influences its understanding (Jauss 1982:40; 142ff).28 This notion allows the literary text to assume an active role in its reception, "calling into question and altering social conventions through both content and form" (Holub 1984:68).
The value of the text's history of effects for its current interpretation emerges in a twofold relationship: the texts themselves are the product of a history of effects, and the texts are accompanied by a history of effects constituted by the various ways and forms of interpretation made by the Church through the centuries (Luz 1994:23). On the one hand, then, it must be realised that texts "are not the ultimate point of departure nor the ultimate authority but products of human reception, human experiences, and human history", and therefore secondary to the encounter between people and the divine. On the other hand, the biblical texts have a history of effects which cannot be detached from the texts, because it is an expression of the power and significance of the texts. In fact, the effects can often not be separated from the texts because of the difficulties in determining where the texts end and the effects begin.

In fact, the interpreted text can gradually replace the text to be interpreted; in other words, commentary replaces the text (Boone 1989:78-80, referring to Foucault). Although interpretation itself necessarily recasts the texts, the unwillingness to account for this process almost inevitably leads to a cover-up of how interpretation is prejudiced towards the "finalisation" of the text. Histories and traditions of interpretation have over many centuries played a more central informative and even formative role in Reformed traditions than what is often admitted. When the accepted or traditional way(s) of interpreting the Bible becomes the only authentic interpretation and effectively replaces the Bible,29 the interpreted text eventually constitutes the authority (Boone 1989:95).

Along with the importance of stressing the interactive involvement and constitutive role of the reader in the interpretive process, the equally influential role which the interpretive traditions play in the real readers' construction of the text should not be minimised.30 Interpretation and the generation of meaning are the result of an interactive process between reader and text, but never in a neutral way: the text is filtered by and through the reader, and the text is best viewed as construction (Segovia 1995c:296, cf 1995a:28-31; 1995b:7-17). A concern with the real readers of the texts within their social contexts - historical, cultural, political, economical and so on - is important, but the emphasis on the reader as a social and historical individual, that is, as part of a social and historical community, and therefore accounting for the tradition of interpretation's control over the current reading of the text, remains equally important.

Not to account for the way in which an established traditional reading or interpretation virtually ostracises - if not supplants - the original text31 is eventually to run the risk of uncritically reinterpreting the tradition-embalmed text, that is, interpreting the text without regard for, and without accounting for, the tradition and its influence.32 What happens particularly in midrash can probably be found in all interpretation, but is, however, not always consciously recognised as such. "The words of the wise are not added to the text; they are the text as well, linking its words to another form, not an integrated, hierarchical system, but an ongoing tradition, a structure of mutual belonging" (Bruns 1990:202). The text becomes the palimpsest onto which the interpretation is copied, with the text soon and increasingly fading from the background position it already assumes.
To put it bluntly, readers soon end up reading their own texts and not the texts which they purport to read (cf Fish, referred to in Boone 1990:65). Some of these trends can be identified in the current gay-debate in South Africa, as a survey of recent DRC documents and decisions shows, underlining the importance of dealing with the reception history of texts and the dangers inherent to its neglect.

SCRIPTURE, THE DUTCH REFORMED CHURCH AND THE GAY-DEBATE

The realisation that the new South Africa has become part of the global community was brought home by its engagement in a number of glocal debates,35 no less the issue of how to deal with homosexuality in the church.36

32 Segovia entertains the idea of the text as "construction" as a particular paradigm ("another major development") for the study of the Bible, but reaches his somewhat different conclusion (the readers as constructors of textual meaning vis-à-vis my emphasis on the reception historical framework fitted onto the text) from another direction (the "role" assigned to the text in biblical hermeneutical paradigms). He points to the significance of cultural studies or ideological criticism in biblical hermeneutics, where "flesh-and-blood" readers' activity with regard to the text is taken into account, and all their interpretive attempts are acknowledged as constructions (Segovia 1995a:7, 28-32).

33 Genette refers to the reworking of texts in different genres and languages as a process similar to the creation of a palimpsest - a new text is written on top of another (or more), previous layer of text, which remains visible to some extent. He refers to his theory as "hypertextuality" and the older layers of text as "hypotexts" (cf Van Zyl-Smit 1996:5). The interpretation or hypertext blurs out the original, to such an extent that the interpreted text displaces the original.

34 Segovia (1995b:16) concludes by saying that all exegesis is in the end eisegesis.

36 It can also be argued, however, that this debate prominently reflects a search for (a new?) identity in a vastly changed national (although not so much, ecclesial) context, and that a strong heterosexist position inclusive of claims regarding family life and values is more reflective of a last-stand approach amidst vast and fast changes in post-Apartheid S A rather than only a specific socio-ethical model. Cf Smit (2004:147) on the DRC's commitment at its 2002 General Synod to serve South Africa and its people, as well as "the continent and its complex challenges".

The concerted efforts to appropriate Scripture in formulating its position on homosexuality point to - but also beyond - the DRC's protestant-reformed confession, which attaches great (ultimate?) value to biblical authority and normativity.37 In a debate often characterised by a dearth of professionalism and even personal integrity, the agreement between the opposing viewpoints seems to be on the importance of using the Bible in ethical decision-making.38 However, a different and diverse situation emerges when the recent statements of the DRC about homosexuality are investigated. Although in both the DRC's synodical commission reports as well as in one of its presbytery's legal commission's reports regarding the investigation of a gay minister there is evidence of academic input, the gay-debate in this church finds itself in a stalemate position, caught up in different hermeneutical paradigms and tied up in a reception history which is not addressed.
4.1 The DRC's use of the Bible re homosexuality: Two case studies39

The DRC, in its official constituencies and reports of various forms and formats, consistently claims biblical sanction for the positions and recommendations formulated with regard to homosexuality. In 1986 the DRC's General Synod decided unambiguously against homosexuality in broad terms, claiming this to be "revealed in Scripture", while a contrary position as formulated in the 2002 AKLAS report also claims that the Bible is "conclusive" (Afr deurslaggewend) for its own deliberations on this matter - as well as in the more general sense of the word (AKLAS 2002:14.2.4; cf AKLAS 2004:4.7.2.2, esp 1).40

AKLAS41 Reports of 2002 and 2004 to the DRC Synods

The minutes of the 1986 General Synod of the DRC contain its decision that proclaimed the sinfulness of homosexuality: "Homosexual practices and a homosexual relationship must be dismissed for being in contradiction with the will of God, as revealed in Scripture" (General Synod 1986:672; own translation and emphasis added). In contrast to this bold decision, extensive reports of the DRC's AKLAS were tabled at the General Synods in 2002 and 2004, presenting much longer and more detailed arguments while refraining from a strong position on either the unreserved acceptance or rejection of homosexuality and lesbigays, but calling for the withdrawal of the 1986 decision and further theological-ethical study on homosexuality within the broader context of human sexuality (AKLAS 2002:14.11.1-3; General Synod 2002:12.11), denouncing any form of sexual promiscuity, and asking forgiveness for the hurt caused to gay (implicitly, LGBT) people in the past (AKLAS 2004:4.10.1-6; General Synod 2004:3.1-7). Both reports carefully situated their discussions of the relevant texts (Lv 18:22; 20:13; Gn 19; Jude 19 and 1 Cor 6:9, 1 Tm 1:10 and Rm 1:26-27) within three broader contexts, namely the reception history of these texts and the enveloping discourse, the biblical, literary context, as well as the contexts of today's modern interpreters. The 2002 AKLAS report dealt with the church's historical struggle to understand homosexuality (14.2) within the context of ethical decision-making (14.3) and the difference between Torah as instruction and law (14.4), before dealing with the biblical texts used with reference to homosexuality (14.5) and Romans 1 in particular, on the basis of the argument about what constitutes the natural (14.6). The 2004 AKLAS report built upon the earlier report, but added reflection on the rationale behind the report (4.2), the issues surrounding terminology (4.3), broader biblical patterns (the ministry of Jesus and of the early Christian church) which could assist in interpreting the relevant texts (4.5), and a social science perspective, focussing on explanations of the origin of homosexuality and arguments regarding the change in sexual orientation (4.6).

40 While space does not allow full discussion of all pertinent and relevant matters in these documents and official decisions, the focus here is on the use of Scripture as it appears in these and other official DRC documents.

41 AKLAS is the DRC's General Synodical advisory committee for doctrinal matters (Afr Algemene Sinodale Komissie vir Leer en Aktuele Sake).
Responding to the 2004 AKLAS report, the General Synod of 2004 also appealed to Scripture with reference to sexuality, but in such a way that "the message of hope and freedom in Christ" can be "communicated" (Afr tuisgebring kan word). It asked the church's members to assist gays and their families, offered its apology where the church in the past acted inappropriately towards gays, and called for a "biblically founded pastoral model to care for gays". Admitting to different perspectives in this debate, the Synod agreed to further investigation of homosexuality. Heterosexual, monogamous relationships are regarded as the only biblically accepted form of marriage, while all people, regardless of sexual orientation, are included in the grace of God. This decision replaced the 1986 decision on homosexuality.

Presbytery of Cape of Good Hope report 2005 (Gaum-case)

In April 2005 accusations of (homo)sexual promiscuity were levelled against a DRC minister, Rev L L B Gaum of the St Stephen's church in Cape Town, by his partner, who shortly afterwards committed suicide.42 The Presbytery of the Cape of Good Hope set its legal commission the task of investigating these accusations, but in the end the focus of the investigation shifted to the question of whether a practising homosexual, regardless of the presence of a monogamous, stable relationship, may (continue to) be a minister of religion in the DRC. The general findings (6.1-6.3) of this report were that although no evidence existed for Gaum's alleged promiscuity, he was guilty of being "not honest and open with his church and his congregation" about his monogamous gay relationship. The commission judged this unacceptable, and recommended to St Stephen's church council that Gaum be dismissed as minister of the congregation and that his clerical status be suspended.43

The report appealed positively to the 2004 decision in denying that homosexual orientation is a punishable sin, but anomalously rejected it in favour of the earlier, 1986 decision of Synod in order to condemn homosexual relationships.44 Privileging the 1986 decision above the 2004 Synod decision is ironic, since the latter called for further study on homosexuality and related biblical texts.45 However, and importantly, the reference to what is considered the scriptural warrant for one man, one woman marital relationships was used to interpret the rest of the decision of the 2004 Synod46 (Presbytery 2005:7.3.3.1 2a), and with that the commission concluded that homosexual relationships per definition fall outside marriage and are promiscuous, regardless of whether gays live in a monogamous, nurturing relationship.

The opinion of the Actuaris Synodi (Registrar) of the Western Cape Synod, as sought by the legal commission (Presbytery 2005:7.3.6), further elucidated both the influence of a hermeneutical reading grid and the control wielded by the church. His opinion implied that gays are condemned to lives of celibacy, since promiscuity is described as any sexual relationship outside of contemporary forms of monogamous, heterosexual marriage, and is claimed to be forbidden by the Bible.
47This particular, if widespread, post-biblical understanding of what constitutes acceptable human sexual relations, structured according to the norm of heterosexuality, seems to override the broader biblical notion of human partnerships where sexuality is lived out within sustained relationships.Rather than allowing for legitimate structures for responsible relationships (including sexual relations), the focus shifted to denouncing a particular form of sexual intercourse.In the end, the actuarius recommended punitive action against Rev Gaum not for his sexual orientation or because of sexual promiscuity, but because his monogamous, faithful relationship was gay and it thus constituted "serious misconduct" when measured against the norm of monogamous, heterosexual marriage. The heteronormative ideology of the report becomes even clearer when it lists as an aggravating condition that Rev Gaum "appeared as if he had no doubts about his interpretation [of homosexuality]", demonstrating the commission's less than unbiased premise in this regard.In fact, the commission was offended that Rev Gaum's position on the matter "did not consider the merits of the church's interpretation of the scriptural givens and admonition" (Presbytery 2005:8.2.3).This remark of course not only presupposed the veracity of the Church's interpretation48 but also disregards the explicit statement in the 2004 Synod decision on different interpretations of Scripture, and the need for more study and discussion on homosexuality (General Synod 2004:3.3,3.4).In short, the new, authoritative text on gays and homosexuality is not so much Scripture as it is an authorising interpretation in the form of a specific, historic formulation by the DRC.49 4.2 Reception-critical remarks on the DRC's use of the Bible re homosexuality These DRC-reports are, in all fairness, quite different in nature and purpose,50 and the point here is not primarily to compare them or to adjudicate the validity of their positions, but rather to conclude by making three short reception-critical remarks on the appropriation of the Bible and hermeneutical trends in the new South Africa. 51Interestingly, the drafters of the official DRC documents above did not consider it important or feasible to clarify their hermeneutical position, although it is not too difficult to infer their hermeneutical points of departure. 52This unfortunately impacts on proper interaction with views expressed whereas it could otherwise have stimulated dialogue by ensuring participants in the debate do not talk pass each other. Reception history obliterating the socio-historical context The socio-historical context for understanding the three New Testament texts traditionally believed to be related to homosexuality are too easily narrowed down to an intratextual world and populated with modern, contemporary ideas about human bodies and sexuality, relationships and values, conventions and norms.Avoiding ethnocentrism, the socio-historical context of ancient Judaism as well as the Greco-Roman world and their conception of homoeroticism have to be distinguished from our (post)modern world and its notion of homosexuality. 
Some crucial aspects of the first-century context to consider when interpreting the NT texts include the complex relationship between sexuality and sex as portrayed in the New Testament. For example, in all he says about marital love, Paul never joins it with sexual relations, and conversely, wherever he discusses sex or marriage, he does not mention love (Klassen 1992:384). Marriage is a covenant; there are obligations and responsibilities, but erotic passion in this context seems to be of little interest to Paul. The first-century, gendered society rested upon gender-related superiority versus inferiority, promoted honour and shame as core (motivational) and gender-determined values, and its power relations revolved around patriarchy.54 So, for example, the nature argument heard so often in the New Testament rested on a gendered cosmology which, in terms of sexually active and passive roles, prescribed roles regarding penetration and in which, conversely, the particular penetration role determined gender.

AKLAS (2002:14.2.4) acknowledges that even if a consensus position regarding biblical condemnation of (modern) homosexuality should be reached, the further problem of relevancy remains, given the clear injunctions against divorce and remarriage, which are nevertheless tolerated in the DRC on the basis of human weakness.55 On the other hand, the commission's report, complete with the actuarius' insistence that "only the love-relationship between one man and one woman can be considered a marriage in biblical sense" (Presbytery 2005:7.3.6), ignores the different forms of marriage found in the Bible, as well as the different ways in which such marriages were contracted and functioned. In the end, a reception history based on a secondary, hermeneutical key incapacitated the legal commission's ability to use the Bible effectively. While the Bible can be enlisted in support of monogamous, heterosexual marriage, the latter cannot simply be imposed as exclusive, interpretative grid for all human, sexual relationships - not without the danger of producing, in the process, a new, authorised or authorising text.

The biblical text, reformatted: A new, authorising version

The important influence of the reception history of texts as well as the need to account for the social location of their readers are not unimportant or negligible, given their impact on interpretation but also on the texts.56 It was, for example, seen above how the notion of a one woman, one man marital relationship was used as hermeneutical key for interpreting texts referring to homoeroticism in the New Testament, and so became the interpretative norm for determining the implication, if not the meaning, of these texts. The hermeneutical key claimed to be derived from the texts rendered a self-evident reading based upon a form of circular reasoning (reading!). But, more devastatingly for the biblical texts, imposing an exclusive, secondary hermeneutical grid on the texts effectively displaces the texts and replaces them as authority. The texts are no longer investigated and researched, but the interpretation which was generated is maintained and defended.
Secondly, the moderature of the DRC and the commission who decided to suspend Gaum in August 2005 appeared to justify this reception history with reference to the difficulty other mainline churches experienced with determining official policy regarding homosexuality, accepting lesbigays as members and, in particular, allowing them to occupy clerical positions. These appeals probably also intended to register the range of the difficulties as broader than the denominational level, and to elicit empathy for the complexity of the decision-making process. The danger with such benchmarking of discussions on homosexuality in the church is that biblical texts are eventually relegated to the common opinion as informed by the force of the reception history of certain texts, or considered hermeneutically problematical unless a common interpretation prevails. At the very least, the Bible cannot be claimed as the decisive criterion57 if the ecclesial practice of other churches, or the majority opinion, is considered more valuable or even decisive.

56 When the importance of the context of the interpreter, in the form of contemporary, affective encounters and experiences, for interpretation is dismissed under the guise of a "biblically based" position (Botha 2005:7) which hints at an ostensibly naïve and neutral interpretative stance, questions have to be raised about the interpretative interests of the claimants. The only position more dangerous than a biased hermeneutical stance is the unwillingness or inability to acknowledge it, allowing it to influence interpretation unwittingly and making it very difficult to account for it or to control its influence, and in the end rendering false notions of neutrality and objectivity!

View of Scripture

As soon as the perception is created that the Bible is a moral-ethical handbook, sometimes in addition to being a catechism of faith, it assumes an oracular status with encyclopaedic value, rather than being the foundational document of Christians, reflecting the earliest believers' faith in and relationship with God. The AKLAS report of 2002 concludes that direct statements of Scripture pertaining to "homosexuality" cannot "summarily" (Afr sondermeer) be seen as binding on people today (AKLAS 2002:14.2.7). It further insists upon the contextualisation of scriptural claims, given the changing nature of the world and advances in scientific knowledge. Rather than invoking literal letters, obedience to God requires the faithful to listen to the Spirit behind the letters - and one way of achieving this is to constantly place single texts within the context of Scripture's message as a whole (AKLAS 2002:14.3.2).58

In the end, it seems in the South African gay-debate that the way(s) in which the Bible is used might just be more important than the exegesis and interpretation of texts (cf Germond 1997:190) for determining a church's position. The relationship between biblical interpretation, authority and power, and marginality clearly needs more investigation, not only but also because of the particular character and role of biblical interpretation in the DRC during Apartheid in South Africa.
Conclusion

While a greater openness, unrestricted by government and other authorial influence, now characterises the academy in post-Apartheid South Africa, communities of faith are generally seen to have retreated from the public, socio-political sphere. Churches are at times accused of espousing an internal awareness, although they still play an important role in various socio-cultural projects, even if no longer at a socio-political level as was the case during the Apartheid years (cf Smit 2004). With the Bible serving as the central document of South African churches, it is often seen as the final court of appeal, if not always as the point of departure. But with regard to the gay-debate, and the complexity and diversity thereof, it is clear that more theological reflection is required on the status and role of the Bible today, on biblical hermeneutics, and on the interstices of biblical reception created by the inevitable cross-pollination between the public domains of academy, church and society.

It has probably never been as important as in our postmodern, post-Apartheid world to recognise that hermeneutics cannot be posited as rules, systems or structures of interpretation - as techno-exegesis - but as being open to and indeed listening to the other as other (esp Gadamer, cf Thiselton 2004:146), in order to reach towards an emancipatory hermeneutics. In the search for truth in the academy, church and society, it is to be remembered in post-Apartheid South Africa also that "biblical hermeneutic is itself grounded in the moral effect of truth, not merely in truth as a doctrinal formulation" (Anderson 1988:91).
Dissipation of Impact Stress Waves within the Artificial Blasting Damage Zone in the Surrounding Rocks of Deep Roadway Artificial explosions are commonly used to prevent rockburst in deep roadways. However, the dissipation of the impact stress wave within the artificial blasting damage zone (ABDZ) of the rocks surrounding a deep roadway has not yet been clarified. The surrounding rocks were divided into the elastic zone, blasting damage zone, plastic zone, and anchorage zone in this research. Meanwhile, the ABDZ was divided into the pulverizing area, fractured area, and cracked area from the inside out. Besides, the model of the normal incidence of the impact stress waves in the ABDZ was established; the attenuation coefficient of the amplitude of the impact stress waves was obtained after it passed through the intact rock mass, and ABDZ, to the anchorage zone. In addition, a numerical simulation was used to study the dynamic response of the vertical stress and impact-induced vibration energy in the surrounding rocks. By doing so, the dissipation of the impact stress waves within the ABDZ of the surrounding rocks was revealed. As demonstrated in the field application, the establishment of the ABDZ in the surrounding rocks reduced the effect of the impactinduced vibration energy on the anchorage support system of the roadway. Introduction South Africa, Poland, Russia, Canada, Australia, and others have gradually begun to mine their deeper coal resources.However, while exploiting deep coal resources, rock burst occurs more often and with greater intensity in mine roadways.According to relevant statistics, approximately 80% of rock burst occurs in the roadways of deep coal seams annually [1].Rock burst is usually a sudden outburst, giving rise to vibration and damage in coal and rock masses.Meanwhile, its energy is expected to eject coal and rock masses so as to block the roadway, leading to collapse and damage to supports and equipment in the stope, as well as casualties.The control of rock burst in deep roadways has been an important topic in the past few years [2,3]. When the excavation in a deep roadway becomes stable, the fractured zone, plastic zone, and elastic zone are expected to be generated in the surrounding rocks: a large elastic strain energy usually accumulates at the junction of the elastic and plastic zones [4,5].Disturbed by the mining of the working face, the elastic strain energy is expected to be spread to the roadway surface in the form of stress waves (the impactinduced vibration energy).When the support system of the roadway (the supporting layer constituted by the anchorage zones of the bolt and anchor cable) cannot resist the impactinduced vibration energy generated by the stress waves, a rock burst occurs [6]. 
When a deep roadway is excavated, the radii of the fractured zone and the plastic zone in the surrounding rocks are affected by many factors, such as the ground stress, the radius of the roadway, and the support resistance of the support.As a matter of fact, the thicknesses of the fractured zone and the plastic zone formed naturally after the excavation usually fail to meet anti-impact requirements.Therefore, pressure relief technology using large diameter drill holes [7], artificial explosions [8], and coal seam water infusion [9] can be used to smash the surrounding rocks.The thickness of the fractured zone and the size of the fractured coal and rock masses are, therefore, regulated artificially, forming an anti-impact system with favourable energy absorption and cushioning performance.The system can absorb the energy released by stress waves and, therefore, provide ideal anti-impact effects.As for artificial explosions used to prevent rock burst, firstly, the drilling rig is usually employed to drill boreholes in the surrounding rocks.Then, explosives are placed in the borehole (usually at the junction of the elastic and plastic zones) before the borehole is sealed.While the artificial blasting damage zone (ABDZ) was established in the surrounding rocks, concentrated elastic strain energy can be transferred to greater depth.Besides, when the elastic strain energy is disturbed and spread outward in the form of stress waves, the fractured coal and rock masses in the ABDZ can dissipate some of the impact stress waves energy [10], as shown in Figure 1.In addition, the support system constituted by flexible energy absorption devices, such as the bolt and anchor cable outside the ABDZ, can resist the rest of energy, thus preventing the rock burst successfully [11]. However, the dissipation of impact stress waves in the fractured coal and rock masses inside the ABDZ is unclear.As a result, the artificial explosions adopted by many mines have difficulty in preventing rock burst. Based on the fractured structures of the coal and rock masses in the ABDZ, a simplified model of the normal incidence of the impact stress waves was established.In this way, the attenuation coefficient of the amplitude of the impact stress waves was obtained after the waves passed through the ABDZ.Meanwhile, a numerical simulation was used to reveal the dissipation of the impact stress waves in the ABDZ.Thereafter, the model was verified in the field. 
Simplified Model of the Normal Incidence of Impact Stress Waves

After the detonation of the spherical explosive cartridge in the surrounding rocks, the pulverizing zone, fractured zone, and crack zone are generated, respectively, from the inside out in the rock mass near the explosive cartridge [12]. This zonal structure caused by the explosion of a single explosive cartridge is called the "spherical rupture element." Moreover, the spherical rupture elements are supposed to constitute the ABDZ (Figure 2). As for a single spherical rupture element, when the elastic strain energy within the surrounding rocks is disturbed by mining, it is supposed to be spread in the coal mass in the form of impact stress waves. In this process, the impact stress wave is expected to pass through the crack zone, the fractured zone, and the pulverizing zone of the spherical rupture element in sequence and then pass through these zones again. At last, it enters the anchorage support zone of the roadway. Owing to the rupture element being spherical, and assuming that the impact stress waves vertically penetrate the subareas of each rupture element in the ABDZ during their propagation, the waves finally act on the anchorage zone.

The Attenuation Coefficients of the Impact Stress Waves within the Intact Rock Mass and the ABDZ in the Surrounding Rocks

The Propagation and Attenuation Coefficient of the Impact Stress Waves within Intact Rock Mass. After being disturbed by mining, the elastic strain energy in the surrounding rocks of the deep roadway is expected to be propagated in the rock masses in the form of stress waves. Assuming that the intact rock mass within the surrounding rocks follows Kelvin-Voigt viscoelastic body behaviour, the amplitude of the stress waves at each point in the rock mass is expected to decrease with increasing propagation distance and time. The two attenuation effects are called spatial and temporal attenuation, respectively [13].

The governing equation of the stress waves in a Kelvin-Voigt viscoelastic body, equation (1), is written in terms of the elastic modulus, the viscosity coefficient, the time, the displacement of the stress waves along the propagation direction, and the density of the intact rock mass. Suppose that the distance between the vibration source and the particle is given; then, according to the harmonic wave equation, the amplitude attenuation of the wave at a certain point of the rock, equation (2), is expressed in terms of the amplitude, the angular frequency, and the responsive wave number of the stress waves in the coal and rock mass. By combining (1) and (2), the attenuation coefficients of the amplitude of the stress waves in a Kelvin-Voigt viscoelastic body, varying with time and space, are obtained as equation (3). Therefore, the amplitude of the stress waves with propagation distance and time in the intact rock mass can be expressed as equation (4), in which the initial amplitude of the impact stress waves appears.
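As a reading aid, a plausible reconstruction of equations (1), (2), and (4) in standard Kelvin-Voigt notation is given below; the symbols E, eta, u, x, t, rho_0, A, omega, k, alpha, beta, and A_0 (elastic modulus, viscosity coefficient, displacement, propagation coordinate, time, density, amplitude, angular frequency, wave number, spatial and temporal attenuation coefficients, and initial amplitude) are conventional choices rather than the paper's own notation, and the closed forms of the attenuation coefficients in equation (3) are not asserted here:

\[
\rho_0 \frac{\partial^2 u}{\partial t^2} = E \frac{\partial^2 u}{\partial x^2} + \eta \frac{\partial^3 u}{\partial x^2\,\partial t},
\qquad
u(x,t) = A\,e^{i(\omega t - kx)},
\qquad
A(x,t) = A_0\,e^{-\alpha x}\,e^{-\beta t}.
\]

Substituting the harmonic form into the governing equation yields a complex wave number whose imaginary part supplies the spatial attenuation coefficient, with the temporal coefficient obtained analogously.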
The Attenuation Coefficients of the Impact Stress Waves in the ABDZ. Since the impact stress waves are projected vertically into the ABDZ, a microcuboidal element is separated from the spherical rupture element in the ABDZ. When the stress waves are propagated to the ABDZ, they are expected to pass through the crack zone, fractured zone, and pulverizing zone of the spherical rupture element in sequence and then pass through these zones again in the opposite order. At last, they act on the anchorage zone of the roadway. Meanwhile, the stress waves are expected to generate complicated transmission and reflection phenomena between the subareas of the blasting damage zone [14], which will affect the propagation and attenuation of the stress waves in the rock mass (Figure 3). When the stress waves vertically reach the interface between the intact rock mass and the blasting damage zone (interface 1), part of the stress waves will be transmitted through this partitioning interface. Then, these waves are expected to undergo repeated transmission and reflection between interfaces 1 and 2 (the interface between the crack zone and the fractured zone in the ABDZ), while some parts of the stress waves are transmitted through interface 2 (with distinct reflected and transmitted amplitudes) and then continue to spread to interface 3. Afterwards, the stress waves continue to be propagated through the remaining zones of the ABDZ in the aforementioned order. Finally, a reduced amplitude is transmitted at interface 6.

Assume that V0, V1, ..., V4 are the velocities of the stress waves within the intact rock mass, crack zone, fractured zone, and pulverizing zone, respectively, and that the corresponding densities are defined in the same way. When the stress waves are propagated from the intact rock mass to the anchorage zone, a transmission coefficient is defined at each of the six interfaces. The widths of the intact rock mass, crack zone, fractured zone, and pulverizing zone are Δ0, Δ1, Δ2, and Δ3, respectively. The initial wavelength of the stress waves is also given.

In accordance with other studies [15], the reflection frequency of the stress waves within the jointed rock mass is determined by the ratio of the wavelength to the width of the joint layer Δi (i = 1, 2, 3), as shown in equation (5). When the stress waves are transmitted through the first joint layer for the first time, the amplitude is attenuated at a rate that depends on the layer width Δ1. After n reflections (assuming that n is an even number) between interfaces 1 and 2, the amplitude of the stress waves is given by equation (6). According to the literature [15], the amplitude of the stress waves in mine rocks can be expressed in terms of the spatial attenuation index, the propagation distance of the stress waves, and a constant value, as in equation (7). From (7), equation (8) is obtained. A per-pass attenuation factor is then defined for each layer crossing (i = 1, 2, 3, 4, 5), where Δ4 = Δ2 and Δ5 = Δ1, since the fractured and crack zones are crossed a second time on the return path.
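The cumulative attenuation described above can be sketched numerically. The sketch below is a minimal illustration, assuming that each interface transmission and each layer pass multiplies the amplitude by a known factor and that intact rock follows an exponential spatial decay; the factor values, function names, and decay form are illustrative assumptions, not quantities taken from the paper.

# Minimal sketch: cumulative amplitude of a stress wave crossing the ABDZ.
# Assumptions (not from the paper): each of the six interface transmission
# coefficients and each of the five per-layer attenuation factors is known;
# the wave crosses the crack, fractured, and pulverizing zones and then the
# fractured and crack zones again before reaching the anchorage zone.

import math

def amplitude_after_abdz(a0, interface_t, layer_eta):
    """Amplitude after all interface transmissions and layer passes.

    a0          -- initial amplitude entering the ABDZ
    interface_t -- transmission coefficients at the six interfaces
    layer_eta   -- per-pass attenuation factors for the five layer crossings
                   (crack, fractured, pulverizing, fractured, crack)
    """
    a = a0
    for t in interface_t:   # multiplicative loss at each interface
        a *= t
    for eta in layer_eta:   # multiplicative loss within each layer pass
        a *= eta
    return a

def spatial_attenuation(a0, alpha, x):
    """Assumed exponential spatial decay a0 * exp(-alpha * x) in intact rock."""
    return a0 * math.exp(-alpha * x)

if __name__ == "__main__":
    a0 = 1.0
    t_coeffs = [0.9, 0.8, 0.7, 0.7, 0.8, 0.9]   # illustrative interface transmissions
    eta_coeffs = [0.85, 0.75, 0.6, 0.75, 0.85]  # illustrative per-pass layer factors
    print(amplitude_after_abdz(a0, t_coeffs, eta_coeffs))   # net amplitude after the ABDZ
    print(spatial_attenuation(a0, alpha=0.05, x=9.0))       # decay over 9 m of intact rock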
After passing through the crack zone, the total amplitude of the attenuated stress waves is given by equation (9). Similarly, after passing through the rock masses in the crack zone, fractured zone, and pulverizing zone and entering the anchorage zone by taking the reverse journey, the amplitude of the stress waves is given by equation (10).

Geological and Mining Conditions in the Field. The 2304 working face in Tangkou coal mine (Zibo Mining Group Co., Ltd., Shandong Province, China) is located in the third coal seam with a burial depth of 1,000 m. The thickness of the coal seam ranges from 2.5 m to 3.5 m with an average thickness of 3.0 m, the dip angle of the coal seam ranges from 1° to 8° with an average dip of 6°, and the Protodyakonov coefficient of the coal is 0.7. The 2304 working face is located at the north flank of the western return airway, and to its south is the 2303 working face, which is currently being mined. There is no excavation in the area to its north. At present, the 2304 working face has not yet been mined, and its ventilation roadway is under excavation (Figure 4). Fully mechanised mining of the inclined retreating longwall is used, with strike and inclination lengths of 200 m and 1,700 m, respectively. The 2304 ventilation roadway is excavated in a rectangular section, which is 4 m wide and 3 m high and supported permanently by cable anchors (Figure 5). Thread steel resin bolts containing no longitudinal reinforcement, with diameters and lengths of 20 mm and 2,400 mm, respectively, are used in the roof. The bolts, each of which is fixed using two CK2850 resin anchor agents, are arranged with row and line spacings on an 800 × 800 mm grid. The anchor cables, which are 17.8 mm in diameter and 9,000 mm long, are set with row and line spacings of 1,200 mm and 1,600 mm, respectively. Moreover, three CK2350 resin anchor agents are placed into each hole for end anchorage. Thread steel resin bolts without longitudinal reinforcement are used on the walls of the roadway. These bolts have a diameter and length of 20 mm and 2,200 mm, respectively, and are fixed at 800 mm intervals (both row and line spacings). In addition, the 100 × 100 mm reinforcing mesh is made of reinforcing bars with a diameter of 6.5 mm. The length, width, and thickness of the steel strip are 3,600 mm, 80 mm, and 5 mm, respectively.

As shown in Figure 4, the 2304 ventilation roadway has been excavated for a length of 1,300 m. During roadway excavation, dynamic accidents such as the projection of coal and rock masses and the collapse of the roof and walls of the roadway took place frequently.
The TDS-6 microseismic acquisition system was used to monitor and collect the signal from the impact stress waves in the heading face of the 2304 ventilation roadway every 0.2 ms.After these collected signals were processed, the velocity-time curves of the signals of the impact stress waves were obtained (Figure 6(a)).To simplify the study, the harmonic stress wave with the maximum vibration velocity and shortest period was selected from the curves obtained above (Figure 6(b)).Then, the selected harmonic stress curve was applied as the impact stress wave acting on the surrounding rocks of the roadway for two periods of the numerical simulation.Rock cores were obtained in the field by drilling.Meanwhile, laboratory testing to find the rock mechanical parameters was carried out according to the Standard for Tests Method of Engineering Rock Masses (GB/T 50266-2013).The lithology and mechanical parameters of the roof and floor are summarised in Table 1. Establishment of the Model and Simulation Schemes. Based on the geological and mining conditions of the 2304 ventilation roadway, the dissipation of the impact-induced vibration energy in the ABDZ was studied.Meanwhile, FLAC 3D was used to establish a numerical calculation model, whose length, width, and height were 52 m, 10 m, and 34 m, respectively.The width, height, and length of the roadway were 4 m, 3 m, and 10 m, respectively.According to in situ stress measurements on the 2304 ventilation roadway, the rock stress was mainly geostatic at 25 MPa in the vertical direction and 32.5 MPa in the horizontal direction (the lateral pressure coefficient being 1.25).Therefore, horizontal restraint was applied to the and -directions of the model, while the bottom of -direction was fixed [16].Besides, the vertical stress ℎ of 25 MPa generated by the self-weight of the overlying strata was imposed on the top boundary.When the calculation reaches a balance after the excavation process, the impact stress waves were imposed.During the simulation, the static boundary was set and an observation line established along the vertical direction at the middle of the roadway roof to monitor the change of elastic strain energy.The simulation schemes are as follows. Scheme 1.After the static calculation reaches equilibrium, impact stress waves are imposed on the boundary between the elastic region and the plastic region at the midline above the roadway roof for subsequent dynamic calculation.This scheme aims to simulate the propagation of impact energy in the non-blast-damage zone.Scheme 2. After reaching equilibrium, the ABDZ, to a range of 2 m, is set on the boundary between the elastic region and the plastic region of the roadway roof according to published data [17] (Figure 7).When the static calculation reaches equilibrium for a second time, the impact stress waves are applied on the boundary between the elastic region and the ABDZ at the midline above the roadway roof for subsequent dynamic calculation. 
The changes in vertical stress and impact-induced vibration energy in the surrounding rocks were observed at the midline above the roadway roof, so as to analyse the influence of the ABDZ on the impact stress waves and determine the dissipation of the impact-induced vibration energy therein.on the roof of the roadway.Then, according to Scheme 1, the impact stress waves are applied to the zone (the interface between the elastic and the plastic regions) at a distance of 9 m from the surface of the roadway roof.According to Scheme 2, the blasting damage zone lying within a range of 2 m was set in the boundary between the elastic and the plastic regions at the midline above the roadway roof: the impact stress waves were then imposed on the boundary between the elastic region and the blasting damage zone (Figure 7).The variations in vertical stress and impact-induced vibration energy of the unit body in the surrounding rocks illustrated in Schemes 1 and 2 are shown in Figures 8-12. (1) The Change of Vertical Stress in the Surrounding Rocks under the Action of the Impact Stress Waves.Under the influence of the impact stress waves for the 3 ms, the change in vertical stress in the surrounding rocks is shown in Figures 8-11. The vertical stress in the roadway roof is large at = 2 ms.It can be explained by the fact that the compression and tension waves are contained within one period of the impact stress waves.The vertical stress in the surrounding rocks is large under the action of the compression wave, while it is expected to decrease when affected by the tension wave.At = 2 ms, according to Scheme 1, the vertical stress, at approximately 9 m above the roadway roof, reaches 70 MPa, with a stress concentration factor of 2.8, while the vertical stress decreases gradually from a point 9 m above the roadway roof to 2 m above it, with the decreasing amplitude per unit length being 6.14 MPa.The decreasing amplitude is approximately 10 MPa⋅m −1 from 2 m above the roadway roof to the roadway surface (Figure 9).In contrast, the maximum vertical stress at the boundary between the ABDZ and the elastic zone in the roadway surrounding rocks (11 m above the roof at = 2 ms) is about 70 MPa in Scheme 2. However, Figure 11 shows the decreasing amplitude of the vertical stress in the ABDZ of the surrounding rocks to be approximately 15 MPa⋅m −1 .Furthermore, the decreasing amplitudes of the vertical stress are 2.85 MPa⋅m −1 from 9 m to 2 m above the roof and 10 MPa⋅m −1 approximately from 2 m above the roof to the roadway surface, separately. (2) The Dynamic Response of the Impact-Induced Vibration Energy in the Surrounding Rocks under the Action of the Impact Stress Waves. 
Figure 12 shows the dynamic response of the impact-induced vibration energy at the medial axis of the roadway roof under the effect of the impact stress waves. For Scheme 1, the impact-induced vibration energy is large (248.6 kJ at 9 m above the roof), while it decreases gradually from 9 m to 2 m above the roof at 20 kJ⋅m−1. The impact-induced vibration energy close to the roadway surface is approximately 30 kJ, where a rock burst is likely. By contrast, in Scheme 2, the impact-induced vibration energy shows a maximum decreasing amplitude in the ABDZ, being approximately 40 kJ⋅m−1. The decreasing amplitude of the impact-induced vibration energy from 9 m to 2 m above the roof is basically the same as that in Scheme 1. The impact-induced vibration energy near the roadway is approximately 2 kJ, which is far smaller than the threshold value (25 kJ) for the occurrence of a rock burst [18].

According to the comparative analysis of the dynamic responses of the vertical stress and the impact-induced vibration energy in the roadway surrounding rocks with, and without, the ABDZ under the action of impact stress waves, the following conclusions can be obtained. Under the action of the impact stress waves, the vertical stress and the impact-induced vibration energy within the roadway surrounding rocks are increased significantly (Figures 8-12). When the impact stress waves are propagated to the roadway surface through the ABDZ of the surrounding rocks, most of the impact-induced energy is dissipated in the form of heat energy, surface energy, and plastic energy [19]. This is because the ABDZ is a fractured structure consisting of multiple joints and fractures, and the impact-induced vibration energy leads to the dislocation, extrusion, and repeated damage of the fractured rock masses [20,21]. As a result, the impact-induced vibration energy is weakened substantially, generating a reduced stress zone in the ABDZ. Meanwhile, the impact-induced vibration energy within the anchorage zone of the surrounding rocks is reduced accordingly (the area situated within 9 m to 11 m above the roadway roof in Figure 12). Therefore, the proposed Scheme 2 can effectively reduce the impact-induced vibration energy.

Application Example

Observation stations (1, 2, and 3 in Figure 4) were set up in the 2304 ventilation roadway. A high-power pneumatic drill was used to drill five boreholes in the two walls and roof of the roadway (Figure 13): the length and diameter of the boreholes were 20 m and 42 mm, respectively. The method of drill bit analysis was used to determine the energy concentration zone in the surrounding rocks. Since the surrounding rocks from 1 m to 3 m depth into the roadway sides are already damaged, the cuttings obtained from depths of 1 to 3 m in the boreholes were abandoned in the field. Samples were therefore cut from no less than 4 m deep. The test results are shown in Figure 14.

As shown in Figure 14, at borehole depths of 9 to 10 m, the cuttings index of the boreholes reaches a maximum (4.1 kg/m). This indicates that strong elastic strain energy is contained herein; if the elastic strain energy is disturbed by the mining of the excavated working face, it is likely to generate intensive impact stress waves and even give rise to a rock burst, because the anchorage support system and the roadway surrounding rocks themselves are not enough to resist the impact-induced vibration energy.
To reduce the possibility of a rock burst in the 2304 ventilation roadway, a drilling rig was used to drill a borehole (length, 9 m and radius, 0.75 m) in the surrounding rocks.Soon afterwards, a spherical explosive cartridge was used to conduct coupling blasting, on the one hand, to transfer the concentration zone of the elastic strain energy to the deeper surrounding rocks.On the other hand, it aims to form an ABDZ, in which the fractured coal and rock masses can dissipate some of the impact-induced vibration energy (Figure 15).Immediately after the explosion, lengthened anchor cables, 15 m long, were installed in the two walls and roof of the roadway (Figure 16). Impact Resistance of the 2304 Ventilation Roadway. To verify the dissipation effect of the ABDZ on the impactinduced vibration energy, the method of drill bit analysis was performed on the aforementioned observation stations (1, 2, and 3 in Figure 4) to determine the cuttings index of the borehole in roadway surrounding rocks (Figure 17).By establishing the ABDZ in the 2304 ventilation roadway, the cuttings index of the boreholes in the surrounding rocks is reduced, with the maximum decrease being approximately 33%.That is, the elastic strain energy in the surrounding rocks of the deep roadway is reduced.Meanwhile, by increasing the number of anchorage cables, the resistance of the anchorage support system in the roadway, to prevent impact-induced vibration energy, is strengthened, which gives rise to a significantly improved impact resistance of the deep roadway.As demonstrated in the field monitoring, no rock burst was found in the 2304 ventilation roadway from November, 2012, to November, 2013, and there was little deformation of the surrounding rocks.Therefore, the 2304 ventilation roadway satisfies the mine ventilation, human entry, and transportation requirements. Conclusions (1) While establishing the artificial blasting damage zone in the surrounding rocks of the deep roadway, the pulverizing zone, fractured zone, and crack zone are expected to be generated in the blasting damage zone to form the spherical rupture element.When the elastic strain energy within the surrounding rocks of the deep roadway is disturbed, it is expected to be propagated through the rock mass in the form of impact stress waves.Thereafter, the stress waves are supposed to pass through the crack zone, fractured zone, and pulverizing zone of the spherical rupture element in sequence and then pass through these zones again in the opposite order, last to enter the anchorage zone of the roadway. (2) A numerical simulation was used to study the changes in vertical stress and elastic strain energy in the surrounding rocks under the action of the impact stress waves.As demonstrated by the simulation results, while establishing the blasting damage zone in the rocks surrounding this deep roadway, the decreasing amplitudes of the vertical stress and the impactinduced vibration energy in the blasting damage zone are 15 MPa⋅m −1 and 40 KJ⋅m −1 , respectively. (3) As demonstrated by the case study, the establishment of the ABDZ in the rocks surrounding this deep roadway and the enhancement of the anchorage support system have strengthened the ability to prevent impact-induced vibration energy.Meanwhile, the effect of the impact-induced vibration energy on the anchorage support system of the roadway is reduced, which gives rise to the significantly increased impact resistance of this deep roadway. 
Figure 1: Stress distribution in the surrounding rocks with the ABDZ of the deep roadway.
Figure 2: Simplified model of the normal incidence of the impact stress waves.
Figure 3: The incidence and reflection of a stress wave on the interface of the joint layer.
Figure 4: Plan layout of the working face.
Figure 9: The change in vertical stress in the surrounding rocks: Scheme 1 (action time = 3 ms).
Figure 10: Dynamic response of the vertical stress in the surrounding rocks: Scheme 2.
Figure 11: Change in vertical stress in the surrounding rocks: Scheme 2.
Figure 12: Dynamic response of the impact-induced vibration energy of the strata in the roadway roof.
Table 1: The lithology and mechanical parameters of the roadway roof and floor.
2018-12-13T11:04:51.914Z
2016-04-10T00:00:00.000
{ "year": 2016, "sha1": "2d3b1266b3b07756fca1643a6f90bd63222912e5", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/sv/2016/4629254.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2d3b1266b3b07756fca1643a6f90bd63222912e5", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Mathematics" ] }
203569898
pes2o/s2orc
v3-fos-license
Uric acid and sphingomyelin enhance autophagy in iPS cell-originated cardiomyocytes through lncRNA MEG3/miR-7-5p/EGFR axis Abstract This study aimed to determine the metabolites associated with ventricular septal defect (VSD) and the underlying mechanisms. Blood samples and thymus tissues were collected from VSD patients to perform LC-MS-based metabolomics assay and generate iPS cell-derived cardiomyocytes, respectively. VSD rat model was used in vivo study. RT-PCR, western blotting, immunohistochemistry, luciferase activity assay, GFP-LC3 adenovirus and GFP and RFP tfLC3 assay, and transmission electron microscopy were performed to investigate the underlying mechanisms. The metabolites uric acid (UA) and sphingomyelin (SM) increased in the serum of VSD patients, along with enhanced autophagy. The combination of UA and SM treatment could promote autophagy and inhibit EGFR and AKT3 expressions. Overexpression of EGFR and AKT3 suppressed autophagy in UA and SM-treated cardiomyocytes, respectively. Also, lncRNA MEG3 knockdown and overexpression could enhance and inhibit autophagy in UA and SM-treated cardiomyocytes, respectively, through targeting miR-7-5p. Moreover, miR-7-5p mimics and inhibitors promoted and inhibited autophagy in UA and SM-treated cardiomyocytes, respectively, via target EGFR. In VSD rat model, upregulation of MEG3 could reverse high level of autophagy and decrease serum UA and SM. In conclusion, UA and SM are essential VSD-associated metabolic biomarkers and MEG3/miR-7-5p/EGFR axis is critical to the regulation of autophagy in cardiomyocytes. Introduction Ventricular septal defect (VSD), accounting for around 40% of cardiac anomalies, is one of the commonest congenital heart diseases [1]. The primary pathophysiological features of VSD appear in the direction and amount of interventricular shunting, the degree of volume loading to the cardiac chambers and prolapse of the aortic valve [2]. With advances in examination techniques, such as imaging and screening, the incidence of VSD has substantially increased, ranging from 1.56 to 53.2 per 1000 newborn babies [3,4]. However, since many clinical features display as asymptomatic and many anomalies close shortly, the accurate prevalence of VSD varies between studies [2]. Therefore, the development of examination technique that can precisely and effectively diagnose VSD could provide reliable evidence for subsequent treatment and improve theoretical management. Metabolomics is a novel research field regarding the highthroughput quantification, classification and characterization of metabolite molecular in metabolome [5]. The metabolic phenotype is the manifestation of metabolic state, showing the environmental contribution, including lifestyle, diet and gut microbe, in a particular condition [6]. Metabolic phenotypes also can be used as evidence that hardly obtains from gene or protein expression profiles [7] and provides a systematic readout of molecular, cellular, physiological status that may be exploited in healthcare and clinical treatment [6]. With regard to cardiovascular disease, Ganna et al. analysed the circulating metabolites in patients with coronary heart disease (CHD) and found that four lipid-related metabolites contribute to the development of CHD [8]. Serum metabolomics study revealed that several metabolic markers, including 2-hydroxy, 2-methylpropanoic acid and erythritol, are associated with heart failure [9]. 
Therefore, metabolomics is an effective and safe approach for the diagnosis, evaluation of severity, and therapy management for diseases in both clinical practice and basic research. The study of human cardiomyocyte was hampered by few ideal vitro models in the early stage [10]. However, this challenge has been overcome by emerging embryonic stem (ES) cells. ES cells are capable of consecutively growing stem cell lines that are first isolated from the mouse blastocysts [11]. ES cells are characterized by the potent potential to differentiate into various cell/tissue type as well as proliferate in the state without differentiation-stimulated conditions for long period of time [12,13]. Induced pluripotent stem (iPS) cells, possessing the differential potential of ES cells, have been identified as one of the promising sources of cardiomyocytes [14]. In our previous report, we collect iPS cells from the thymic epithelial cells of a patient with VSD and successfully induce these iPS cells to differentiate to cardiomyocytes [15], providing a promising vitro model to studying the pathophysiology of VSD. Long noncoding RNAs (lncRNAs) and microRNAs (miRNAs) are two important subclasses of noncoding RNAs (ncRNAs) [16]. It has well documented that both lncRNAs and miRNAs are important for the regulation of transcription and posttranscription and play essential roles in various physiological and pathological processes [17]. In general, lncRNAs act as a molecular sponge for miRNAs via inhibiting miRNAs target mRNAs, while miRANs can regulate the stability and function of lncRNA [18]. So far, growing evidence has been demonstrated that the interactive correlations between lncRNA and miRNA are essential in cardiovascular pathophysiologies, such as cardiac hypertrophy, myocardial infarction, atherosclerosis, cardiac apoptosis and heart failure, which are detailed and documented [19][20][21]. Therefore, understanding the roles of lncRNA and miRNA in VSD would greatly expand our horizon to VSD. In this study, we aimed to determine the VSD-associated metabolites by LC-MS-based metabolomics assay and then apply iPS cell-derived cardiomyocytes and VSD rat model to investigate the mechanisms underlying the effect of identified metabolite in VSD in vitro and in vivo, respectively. The findings of this study may provide novel insights into the molecular architecture of VSD and theoretical evidence for developing new VSD diagnosis strategy. Patients and tissues In this study, all researches involving in human subjects conformed to the Declaration of Helsinki and the Principles of Ethical Publishing in the International Journal of Cardiology. The experimental protocols for this study were approved by the institutional ethics committee and written informed consents were obtained from all patients. All heart tissues and blood samples were collected from patients with left to right shunt congenital heart disease combined with heart failure (VSD þ HF) (n ¼ 20; 6 males and 14 females; age: 1 month to 12 years) and health individuals (n ¼ 24; 10 males and 14 females; age: 4 months to 12 years). In this study, written approval for human thymus tissue collection, iPS cell generation, and biochemical and molecular analysis was obtained from the Ethics Committee for Human Research at Fudan University (approval number: 048). The patient involved in the study was presented with VSD at age 2 years. Written informed consent was obtained from the patient's guardian. 
Metabolomics analysis Liquid chromatography-mass spectrometry (LC-MS)-based metabolomics assay was performed to determine the active metabolite associated with VSD. The detailed procedure was performed as described [22]. Mass spectrometry detections were conducted in either the negative or positive ion mode (full scan mode from m/z 80-1000) with 350 C gas temperature, 4000 V capillary voltage, 11 L/min drying gas flow rate, and 230 V fragmentor voltage. The internal standards were used for calibration of the response of metabolite ions. Isotope-labelled FFA C16:0-d3 and carinitines were used to calibrate the metabolites in negative and positive ion mode, respectively. Other internal standards were used for calibration in both ion modes. QC samples were used to assess the reproducibility. The ion features were obtained and aligned by using MassHunter workstation (Agilent Technologies, Palo Alto, CA). The modified 80% rule was applied to remove missing values [23]. Principal component analysis (PCA), orthogonal signal correction (OSC) partial least-squares-discriminant analysis (PLS-DA), orthogonal partial least squarediscriminant analysis (OPLS-DA) were used after the Pareto scaling. The Variable Importance in the Projection (VIP) value was applied to determine the active metabolites. iPS cell generation and cardiac differentiation In this study, all procedures of iPS cell generation and cardiac differentiation were performed as described in our previous report [15]. iPS cell-derived cardiomyocytes were used to subsequent experiments. Immunohistochemistry (IHC) Heart tissues were fixed in 4% paraformaldehyde for 48 h, embedded in paraffin, and cut into 3-5 lm sections (3 sections for each animal). The detailed procedures followed those in the report [26]. The primary antibodies for P62 (Santa Cruz, Shanghai, China) were applied. Three random regions in the sections were selected to analyse. Cell transfection The sequences of MEG3, EGFR and AKT3 were synthesized and subcloned into pLV-CMV plasmids according to the manufacturer's instructions (Invitrogen, Shanghai, China). si-MEG3, miR-7-5p mimics and inhibitor (25 nmol/L) were obtained from GenePharma (GenePharma, Shanghai, China). Cardiomyocytes were cultured in six-well plate and transfection was performed by using Lipofectamine 2000 (Invitrogen, Carlsbad, CA) according to the manufacturer's instructions. Then, 48 h post-transfection, cardiomyocytes were used for subsequent assessments. Luciferase activity assay TargetScan (www.targetscan.org) and MiRanda (www.microrna.org) were used to online predict target genes of miR-21. MEG3 was predicted to target miR-7-5p and EGFR were predicted as a potential target gene of miR-7-5p. To test these predictions, the luciferase reporter assays were performed according to the manufacturer's instructions (Promega, Madison, WI). The detailed protocol was identical to the previous method [26]. The engineered luciferase reporter plasmids (wild-type and mutant) were transfected with MEG3 and miR-7-5p into cardiomyocytes by using Lipofectamine TM 2000 kit in accordance with the manufacturer's instructions (Invitrogen, Carlsbad, CA), respectively. And, 48-h post-transfection, luciferase activity was assessed by using the Dual-Luciferase Reporter Assay system kit (Promega, Fitchburg, WI). Transmission electron microscopy (TEM) Cardiomyocytes were fixed in 0.1% glutaraldehyde and 4% paraformaldehyde in cacodylate buffer for 24 h at 4 C. 
Then, the samples were sent to the electron microscope room at Shanghai Medical College of Fudan University for subsequent processing and testing. VSD rat model All animal experiments were approved by the Institutional Animal Care and Use Committee of Fudan University and the experimental procedures were conducted according to the approved guidelines. In this study, we investigated the effect of MEG3 on autophagy in vivo study by using VSD rat model. The animal model was early established VSD rat model in our lab according to previous studies [27,28]. Eighteen adult male Sprague-Dawley rats were divided into three groups (n ¼ 6): control, VSD and VSD þ MEG3 mimics. MEG3 mimics (GenePharma, Shanghai, China) was diluted in 20 lL PBS (1 nmol/lL) and intravenously injected into rats every 4 d for a total of 7 times. Before the first and last time of injection, the blood samples were collected from each rat to determine serum concentration of UA and SM by using ELISA assay kits (Abcam, Cambridge, MA) in accordance with the manufacturer's instructions. Heart tissues and cardiomyocytes were harvested from rats using standard protocols and used to subsequent experiments. Statistics In this study, the sample size was at least three replicates in each experiment. Statistical analysis was performed by using SPSS version. 19.0 software (SPSS, Chicago, IL). Data are presented as mean ± SEM Differences between groups were analysed with Student's t-test. Differences were considered to be significant at p < .05. Results Uric acid and sphingomyelin were upregulated in the serum of patients with VSD The VSD-associated metabolites were evaluated by LC-MSbased metabolomics assay with both positive and negative ion modes in the serum samples from VSD and healthy individuals. PCA score plots revealed that VSD and healthy groups were significantly segregated between positive ion mode (R2X ¼ 0.329, Q2 ¼ 0.116) and negative ion mode (R2X ¼ 0.622, Q2¼ À0.00446) (Figure 1(A,B)). Next, a PLS-DA model was performed to determine the metabolic fingerprint changes in VSD patients (Figure 1(C,D)). The scores plot of the PLS-DA model showed a clear segregated distribution between two groups in positive ion mode (R2X ¼ 0.233, The similar observations were also found in OPLS-DA model plot (Figure 1 The VIP values were used to screen VSD-associated metabolites. All selected metabolites were filtered by t-test with a significant difference at p < .05. The results indicated that sphingomyelin (SM) and uric acid (UA) were significantly upregulated in serum samples of VSD patients (Figure 1(G)). The autophagic level was upregulated in cardiomyocytes of patients with VSD In this study, we first evaluated the autophagic level of cardiomyocytes of patients with VSD. P62 is an important autophagy adaptor that is highly associated with the autophagosome formation [29]. By applying immunohistochemistry ( Figure 2(A)) and Western blots assays (Figure 2(B)), protein expression of p62 was downregulated in heart tissues of patients with VSD. We also observed increased autophagic level in VSD heart tissues by TEM assessment (Figure 2(C)). Together, the level of autophagy was increased in heart tissues of patients with VSD. The autophagic level was upregulated in UA and SMtreated cardiomyocytes Since UA and SM were two potential VSD-associated metabolites that were significantly elevated in the serum, we aimed to determine the effects of combinational treatment of UA and SM on cardiomyocytes. 
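As a concrete illustration of the VIP-plus-t-test screening used above to single out UA and SM, the sketch below fits a two-class PLS model, computes a VIP score per ion feature, and keeps features that exceed a VIP cut-off and also pass a t-test at p < .05. The synthetic data, the cut-off of VIP > 1, and the use of scikit-learn are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from scipy import stats
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls, X):
    """Variable Importance in Projection for a fitted PLSRegression model."""
    t = pls.transform(X)        # latent scores of the training data (n_samples, n_comp)
    w = pls.x_weights_          # (n_features, n_comp)
    q = pls.y_loadings_         # (n_targets, n_comp)
    p = w.shape[0]
    # Sum of squares of y explained by each latent component.
    ss = np.sum(q ** 2, axis=0) * np.sum(t ** 2, axis=0)
    w_norm = w / np.linalg.norm(w, axis=0, keepdims=True)
    return np.sqrt(p * (w_norm ** 2 @ ss) / ss.sum())

# Hypothetical feature matrix: two groups, 50 ion features, features 0 and 1 shifted upward.
rng = np.random.default_rng(0)
n_per_group, n_features = 20, 50
healthy = rng.normal(0.0, 1.0, (n_per_group, n_features))
patients = rng.normal(0.0, 1.0, (n_per_group, n_features))
patients[:, :2] += 1.5

X = np.vstack([healthy, patients])
y = np.r_[np.zeros(n_per_group), np.ones(n_per_group)]

pls = PLSRegression(n_components=2).fit(X, y)
vip = vip_scores(pls, X)
pvals = stats.ttest_ind(patients, healthy, axis=0).pvalue

selected = np.where((vip > 1.0) & (pvals < 0.05))[0]
print("candidate disease-associated features:", selected)
```

In this toy run the two artificially shifted features are recovered; on real LC-MS data the same two filters would flag candidate metabolites for targeted confirmation.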
In this study, iPS cell-derived cardiomyocytes were used as the cell model, as described in our previous report [15]. We observed the combination of UA and SM promoted autophagy in cardiomyocytes by TEM assessment (Figure 2(D)). Next, a group of autophagy protein markers was applied to determine the level of autophagy by Western blots. In addition to p62, we added two other autophagy markers beclin-1 and Atg7 to evaluate autophagic level. Beclin-1 is an essential regulator of autophagy and exerts important roles in the initiation of autophagosome formation [30]. As a multifunctional regulator in autophagy, Atg7 is critical for autophagosome formation and promotes maturation of Atg8 (LC3) from an immature cytosolic form to an autophagosomal membrane protein [31,32]. In our study, we found that p62 level was decreased while beclin-1 and Atg7 were increased in UA and SM-treated cardiomyocytes, indicating the formation of autophagosomes was enhanced (Figure 2(E)). In autophagy, autophagosomes engulf cytoplasmic molecules, such as organelles and cytosolic proteins, in which LC3-I, a cytosolic form of LC3, is converted to LC3-II, a form of LC3-phosphatidylethanolamine conjugate that is required to autophagosome formation [33,34]. Therefore, the detection of the level of LC3 has been widely applied to monitor the autophagy-associated process. In our study, to further evaluate upregulated autophagy induced by UA and SM, cardiomyocytes were transfected with GFP-LC3 adenovirus and then were subject to the combination of UA and SM treatment. The results showed that GFP signals were increased in UA and SM-treated cardiomyocytes (Figure 2(F)), suggesting the formation of autophagosomes was increased. Since autophagy is a highly dynamic and complicated process, we also quantified the level of autophagy by detecting autophagic flux, representing the state of autophagic degradation activity [35]. Cardiomyocytes were transfected with the mRFP-GFP-LC3 plasmids as previously described by Kimura et al. [36]. The fluorescent LC3 puncta were imaged by fluorescence microscopy. In this method, the red puncta (mRFP) indicates the normal maturation of autophagosomes into autolysosomes while the green signal (GFP) does not represent the presence of autolysosomes since GFP is unstable in lysosomal degradative and acidic conditions. The yellow puncta displaying both green and red signals indicate autophagy is impaired. In our study, we found cardiomyocytes treated with the combination of UA and SM displayed higher yellow signal than those of control cells (Figure 2(G)), indicating that the autophagic flux was impaired. Collectively, the results indicated that the combination of UA and SM treatment caused more autophagosome formation and impaired autophagic flux, eventually leading to increased autophagy in cardiomyocytes. Under the combination of UA and SM treatment, we also found the mRNA expressions of EGFR and AKT3 were downregulated in cardiomyocytes compared with control cells (Figure 2(H)), suggesting both may be involved in UA and SM-induced autophagy. EGFR and AKT3 regulated UA and SM-induced autophagy in cardiomyocytes To determine the functions of EGFR and AKT3 in UA and SMassociated autophagy in cardiomyocytes, pLV-CMV-EGFR/ AKT3 plasmids were applied to enhance expressions of EGFR and AKT3, respectively, in UA and SM-treated cardiomyocytes and the protein levels of EGFR and AKT3 were significantly increased compared with control cells (Figure 3(A,F)). 
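The read-out logic of the tandem mRFP-GFP-LC3 reporter described above lends itself to a simple per-punctum quantification: a punctum whose GFP signal survives alongside the RFP signal (yellow) is scored as an autophagosome or a sign of blocked flux, whereas an RFP-only punctum is scored as an autolysosome. The sketch below applies that rule to per-punctum channel intensities; the intensity threshold and the example numbers are assumptions for illustration, not values from this study.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Punctum:
    gfp: float   # green channel intensity (quenched in acidic autolysosomes)
    rfp: float   # red channel intensity (stable in autolysosomes)

def classify(puncta: List[Punctum], threshold: float = 50.0):
    """Count yellow (GFP+/RFP+) autophagosomes vs red-only (RFP+) autolysosomes."""
    yellow = sum(1 for p in puncta if p.gfp >= threshold and p.rfp >= threshold)
    red_only = sum(1 for p in puncta if p.gfp < threshold and p.rfp >= threshold)
    total = yellow + red_only
    return {"yellow": yellow, "red_only": red_only,
            "yellow_fraction": yellow / total if total else float("nan")}

# Hypothetical per-punctum intensities, as would come from image segmentation.
control = [Punctum(gfp=20, rfp=120), Punctum(gfp=15, rfp=90), Punctum(gfp=80, rfp=110)]
ua_sm   = [Punctum(gfp=95, rfp=130), Punctum(gfp=88, rfp=100), Punctum(gfp=25, rfp=85)]

print("control:", classify(control))   # mostly red-only puncta -> flux largely intact
print("UA + SM:", classify(ua_sm))     # higher yellow fraction -> flux impaired
```

A higher yellow fraction in the treated condition corresponds to the impaired autophagic flux inferred from the colocalized signals reported above.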
Next, expressions of autophagy protein markers were determined by Western blots assay and it suggested that the expression of p62 was increased while the levels of beclin-1 and Atg7 were decreased in cardiomyocytes treated with both pLV-CMV-EGFR/AKT3 plasmids (Figure 3(B,G)), indicating autophagosomes formation was reduced. These observations were consistent with those found in TEM assessment that the lower autophagic level was found in cardiomyocytes with forced expressions of EGFR or AKT3 (Figure 3(C,H)). Also, GFP-LC3 adenovirus assay revealed that less GFP signals were observed in pLV-CMV-EGFR/AKT3 plasmid-treated cardiomyocytes compared with control cells (Figure 3(D,I)). Furthermore, cardiomyocytes with enhanced expression of EGFR/AKT3 displayed significant less colocalization of RFP-GFP-LC3 plasmids in tfLC3 assay (Figure 3(E,J)). Together, the data revealed the similar roles of EGFR and AKT3 on UA and SM-induced MEG3 was downregulated in UA and SM-induced autophagy in cardiomyocytes To further determine the mechanisms underlying UA and SM-induced autophagy in cardiomyocytes, lncRNA MEG3, an autophagy-associated regulator [37,38], was found significantly decreased in UA and SM-treated cardiomyocytes (Figure 4(A)). Then, we applied siMEG3 and pLV-CMV-MEG3 to knockdown and overexpress MEG3 in UA and SM-treated cardiomyocytes and the results showed that the expression of MEG3 was successfully inhibited or enhanced by siMEG3 and pLV-CMV-MEG3, respectively (Figure 4(B,G)). Regarding the effect of MEG3 on UA and SM-induced autophagy in cardiomyocytes, we found downregulation of MEG3 could further promote autophagy in UA and SM-treated cardiomyocytes. Western blots assay revealed that the level of p62 was decreased while expressions of beclin-1 and Atg7 were increased in cardiomyocytes transfected with siMEG3 ( Figure 4(C)). Also, the enhanced autophagic level was observed in cardiomyocytes with inhibited level of MEG3 as shown in TEM assessment (Figure 4(D)). Furthermore, both GFP-LC3 and RFP-GFP-LC3 signals were increased in siMEG3treated cardiomyocytes in GFP-LC3 adenovirus (Figure 4(E)) and tfLC3 (Figure 4(F)) assay, respectively. Collectively, these data indicated the stimulatory effect of downregulation of MEG3 in autophagy induced by UA and SM in cardiomyocytes. On the other hand, the opposite roles of upregulation of MEG3 were observed in our study, in which increased expression of MEG3 was associated with reduced autophagic level (Figure 4(H-K)). Therefore, these data together demonstrated that MEG3 is essential for the regulation of autophagy in UA and SM-treated cardiomyocytes. MEG3/MiR-7-5/EGFR axis regulated UA and SM-induced autophagy in cardiomyocytes Regarding the correlation between MEG3 and EGFR/AKT3, we found siMEG3 further decreased while pLV-CMV-MEG3 increased levels of EGFR and AKT3 in UA and SM-treated cardiomyocytes, respectively ( Figure 5(A)). Next, through prediction by PicTar (http://pictar.mdc-berlin.de/) and TargetScan (www.targetscan.org/index.html), a binding sequence of MEG3 was found in the 3'UTR of miR-7-5p ( Figure 5(C)). The RT-PCR results suggested that the expression of MEG3 was in inverse ratio to miR-7-5p ( Figure 5(B)). Luciferase activity assay further demonstrated that miR-7-5p is a direct target of MEG3 ( Figure 5(D)). These results indicated that MEG3 acts as a sponge of miR-7-5p in the regulation of cardiomyocytes autophagy. Furthermore, we applied miR-7-5p mimics and inhibitors to determine the correlation between miR-7-5p and EGFR/AKT3. 
The results revealed that miR-7-5p mimics and inhibitors could further decrease and increase, respectively, the expressions of EGFR and AKT3 (Figure 5(E)), indicating that miR-7-5p may act as a molecular sponge of EGFR and AKT3. Through bioinformatic prediction, a predictive binding site of miR-7-5p was found in the 3'UTR of EGFR, but not AKT3 (Figure 5(F)). Meanwhile, the luciferase activity assay suggested that EGFR was a target gene of miR-7-5p (Figure 5(G)). Taken together, the MEG3/miR-7-5p/EGFR axis may play important roles in UA and SM-induced autophagy in cardiomyocytes.

MiR-7-5p was an essential regulator in UA and SM-induced autophagy in cardiomyocytes

To determine the functions of miR-7-5p in autophagy in cardiomyocytes, we performed a series of functional studies to determine the autophagic level in UA and SM-treated cardiomyocytes that were subjected to miR-7-5p mimics and inhibitors, respectively. The data revealed that miR-7-5p mimics significantly inhibited the expression of p62 whereas they elevated the levels of beclin-1 and Atg7 (Figure 6(A)). TEM assessment indicated that an increased autophagic level was observed in cardiomyocytes treated with both the combination of UA and SM and miR-7-5p mimics (Figure 6(B)). Moreover, both GFP-LC3 and RFP-GFP-LC3 signals were elevated in miR-7-5p mimics-treated cardiomyocytes in the GFP-LC3 adenovirus (Figure 6(C)) and tfLC3 (Figure 6(D)) assays, respectively. Therefore, miR-7-5p mimics exerted stimulatory roles in autophagy of UA and SM-treated cardiomyocytes through enhancing autophagosome formation and promoting the impairment of autophagic flux. Notably, miR-7-5p was found to play positive roles in autophagy of UA and SM-treated cardiomyocytes (Figure 6(E-H)). Together, the results suggest that miR-7-5p plays important roles in the modulation of autophagy induced by UA and SM.

Figure 7: LncRNA MEG3 regulated UA and SM-induced autophagy in cardiomyocytes in the in vivo study. (A) MEG3 was decreased and increased in VSD rats and VSD rats treated with MEG3 mimic, respectively. (B) p62 protein level was decreased and increased in VSD rats and VSD rats treated with MEG3 mimic, respectively; beclin-1 and Atg7 protein levels were increased and decreased in VSD rats and VSD rats treated with MEG3 mimic, respectively. (C) Autophagic levels were increased and decreased in VSD rats and VSD rats treated with MEG3 mimic, respectively, as assessed by TEM (scale bar: 200 nm). (D) The number of GFP-LC3-positive cells was increased and decreased in VSD rats and VSD rats treated with MEG3 mimic, respectively, as assessed by the GFP-LC3 adenovirus-transfected autophagic vesicle assay (scale bar: 50 µm). (E) GFP signals were increased and decreased in VSD rats and VSD rats treated with MEG3 mimic, respectively, as assessed by GFP and RFP tandemly tagged LC3 (tfLC3) assays (scale bar: 50 µm). (F) Serum UA and SM were determined by ELISA assay in VSD rats and VSD rats treated with MEG3 mimic. Values are mean ± SEM. For each experiment, at least three samples were available for the analysis. * p < .05, ** p < .01, *** p < .001.

MEG3 played essential roles in autophagy of the VSD rat model

To determine the effects of MEG3 on autophagy in the VSD rat model, we first evaluated the expression of MEG3 in VSD rats and VSD rats treated with MEG3 mimics. The results indicated that, compared with control rats, MEG3 was decreased in cardiomyocytes of VSD rats while increased in VSD rats treated with MEG3 mimics (Figure 7(A)).
The expression of p62 was decreased while the levels of beclin-1 and Atg7 were increased in cardiomyocytes of VSD rats compared with control rats (Figure 7(B)). All three autophagy protein markers were not different from those of control rats (Figure 7(B)). In TEM assessment, the autophagic level was increased VSD rats while was reversed by MEG3 mimics treatment (Figure 7(C)). Similar effects were also found in GFP-LC3 adenovirus ( Figure 7(D)) and tfLC3 (Figure 7(E)) assay, in which autophagy was enhanced in VSD rats whereas was reversed by upregulation of MEG3. Furthermore, serum UA and SM were found to increase in VSD rats while decrease to the levels of control animals by MEG3 mimics (Figure 7(F)). Therefore, the results suggested that MEG3 is a key regulator in the regulation of autophagy of VSD rat model. Discussion VSD is a common congenital malformation of the heart, occurring in approximately 1.56 to 53.2 per 1000 live births [3,4]. Beyond an isolated cardiac anomaly, VSD is also an intrinsic component of multiple anomalies, such as univentricular atrioventricular connection and tetralogy of Fallot [2]. In addition, VSD has been demonstrated to be related to a series of lesions, including congenitally corrected transposition, aortic coarctation, and transposition of the great arteries [39,40]. To date, the pathophysiology of VSD has not been fully understood, in particular, the correlation between VSD and metabolic state. In this study, we first discovered that the serum levels of UA and SM were significantly elevated in VSD patients. In addition, we found that administration of UA and SM promoted autophagy in cardiomyocytes and inhibited expressions of EGER and AKT3. Furthermore, we found that the effects of UA and SM on cardiomyocytes were associated with MEG3/miR-7-5p/EGER axis. Metabolomics analysis is a powerful approach to shed light on the metabolic phenotype that is related to healthcare and pathophysiology [6]. In this study, metabolomics assay revealed that both UA and SM were elevated in the serum of patients with VSD. SM, an essential integral component of cell membranes, is present in lipoproteins [41]. It is reported that patients with CHD have higher plasma SM level than healthy individuals [42,43] and that plasma SM level is regarded as an independent risk factor for coronary artery disease [44]. Additionally, uric acid, the final product of purine metabolism, also has been demonstrated to be associated with chronic heart failure [45], hypertension [46] and CHD [47]. Collectively, the finding that UA and SM were increased in VSD patients' serum is consistent with those reported in previous studies. To further investigate the functions of UA and SM in VSD, we then applied UA and SM treatment to iPS cell-derived cardiomyocytes. The results revealed that the combination of UA and SM treatment significantly enhanced the autophagosomes formation and impaired normal autophagic flux, leading to elevated autophagic level in cardiomyocytes. Autophagy is a highly conserved process by which cytoplasmic constituents and organelles are degraded by the lysosome. As a double-edged sword, cardiomyocyte autophagy has been identified to be critical for normal cardiac function by stabilizing homeostasis and also contribute to the pathogenesis of cardiovascular diseases [48]. For example, lysosomal structures are found to increase in the ventricular biopsies from heart failure patients, indicating increased autophagy level [49]. 
Increased cardiac autophagy has also been observed in heart failure, cardiac hypertrophy, ischemia/reperfusion, and aortic stenosis [48]. Herein, we reported that UA and SM were two inducers for cardiomyocytes autophagy, suggesting that UA and SM may facilitate the VSD progression via promoting cardiomyocytes autophagy. Meanwhile, SM&UA-induced cardiomyocytes autophagy in our study was found to be associated with reduced expression levels of EGFR and AKT3, which suggests the potential roles of EGFR/AKT3 signalling in VSD. The mammalian target of rapamycin (mTOR) signalling plays important roles in various cellular functions [50]. Also, a variety of upstream signals has been demonstrated to modulate mTOR activity through phosphoinositide-3-kinase-protein (PI3K)/kinase B (AKT) signalling pathway, including EGFR, insulin-like growth factor-1 (IGF-1) and vascular endothelial growth factor receptors (VEGFRs) [51], which participates in various physiological activities including metabolism, cell cycle regulation and apoptosis [52]. Hou et al. reported that advanced glycation endproducts promote autophagy in cardiomyocyte via inhibiting PI3K/Akt/mTOR pathway [53]. Li et al. found that miR-199a causes aberrant autophagy and promotes cardiac hypertrophy through mTOR activation [54]. Therefore, the activation of EGFR/AKT3 signalling in this study indicates the involvement of mTOR signalling in cardiomyocytes autophagy in VSD. The importance of lncRNA and miRNA in gene expression regulation has been widely reported, and such significant roles also are greatly essential for various physiological and pathological activities [17]. By application of luciferase assay, we found that MEG3 could directly target miR-7-5p and EGFR was the target gene of miR-7-5p. Manipulation of expression of MEG3 and miR-7-5p could also alter the level of cardiomyocytes autophagy. These results may be interpreted that MEG3 acted as a sponge of miR-7-5p to regulate the activation of downstream mTOR signalling, and in turn, modulate cardiomyocytes autophagy. MiR-7-5P is an important regulator in cell autophagy in various cell type [55][56][57]. Furthermore, in this study, upregulation of MEG3 is associated with reversing the effect of UA and SM on cardiomyocytes and downregulating serum levels of UA and SM in VSD rat model. It is suggested that the involvement of MEG3 in autophagy has been reported in bladder cancer [37], mycobacteria [38] and epithelial ovarian carcinoma [58]. Together, our results demonstrated that the interaction between MEG3 and miR-7-5p, at least in part, played important roles in cardiomyocytes autophagy induced by UA and SM. In conclusion, the results revealed that UA and SM were increased in the serum of patients with VSD and that the combination of UA and SM promotes cardiomyocytes autophagy via MEG3/miR-7-5p/EGFR axis. This study provides a greater understanding of the molecular mechanisms underlying VSD and novel insights into developing diagnosis and treatment strategies for VSD.
2019-09-28T13:02:40.166Z
2019-09-27T00:00:00.000
{ "year": 2019, "sha1": "18f280ac358aefc38182b4eae16f0afb75c5139c", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21691401.2019.1667817?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "d636ef16cb8aa47d504eac48b4cdbc8239d259f9", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
250188684
pes2o/s2orc
v3-fos-license
Efficacy and Safety of Lenvatinib in Anaplastic Thyroid Carcinoma: A Meta-Analysis Background Lenvatinib has shown promising efficacy in targeted therapies that have been tested to treat anaplastic thyroid carcinoma (ATC) in both preclinical and clinical studies. The aim of this study was to evaluate the efficacy and safety of lenvatinib in the treatment of patients with ATC. Methods PubMed, the Cochrane Library, Embase, and ClinicalTrials.gov were searched for potential eligible studies from inception to February 1, 2022. The outcomes included partial response (PR), stable disease (SD), disease control rate (DCR), median progression-free survival (mPFS), and median overall survival (mOS). Effect sizes for all pooled results were presented with 95% CIs with upper and lower limit. Results Ten studies met the inclusion criteria. The aggregated results showed that the pooled PR, SD, and DCR were 15.0%, 42.0%, and 63.0%, respectively. The pooled mPFS and mOS were 3.16 (2.18–5.60) months and 3.16 (2.17–5.64) months, respectively. Furthermore, PFS rate at 3 months (PFSR-3m), PFSR-6m, PFSR-9m, PFSR-12m, and PFSR-15m were 52.0%, 22.5%, 13.9%, 8.4%, and 2.5%, respectively. Meanwhile, the 3-month OS rate (OSR-3m), OSR-6m, OSR-9m, OSR-12m, and OSR-15m were 64.0%, 39.3%, 29.7%, 18.9%, and 14.2%, respectively. The most common adverse events (AEs) of lenvatinib were hypertension (56.6%), proteinuria (32.6%), and fatigue (32%). Conclusions This meta-analysis showed that lenvatinib has meaningful antitumor activity, but limited clinical efficacy in ATC. Systematic Review Registration PROSPERO [https://www.crd.york.ac.uk/PROSPERO/], identifier [CRD42022308624]. INTRODUCTION Anaplastic thyroid carcinoma (ATC), a malignancy derived from undifferentiated thyroid follicular cells (1), accounts for 1%-2% of all thyroid cancers but has a poor prognosis, which accounts for 50% of all thyroid cancer-related deaths (2,3). Most patients with ATC are older, often present with large, very rapidly growing tumors that often cause airway and esophagus compression, and even about half of them have distant metastatic disease at diagnosis. Among patients with ATC, the median survival time was 3-4 months and the 1-year survival rate was approximately 18%-20% (2,4,5). Up to now, there are no effective therapeutic options to treat ATC (6). Recently, in both preclinical and clinical studies, some novel targeted therapies have been tested for treating ATC, but had limited efficacy while lenvatinib has shown some promising and potential results (7,8). Lenvatinib is a multi-target antiangiogenetic broad-spectrum tyrosine kinase inhibitor (TKI) that can inhibit various signal receptors (VEGFR 1-3, FGFR 1-4, PDGFR-a, RET, and KIT proto-oncogenes) (9)(10)(11)(12). In a global phase III study, lenvatinib showed a promising and meaningful efficacy in differentiated thyroid carcinoma (9). Recently, lenvatinib has been regarded as a promising target drug of ATC in Japan due to its significant antitumor effect (13). Evidence from the work of Iwasaki et al. (14) suggested that lenvatinib had a good disease control rate (DCR) and overall survival rate in patients with ATC. However, according to many different clinical studies, great differences in tumor response and survival in ATC patients treated with lenvatinib have been demonstrated. Therefore, this metaanalysis aimed to elucidate the efficacy and safety of lenvatinib in ATC, and hope to offer some guidance for clinical treatment of ATC. 
Protocol and Registration We have registered our protocol on PROSPERO (registration number: CRD42022308624). This meta-analysis followed the Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) statement (15). The PRISMA checklist is provided elsewhere (Supplementary Table S1). Search Strategy and Eligibility Criteria PubMed, the Cochrane Library, Embase, and ClinicalTrials.gov were searched for potential eligible studies. The search was performed from inception to February 1, 2022. The search keywords were "thyroid carcinoma, anaplastic" and "lenvatinib" and the search strategy in PubMed was as follows: Inclusion criteria were as follows (1): studies including patients confirmed with ATC; (2) studies involving patients treated with lenvatinib; and (3) studies reporting either efficacy and/or safety endpoints. Exclusion criteria were as follows: (1) sample size less than 10 patients; and (2) article type: case report, review, conference abstract, and cell or animal study. Quality Assessment Methodological index for non-randomized studies (MINORS) evaluates single-arm studies (16). JBI Critical Appraisal Checklist for Case Series evaluates retrospective studies without a comparison group (17). Data Extraction Two investigators independently made study selection. If there were any differences between them, the third author would discuss with them together. Information on the following characteristics of included studies was recorded: authors, study type, sample size, age, criteria for tumor response [partial response (PR), stable disease (SD), and DCR], adverse events (AEs), and reported endpoints. Statistics Analysis of pooled PR, SD, and DCR, and of the pooled K-M curves of ATC patients treated with lenvatinib was performed using R version 3.6.3. Effect sizes for all pooled results were presented with 95% CIs with upper and lower limit. Heterogeneity between studies was examined using the Abbreviations: ATC, anaplastic thyroid carcinoma; PR, partial response; SD, stable disease; DCR, disease control rate; mPFS, median progression-free survival; mOS, median overall survival; VEGFR, vascular endothelial growth factor receptor; FGFR, fibroblast growth factor receptor; PDGFR, platelet-derived growth factor receptor; AEs, adverse events; MINORS, methodological index for non-randomized studies; CR, complete response; PD, progressive disease; ORR, overall response rate; NE, not estimable; RECIST, Response Evaluation Criteria in Solid Tumors; CTCAE, Common Terminology Criteria for Adverse Events; CBR, clinical benefit rate; NR, not estimable; PFSR, progression-free survival rate; OSR, overall survival rate; TKIs, tyrosine kinase inhibitors. Cochrane Q chi-square test and I 2 statistic. When I 2 ≤ 50%, use the fixed-effects model; otherwise, use the random-effects model. For pooled results with high heterogeneity, the sensitivity analysis was performed by excluding each study individually. Begg's test, Egger's test, and the trim-and-fill method were used to assess publication bias. p < 0.05 was considered statistically significant. Tumor Response We extracted efficacy measures from each study which included in this meta-analysis ( Table 3). These studies were divided into two subgroups, namely, the subgroup of retrospective studies and the subgroup of prospective studies according to study types. Nine studies reported PR as an outcome of clinical activity. The pooled PR was 15.0% (95% CI, 7%-23%, I 2 = 59.0%, p < 0.01), and the pooled PR in subgroups was different (Figure 2A). 
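The model-selection rule described in the Statistics subsection (fixed-effect pooling when I² ≤ 50%, random-effects otherwise) can be sketched directly. The code below pools study-level response proportions on the logit scale with inverse-variance weights, computes Cochran's Q and I², and switches to a DerSimonian–Laird random-effects estimate when heterogeneity exceeds the cut-off. The input counts are placeholders, and the logit transform and DL estimator are common choices assumed here rather than details reported by the authors.

```python
import math

def pool_proportions(events, totals, i2_cutoff=0.50, z=1.96):
    """Pool proportions on the logit scale; fixed effect if I^2 <= cutoff, else DL random effects."""
    # Per-study logit proportions and variances (0.5 continuity correction).
    y, v = [], []
    for e, n in zip(events, totals):
        e_adj, n_adj = e + 0.5, n + 1.0
        p = e_adj / n_adj
        y.append(math.log(p / (1 - p)))
        v.append(1 / e_adj + 1 / (n_adj - e_adj))

    w = [1 / vi for vi in v]
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

    # Cochran's Q and I^2 for heterogeneity.
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
    k = len(y)
    i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0

    if i2 <= i2_cutoff:
        est, var, model = fixed, 1 / sum(w), "fixed"
    else:
        # DerSimonian-Laird between-study variance.
        tau2 = max(0.0, (q - (k - 1)) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
        w_re = [1 / (vi + tau2) for vi in v]
        est = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
        var, model = 1 / sum(w_re), "random"

    inv_logit = lambda x: 1 / (1 + math.exp(-x))
    lo, hi = est - z * math.sqrt(var), est + z * math.sqrt(var)
    return {"model": model, "I2": round(i2, 3),
            "pooled": round(inv_logit(est), 3),
            "ci": (round(inv_logit(lo), 3), round(inv_logit(hi), 3))}

# Hypothetical partial-response counts from a handful of studies (not the extracted data).
print(pool_proportions(events=[4, 1, 5, 2, 3], totals=[17, 34, 23, 10, 12]))
```

The same routine, applied metabolite by outcome, reproduces the kind of pooled estimate with 95% CI reported for PR, SD, and DCR.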
In the subgroups of the retrospective study, the pooled PR was 17% (95% CI, 8%-27%, I 2 = 57%, p = 0.02), while the other subgroups showed a pooled PR of 11% (95% CI, 0%-31%, I 2 = 73%, p = 0.05). Publication Bias Egger's test, Begg's test, and the trim-and-fill method were used to identify publication bias in the study. Pooled SD showed no significant publication bias in the included studies, p = 0.509 by Egger's test and p = 0.588 by Begg's test. Graphically, the funnel plot shows potential publication bias (Egger's test, p < 0.05) on the estimated pooled PR and DCR (Supplementary Figure S2). DISCUSSION As a rare and lethal type of thyroid carcinoma, ATC has a poor prognosis, which reports that nearly 50% of patients had metastatic disease at diagnosis (27). Currently, there are limited options for treating ATC, with an estimated first-year mortality rate of 90% (3,28). As previously reported, chemotherapies such as doxorubicin, paclitaxel, and cisplatin did not prolong survival in patients with ATC (29,30). However, the results of Viglietto et al. showed that VEGF was overexpressed in ATC tissues and pointed out that VEGFR expression was also increased in the microvascular endothelial (32,33), which suggested that ATC has many biological targets that can be inhibited and blocked by TKIs. Among these TKIs, some clinical data showed that lenvatinib might provide efficacious benefits to ATC patients (7,13). To evaluate the efficacy and safety of lenvatinib in ATC patients, the data on tumor response, survival, and safety were extracted and analyzed in this meta-analysis. Among all the studies, there were two single-arm, phase II studies, with a relatively large sample size, which may provide more reliable lines of evidence on the efficacy and safety of lenvatinib in ATC. One was a nonrandomized, open-label, multicenter, phase II study (13) including 17 patients, which demonstrated that the PR, SD, DCR, the mPFS, and the mOS were 24%, 71%, 94%, 7.4 months, and 10.6 months, respectively. The other single-arm, phase II study (18) on 34 patients showed that the PR, SD, DCR, mPFS, and mOS were 3%, 50%, 53%, 2.6 months, and 3.2 months, respectively. Differences between two prospective studies may be due to the different ethnicity, tumor pathology, or prior treatment. Our meta-analysis showed that pooled PR, pooled SD, and pooled DCR were 15%, 42%, and 63%, respectively, which demonstrated that lenvatinib showed a potential and meaningful antitumor activity in ATC patients. A study by Tahara et al. showed that 24% of ATC patients treated with lenvatinib achieved PR and 47% achieved SD (7), which was in accordance with the results of Koyama's study (8) that reported 24% achieved PR after lenvatinib in 17 ATC patients. A study on 23 patients reported a DCR of 43.5% (14), and another study on ten patients showed a DCR of 70%, with an mPFS of only 2.7 months (21). In addition, it is questionable whether lenvatinib administration prolongs survival in ATC patients. In the analysis of survival data, the results showed that the pooled mOS and pooled mPFS were 3.16 months and 3.16 months, respectively, which indicated that lenvatinib has a limited efficacy in the treatment of ATC. It should be noted that a report on 124 patients, which was excluded from our study because of its criteria for response, showed a median OS of 101 days, which was in accordance with the results of our study (34), whereas Tahara et al. 
(7) reported that mPFS (7.4 months) and mOS (10.6 months) were longer with lenvatinib for the treatment of ATC. Therefore, we were unable to show a significant effect of lenvatinib in ATC on prolonging survival, which was also not demonstrated in previous studies (14,21). However, compared with other multikinase inhibitors of VEGF receptors, such as pazopanib and sorafenib, which were used as monotherapy for ATC (35,36), lenvatinib actually showed a meaningful antitumor activity in patients with ATC. Medication safety is the focus of treatment. This metaanalysis showed that all patients experienced AEs and the most common AEs in ATC with lenvatinib were hypertension, proteinuria, fatigue, and asthenia, which are related toxic side effects of VEGF-targeted therapy (37). Hypertension was the most common AE and was well controlled by adjusting the dose and administering antihypertensive drugs. With regard to proteinuria, renal failure can be prevented by dose reduction and adequate withdrawal of lenvatinib (38). Lenvatinib-induced fatigue and asthenia can be improved with drug pauses and dose reduction. Furthermore, there were 3 patients who experienced severe hemoptysis and 2 patients underwent pneumothoraxrelated AEs, leading to death in our meta-analysis, which is unclear if lenvatinib was related. Lesions close to large vessels are at risk of bleeding and require careful administration (39). In particular, lesions with a history of external irradiation (40) or fistulae formed in the digestive tract or skin are at risk of rupture of the vessel wall (41). Although a rare complication, pneumothorax onset during lenvatinib treatment for thyroid carcinoma has already been described to be fatal (42). Therefore, careful management and continuous monitoring are required to avoid these AEs, which is critical to improving patient prognosis. The study had some limitations. First, this meta-analysis had a strong heterogeneity among included studies, which may be caused by patient and tumor characteristics, such as tumor burden, prior treatment, and ethnicity. Second, although we included nearly all recent studies, only 10 eligible studies were included in our meta-analysis. Finally, most clinical research reports currently available are retrospective or single-arm studies with small sample sizes. Therefore, randomized and prospective studies with a large sample size are needed to evaluate the efficacy of lenvatinib in ATC. CONCLUSION This study was the first systematic review of the efficacy and safety of lenvatinib in ATC. This meta-analysis showed that lenvatinib has a meaningful but limited clinical efficacy in ATC. Although most AEs can be controlled with dose adjustment or drug discontinuation, evaluation and prevention of fatal AEs are required during treatment. Studies with large sample sizes and randomized controlled trials are needed to confirm the efficacy and safety of lenvatinib in ATC, and provide stronger and highquality evidence. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors. AUTHOR CONTRIBUTIONS DH conceptualized and designed the study. DH and JZ critically assessed studies and extracted data. XZ and MG performed the analysis. DH and JZ wrote the manuscript. All authors contributed to the article and approved the submitted version.
2022-07-02T15:27:39.243Z
2022-06-30T00:00:00.000
{ "year": 2022, "sha1": "d2184b98bc36271a014a23587ce27598b001d2ec", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "86958d0a56154ead86034344206ebe2fc88192a2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
219688062
pes2o/s2orc
v3-fos-license
Immune priming against bacteria in spiders and scorpions? Empirical evidence of immune priming in arthropods keeps growing, both at the within- and trans-generational level. The evidence comes mostly from work on insects and it remains unclear for some other arthropods whether exposure to a non-lethal dose of a pathogen provides protection during a second exposure with a lethal dose. A poorly investigated group are arachnids, with regard to the benefits of immune priming measured as improved survival. Here, we investigated immune priming in two arachnids: the wolf spider Lycosa cerrofloresiana and the scorpion Centruroides granosus. We injected a third of the individuals with lipopolysaccharides of Escherichia coli (LPS, an immune elicitor), another third were injected with the control solution (PBS) and the other third were kept naive. Four days after the first inoculations, we challenged half of the individuals of each group with an injection of a high dose of E. coli and the other half was treated with the control solution. For scorpions, individuals that were initially injected with PBS or LPS did not differ in their survival rates against the bacterial challenge. Individuals injected with LPS showed higher survival than that of naive individuals as evidence of immune priming. Individuals injected with PBS tended to show higher survival rates than naive individuals, but the difference was not significant—perhaps suggesting a general immune upregulation caused by the wounding done by the needle. For spiders, we did not observe evidence of priming, the bacterial challenge reduced the survival of naive, PBS and LPS individuals at similar rates. Moreover; for scorpions, we performed antibacterial assays of hemolymph samples from the three priming treatments (LPS, PBS and naive) and found that the three treatments reduced bacterial growth but without differences among treatments. As non-model organisms, with some unique differences in their immunological mechanisms as compared to the most studied arthropods (insects), arachnids provide an unexplored field to elucidate the evolution of immune systems. INTRODUCTION The invertebrate immune system was traditionally believed to contain no memory and specificity. This is due to the lack of immune machinery that is needed in order to develop the desired immune response in vertebrates (Rowley & Powell, 2007). However, recent literature has reported that invertebrates exposed to a low dose of a pathogen can obtain protection against a subsequent lethal dose of the same pathogen, a phenomenon termed as immune priming (Little & Kraaijeveld, 2004). This improved immune response can be observed within a few days after the priming, in later stages of the individual ('within-generation immune priming ', Milutinović & Kurtz, 2016) or even transferred to the offspring ('trans-generational immune priming', Tetreau et al., 2019). Even though it is thought that the immune system of arthropods is well conserved across species, based on an innate immune system, consisting of cellular and humoral responses (Rowley & Powell, 2007), recent studies showed there exists some variation across taxa and the insect immune system that which does not necessarily characterize other arthropods. For instance, Bechsgaard et al. 
(2016) discovered that some genes involved in pathways for pathogen recognition (e.g., bacteria) have been lost in arachnids and the humoral immune effector proteins (antimicrobial peptides, AMPs) are apparently not induced as it is the case for insects, but they are constitutively produced, a trend also observed by previous studies (Lorenzini et al., 2003;Fukuzawa et al., 2008;Baumann et al., 2010;González-Tokman et al., 2014). In other arachnids, the evidence seems to suggest a complete absence of an induced immune response (Santos-Matos et al., 2017). Another example of dissimilarities between insects and arachnids is the evidence indicating that phagocytosis plays a role in the immune priming of insects (Pham et al., 2007;Wu et al., 2015a). However, in spiders, phagocytosis seems to play a minor role in defense when compared to AMPs and coagulation (Fukuzawa et al., 2008). Overall, whether these differences in arachnids' immune systems influence their capacity to mount an immune priming response is unclear. Immunological studies and evidence of immune priming in arachnids come mainly from work with ticks, given their medical importance, with evidence of upregulation (Nakajima et al., 2001;Matsuo et al., 2004) and improved survival after exposure to an immune elicitor, controlled by molecular pathways that are apparently unique to ticks (Shaw et al., 2017). Moreover, blood-feeding can strongly upregulate defensin genes in the midgut, which normally occurs in the fat body after bacterial infection in insects (review in Taylor, 2006). Ticks as hematophagous are an atypical group of arachnids in terms of the use of immune defenses; for instance, ticks can use fragments of the host blood for their own defense against bacteria in the midgut level (Nakajima et al., 2003;Nakajima et al., 2005), together with their own antibacterial peptides (Nakajima et al., 2005) or with the influence of commensal and symbiont bacteria (Chávez et al., 2017). In contrast, knowledge about the immune system of other arachnids remains mostly unknown. In fact, no experimental study has investigated immune priming in terms of increased survival in non-hematophagous arachnids like spiders or scorpions (Milutinović & Kurtz, 2016;Milutinović et al., 2016). By studying the immune response of other arachnids, analogies and differences with other taxa can be established in order to understand the evolution of the immune systems in invertebrates. Here, we performed the first test of immune priming in spiders and scorpions in terms of improved survival. We investigated whether the wolf spider Lycosa cerrofloresiana (Lycosidae) and the scorpion Centruroides granosus (Buthidae) can mount an immune priming response when injected with lipopolysaccharides (LPS) of Escherichia coli and subsequently challenged with a lethal dose of the same bacteria. If antimicrobial peptides are constitutively produced, then their immune system may always be prepared for an immune challenge and exposure to a priming agent may not be required. Alternatively, priming would both trigger the release of constitutive components and induce recruitment of production of higher levels of antimicrobials components. Study species This study was carried out with two nocturnal terrestrial predators, the wolf spider Lycosa cerrofloresiana Petrunkevitch, 1925 and the scorpion Centruroides granosus Thorell, 1876 (Buthidae). Lycosa cerrofloresiana is found from El Salvador to Panama (World Spider Catalog, 2019), while C. 
granosus is endemic to Panama (De Armas, Teruel & Kovařík, 2011). For both species, all the existing literature is on aspects of taxonomy and distribution (e.g., De Armas, Teruel & Kovařík, 2011;World Spider Catalog, 2019 and references therein). Still, Centruroides granosus prey on a variety of arthropods, including insects and other arachnids (Miranda et al., 2015). Literature on the diet of the wolf spider is missing but we have noticed spiders eating crickets and cockroaches in the field. Spiders were collected from a baseball field in the town of Gamboa (09 • 07 05.1596 , −079 • 42 03.5266 ) and scorpions were collected from a dirt road in the town of Polanco (08 • 45 44.3196 , −079 • 48 22.8618 ). All individuals were fed with the cricket Acheta domesticus, one week before the experiments. The study did not involve unethical handling of animals and did not require permits for experimentation by the Bioethics Office from the University of Panama. We collected all specimens under the collection permit SE/AH-2-18 issued by the 'Ministerio de Ambiente', the government entity in charge of the management of natural resources. Immune priming A strain of Escherichia coli was used for the experiments, which was obtained through isolation with selective media by the Department of Microbiology of the Biology School at the University of Panama. Tests of virulence of this strain produced high mortality in both spiders and scorpions (Supplementary material). Previous studies have used E. coli via injection or pricking as an immune elicitor in other arthropods (Eleftherianos et al., 2006;Roth & Kurtz, 2009;Erler, Popp & Lattorff, 2011;Santos-Matos et al., 2017) and arachnids (Sonenshine et al., 2003;Santos-Matos et al., 2017). We used chilling anesthesia for all injections, which consisted of placing scorpions and spiders at 4 • C for 20 min. In order to stimulate priming, we injected spiders with 138 nL of LPS in PBS (0.5 mg / mL; Sigma: L8274; hereafter LPS) by using a Nanoliter 2010 injector (WPI, Florida, USA). For scorpions, we picked 100 µL of the LPS solution with a micropipette to fill insulin syringes used for the injections. Control groups consisted of individuals injected only with PBS and another group of untreated individuals (naive) to test whether the mechanical damage caused by the injections was enough to prime the immune system. For spiders, the injection procedure during the priming caused around 1% mortality and there was no mortality in scorpions. For the bacterial challenge, bacteria were cultured overnight on lysogeny broth (LB) at 27 • C. We centrifuged 14 ml of the culture (LD 50 1 × 10 7 cells / mL) at 4,000 rpm for 5 min, the pellet was washed with PBS and resuspended in 14 ml of PBS. Four days after the initial injections, half of the individuals in each treatment were injected with the bacterial solution (138 nL for spiders and 100 µL for scorpions; Naive -Challenged, PBS -Challenged, LPS -Challenged, see Fig. 1 for details on sample sizes). As controls, the other half of the individuals of each treatment were injected only with PBS (138 nL for spiders and 100 µL for scorpions; Naive -Control, PBS -Control, LPS -Control, see Fig. 1 for details on sample sizes). We performed the experiments twice, on separate dates and monitored the survival of spiders and scorpions for 15 days after the final challenge. Antibacterial activity For these measurements, we were only able to collect sufficient hemolymph samples from individual scorpions. 
To test whether the priming with LPS upregulated the production of antimicrobial components found in the hemolymph, we measured antibacterial activity following a protocol modified from Wu et al. (2014). Three days after the priming phase, we collected 10 µL of hemolymph from each treatment (Naive: n = 9; PBS: n = 6 and LPS: n = 9) by pricking chilled animals; samples were placed immediately on ice and later stored at −20 °C. The antibacterial test consisted of mixing 10 µL of cell-free hemolymph (centrifuged at 4,000 rpm for 5 min) with 10 µL of E. coli culture (1 × 10⁷ cells/mL) in 180 µL of LB, incubated for 14 h at 27 °C in 1.5 mL Eppendorf tubes. Antibacterial activity was quantified as inhibition of bacterial growth in the samples by measuring optical density at 630 nm on a 96-well microplate reader. To evaluate whether the hemolymph samples inhibited bacterial growth, we used a positive control in which we placed 10 µL of E. coli culture in 190 µL of LB (three replicates). Statistical analysis All analyses were performed in R (R Development Core Team, 2019). Kaplan-Meier survival analysis was carried out to test for differences in survival rates between treatments as implemented in the package 'survival'. Moreover, we tested for differences between sexes in both species as a fixed factor. We used the Gehan-Breslow-Wilcoxon test to compare survival rates across treatments at early time points and the log-rank test to compare treatments at the end of the experiments (package survMisc). For the antibacterial activity, we performed a one-sample Wilcoxon test for each treatment to assess whether the priming treatment reduced bacterial growth as compared to the mean bacterial growth in the absence of hemolymph (OD630 = 0.763). To compare treatments, we carried out a Kruskal-Wallis test. Immune priming For scorpions, overall, sex had no effect on survival (log-rank: z = −0.04, p = 0.97). The bacterial challenge significantly reduced survival in each treatment (Naive -Bacteria vs Naive -PBS, PBS -Bacteria vs PBS -PBS, LPS -Bacteria vs LPS -PBS, Fig. 1A, Table 1). We found evidence of immune priming because scorpions initially injected with LPS showed higher survival against the bacterial challenge than naive scorpions (LPS -Bacteria vs Naive -Bacteria, Fig. 1A, Table 1). Although the results suggest that the priming could be elicited by the wounding caused by the injection, this trend was not significant overall (PBS -Bacteria vs Naive -Bacteria, Table 1) nor during the early stages of the infection (Gehan-Breslow-Wilcoxon test in Table 1). Survival against the bacterial challenge did not differ significantly between scorpions initially injected with PBS and those injected with LPS (PBS -Bacteria vs LPS -Bacteria, Fig. 1A, Table 1). The survival of the controls of the three treatments was not significantly different (Naive -PBS vs PBS -PBS, Naive -PBS vs LPS -PBS, PBS -PBS vs LPS -PBS, Fig. 1A, Table 1). Table 1 reports the pairwise survival comparisons of the priming treatments exposed to the control solution and to the bacterial challenge. For spiders, the influence of sex on survival was investigated in the first trial and was not significant (z = −1.89, p = 0.06). The bacterial challenge significantly reduced the survival of all the priming treatments (Naive -Bacteria vs Naive -PBS, PBS -Bacteria vs PBS -PBS, LPS -Bacteria vs LPS -PBS, Fig. 1B, Table 1).
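As a minimal illustration of the antibacterial-activity statistics described above (a one-sample Wilcoxon test against the growth of the positive control, followed by a Kruskal-Wallis comparison between treatments), the sketch below uses Python with SciPy rather than R; the OD630 readings are purely illustrative placeholders, and only the 0.763 baseline is taken from the text.

```python
import numpy as np
from scipy.stats import wilcoxon, kruskal

BASELINE_OD = 0.763  # mean bacterial growth (OD630) without hemolymph, from the text

# illustrative OD630 readings per priming treatment (not the real data)
od = {
    "naive": np.array([0.71, 0.75, 0.78, 0.74, 0.76, 0.72, 0.77, 0.73, 0.69]),
    "pbs":   np.array([0.74, 0.72, 0.77, 0.75, 0.73, 0.76]),
    "lps":   np.array([0.70, 0.74, 0.68, 0.76, 0.72, 0.75, 0.71, 0.79, 0.73]),
}

# one-sample Wilcoxon: does each treatment's hemolymph inhibit growth relative to the baseline?
for name, values in od.items():
    stat, p = wilcoxon(values - BASELINE_OD)
    print(f"{name}: W = {stat:.1f}, p = {p:.3f}")

# Kruskal-Wallis comparison across the three treatments
h, p = kruskal(od["naive"], od["pbs"], od["lps"])
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.3f}")
```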
The three priming treatments did not vary in the survival against the bacterial challenge (Naive -Bacteria vs PBS -Bacteria, Naive -Bacteria vs LPS -Bacteria, PBS -Bacteria vs LPS -Bacteria, Fig. 1B, Table 1). The controls of the three priming treatments were not significantly different (Naive -PBS vs PBS -PBS, Naive -PBS vs LPS -PBS, PBS -PBS vs LPS -PBS, Fig. 1B, Table 1). DISCUSSION Scorpions as organisms with relatively long lifespans (Lourenço, 2000) are more likely to be exposed to a pathogen multiple times during their lifetime; therefore, they are good candidates to show immune priming (Best et al., 2013). Indeed, we found evidence of immune priming in terms of improved survival for individuals that were treated with LPS as compared to naive individuals. It is unclear whether wounding by itself is sufficient to elicit priming since control individuals (injected with PBS) showed similar survival against the bacteria to individuals injected with LPS or kept naive. Thus, further work should evaluate whether wounding may be sufficient to trigger priming in arachnids as seen in other arthropods (Korner & Schmid-Hempel, 2004;Roth et al., 2010;Nam et al., 2012). Perhaps danger-associated molecular patterns (DAMPs) associated to wound healing could trigger immune priming (Krautz, Arefin & Theopold, 2014) or they may allow the entrance of pathogens that trigger the priming. The presence of LPS in the hemolymph should have triggered the production of AMPs (Rodríguez De La Vega et al., 2004) or other antimicrobial effectors; however, our antibacterial activity assay with scorpions' hemolymph suggests that there was no upregulation of AMPs in primed individuals, in line with previous work in scorpions comparing control and challenged individuals (Cocianich et al., 1993;Ehret-Sabatier et al., 1996). However, the freezing and thawing of the samples may have influenced the antibacterial effect, as it was not a part of the original protocol or perhaps the detection of an effect requires larger sample sizes. Another concern is that the immunological history of the individuals used for experimentation was unknown (e.g., priming occurring before the experiments) and whether this influences the immune priming response. Future studies should try to establish potential model species that could be reared in the laboratory for immunological studies. The improved resistance by priming may result from other factors or in interaction with AMPs in the hemolymph, which might not perform well in the medium used for our assay. Rodríguez De La Vega et al. (2004) found in Centruroides limpidus the existence of inducible AMPs and proposed a cooperative antibacterial activity with constitutive hemolymph components. Still, the differences between the survival experiment and the antibacterial activity illustrate how disease resistance and immunity assays may not correlate or are pathogen dependent (review in Adamo, 2004); consequently, providing different resolutions to the experimental detection of immune priming in arthropods. Furthermore, assays developed for insects may not be appropriate for arachnids as pointed out by other studies (Gilbert, Karp & Uetz, 2016). Future studies should investigate the efficacy of different methods to measure immune components in arachnids. 
In spiders, Gilbert, Karp & Uetz (2016) provided some indirect evidence of immune priming, finding in the wolf spider Schizocosa ocreata that juveniles fed with another gram-negative pathogenic bacteria showed higher encapsulation response against a nylon monofilament implant in the adult stage. In contrast, we did not find benefits in terms of increased survival for wolf spiders that were 'primed' and challenged in the adult stage, suggesting that the age in which priming occurs should be examined. Future studies on arachnids should be aimed at identifying mechanisms, including multiple host -pathogen or host -elicitor (e.g., dead pathogen, other molecules) combinations to evaluate specificity, duration, the effect of symbionts or other potential influential factors. For example, the mode of infection: Keiser et al. (2016) showed that a bacterial cocktail increased mortality of a social spider via cuticular topical application while on the contrary spiders fed with crickets injected with the same bacterial cocktail showed longer lifespans than spiders fed with control crickets. Arachnids offer systems to study other means of defense against pathogens. For instance, the silk of spiders can have antibacterial properties (Wright & Goodacre, 2012) and cuticular antifungals have been found in subsocial spiders (González-Tokman et al., 2014). In addition, there is extensive evidence revealing AMPs in the venom of spiders and scorpions that are active against bacteria, fungi, viruses and parasites in vitro, which is being aimed at medical applications (Santos, Reis & Pimenta, 2016;Wang & Wang, 2016). However, we are not aware of studies that investigated the venom -immune system interaction in arachnids when coping with pathogens. Our priming procedure and lethal injection did not allow the interaction between the venom and the bacteria. One might expect that the deactivation of the bacteria by the venom inoculated in the prey may generate a form of priming agent (e.g., dead bacteria) that would act after ingestion. Despite the inherent differences in the immune system of insects and spiders, immune priming seems to be conserved as a general protection mechanism across arthropods taxa. As non-model organisms, arachnids provide alternative systems to study the evolution of immune systems in non-vertebrate animals and our study adds support to the hypothesis that all organisms should have some sort of acquired immunity (Rimer, Cohen & Friedman, 2014). CONCLUSIONS The aim of the study was to test whether immune priming occurred in two arachnid species: a scorpion and a wolf spider. Injection of bacterial components (LPS) seemed to trigger the immune system of the scorpions as they showed improved survival against alive bacteria as compared to individuals that remained untreated (naive). However, scorpions injected with LPS showed similar survival rates as scorpions injected with only a saline solution (PBS), suggesting that the damage caused by injection may be enough to trigger the upregulation of the immune system. The lack of differences in antibacterial assays with scorpions' hemolymph from the different treatments; together with the lack of evidence for immune priming in spiders, it indicates that the experimental detection of this phenomenon may depend on multiple variables (host -pathogen, priming method, host lifespan, virulence, among other) as proposed in the literature.
A protocol for systematic review and meta-analysis on psychosocial factors related to rehabilitation motivation of stroke patients Abstract Background: Rehabilitation motivation is more important than any other factor in terms of treatment effects among stroke patients. The goal of this study is to explore the variables related to rehabilitation motivation that affect treatment effects and analyze their effect sizes, in order to manage the psychosocial interventions required by stroke patients. Methods: Thirteen electronic databases will be searched from November to December 2020. The search terms will be composed of the disease term part (eg, "stroke") and the intervention term part (eg, "rehabilitation motivation or rehabilitation factors related to motivation or self-efficacy or family support or rehabilitation adherence or achievement or psychosocial factors, including self-motivation, social support, psychological distress, rehabilitation adherence"). Studies selected for the systematic review and meta-analysis will include randomized, quasi-randomized, and nonrandomized controlled trials, and research programs on rehabilitation motivation; qualitative research and case studies will be excluded. The participants will be stroke patients. Two authors will independently assess each study for eligibility and risk of bias, and extract data. Results: This study will comprehensively explore the psychosocial and physical behavioral variables related to the rehabilitation motivation of stroke patients and provide their priorities and effect sizes. In addition, we will report the magnitude of the correlation effect on the rehabilitation motivation of stroke patients according to each demographic variable. Conclusions: The conclusions of our study will provide effective evidence of psychosocial variables that influence the treatment outcomes of stroke patients. PROSPERO registration number: CRD42020207467. S-EN contributed equally to this work. Ethical approval is not required because individual patient data will not be analyzed. The findings of this systematic review will be disseminated through peer-reviewed publications and/or conference presentations. Ethics approval and consent to participate are not applicable. Consent for publication is not applicable. Availability of data and material is not applicable. Data statement is not applicable. Introduction "Patients can move by themselves and lead independent lives." The ultimate goal of all treatment is to allow stroke patients to move by themselves and achieve an independent life. To achieve this, the doctor diagnoses the patient, develops a treatment plan, and implements it. [1] However, treatment effects cannot be achieved through the efforts of doctors alone. It is sensitive work, carried out through interaction with the patient, that brings out the patient's will and motivation to engage in treatment. [2] In particular, rehabilitation treatment proves effective or is discontinued not only according to the patient's rehabilitation motivation [3] but also according to the family's economic capacity and psychological support, which help the patient undergo long-term treatment. [4,5] In the past, rehabilitation treatment from a traditional point of view focused on simply recovering the impaired function rather than prioritizing the patient's will or goal of treatment, and viewed functional recovery as the main treatment effect.
[6] However, around the world, the concept of disability has shifted from permanent damage to the body to the possibility of activity and participation in society. Rehabilitation treatment no longer regards recovery of physical function as the goal of treatment, but has started to pay attention to the patient's return to daily routine. [7] This change in perspective has made it possible to understand the cases in which patients who received rehabilitation treatments were unable to return to their daily lives even though their functional impairment had resolved. [8] In addition, it was predicted that the inability to return to their daily life may occur in patients who have a low probability of cure and require long-term rehabilitation. According to Choi et al, [9,10] such patients would include those with severe diseases that are caused by brain damage, such as stroke, and for which a cure is unlikely at 6 months after onset. In fact, most stroke patients have to completely or partially depend on others, and 12% to 18% of them also experience speech impairment. [11] These physical impairments and perceptions of continuous rehabilitation treatment reduce the adherence of stroke patients [12] and, in severe cases, lead to stopping rehabilitation. [13] Stroke patients may experience anger, frustration, and depression as well as increased economic burden due to the rehabilitation treatment and family discord due to long-term treatment period. [14,15] This negative emotional experience and the persistence of a disability that is difficult to resolve lowers patients' rehabilitation motivation and may cause them to stop rehabilitation. Accordingly, among severely ill patients, especially stroke patients who need long-term rehabilitation treatment, the lower the rehabilitation motivation, the more difficult the rehabilitation treatment becomes. [16,17] In addition, it is important to understand patients' rehabilitation from a psychological point of view, and not only physical, as the patient has to leave the treatment facility and live life with a disability even when physical function is recovered. Since treatment is performed mainly focusing on functional recovery, stroke patients with a low probability of cure may face frustration in the rehabilitation process. In other words, the key to a successful rehabilitation may involve setting a realistic period of rehabilitation. [18] This is not a period of rehabilitation for the recovery of function as it was before the occurrence of the disease, but a goal to return to daily life in a state of recognizing the extent of realistic recovery and accepting the persisting disability. [19] However, there have been no studies investigating the factors related to the rehabilitation motivation, which is a key variable for rehabilitation outcomes and for the ultimate goal of rehabilitation, namely "return to daily life." However, stroke studies have analyzed the relationship between self-efficacy, self-esteem, motivation-related rehabilitation, [15] family support, economic status, [20] and ability as an environmental factor influencing rehabilitation performance. [21] However, it has become necessary to comprehensively analyze and organize these variables related to rehabilitation motivation. Therefore, this study aims to identify variables related to rehabilitation motivation in stroke patients by conducting a systematic literature review and a correlation meta-analysis, and to comprehensively organize the findings of previous studies. 
For this purpose, we aim to identify factors related to "rehabilitation motivation" among stroke patients and provide important information for future rehabilitation interventions. In addition, by classifying and organizing the psychosocial variables and physical behavior factors related to the rehabilitation motivation of stroke patients, the role of psychosocial intervention methods in future rehabilitation treatment will be provided as basic data in the fields of medical welfare and medical humanities. Study registration The protocol for this systematic review was registered in the International Prospective Register of Systematic Reviews (PROSPERO) (registration number: CRD42020207467) on November 10, 2020. The systematic review will be conducted and updated according to this protocol. This protocol will be reported in accordance with the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols 2015 statement [22] and the Cochrane Handbook for Systematic Reviews of Interventions. [23] Data sources The following databases will be searched comprehensively from their inception to November 2020 by 2 independent researchers (MJC and BHJ): 6 English-language databases (MEDLINE via PubMed, EMBASE via Elsevier, the Cochrane Central Register of Controlled Trials, the Allied and Complementary Medicine Database via EBSCO, the Cumulative Index to Nursing and Allied Health Literature via EBSCO, and PsycARTICLES via ProQuest), 5 Korean-language databases (Oriental Medicine Advanced Searching Integrated System, Korean Studies Information Service System, Research Information Service System, Korean Medical Database, and Korea Citation Index), and 2 Chinese-language databases (China National Knowledge Infrastructure and Wanfang Data). We will also search the reference lists of the relevant articles and perform a manual search on Google Scholar to identify additional articles. We will include not only the literature published in journals but also "gray literature" such as theses and conference proceedings. There will be no language restrictions. Search strategies The search terms will be composed of the disease term part (eg, "stroke") and the intervention term part (eg, "rehabilitation motivation or rehabilitation factors related to motivation or self-efficacy or family support or rehabilitation adherence or achievement or psychosocial factors, including self-motivation, social support, psychological distress, rehabilitation adherence"). The search strategies for the MEDLINE and EMBASE databases are shown in Table 1 and will be modified and used similarly for the other databases. Studies selected for the systematic review and meta-analysis will include randomized controlled clinical trials, quasi-randomized controlled trials, controlled (nonrandomized) clinical trials, and research programs on rehabilitation motivation; qualitative research and case studies will be excluded. Types of participants. We will include studies with stroke patients. There will be no restriction on the gender, age, or race of the participants. Types of interventions and comparators. Studies using psychosocial variables or factors related to rehabilitation motivation will be included.
We will also include studies using social behavioral variables such as socioeconomic status and family support, and individual internal and external variables related with the rehabilitation adherence or treatment. There are no comparators. (2001), [24] like other rehabilitation motivation assessment tools used as a measurement and evaluation tool in each study. 2.5.2. The secondary outcome. The secondary outcome measures will use tools that can be evaluated in terms of psychological factors and physical behaviors related to rehabilitation motivation. 1) Rehabilitation adherence The rehabilitation adherence assessment tool for stroke patients developed by Park (2014) [15] consists of a total of 29 questions: 5 on medications, 3 on rehabilitation exercises, 3 on bedsores prevention, 2 on aspiration prevention, and 2 on health behaviors. Adherence measures are of 3 types [25] : (1) Patient monitoring: Patient attendance to rehabilitation sessions is monitored. For each participant, the ratio of sessions attended to scheduled sessions is calculated. Attendance has been used as an adherence measure in previous sports injury research. [26] (2) Sport injury rehabilitation adherence scale [27] (Brewer, Van Raalte, Petitpas, Sklar, & Ditmar, 1995) at each physical therapy appointment, the practitioner (eg, physical therapist or athletic trainer) responsible for the rehabilitation of each participant on that day completes the sport injury rehabilitation adherence scale. (3) Patient self-reports of home exercise: At each rehabilitation session, patients report their degree of completion of prescribed home exercises on a scale ranging from 1 (none) to 10 (all). 2) Modified Barthel index The modified Barthel index developed by Austrian occupational therapists will be used to evaluate the performance of daily activities. 3) National Institutes of Health Stroke Scale, stroke impact scale, Scandinavian stroke scale, stroke specific quality of life scale Study selection The study selection will be conducted by 2 independent researchers, MJC and BHJ, according to the above selection criteria (Table 1). After removing duplicates, we will select and review the titles and abstracts of the searched studies for relevance, and will then evaluate the full texts of the selected studies for eligibility. Any disagreement on study selection will be resolved through discussion with other researchers. The literature selection process will be reported in accordance with the preferred reporting items for systematic review and meta-analysis guidelines [28] (Fig. 1). Data extraction The extracted studies will include the first author's name, year of publication, country, paper title, sample size and number of dropouts, age, and gender of participants, details of intervention and comparison, research design, measurement tools, independent, dependent, mediated, and control variables, and sub-factors related to rehabilitation motivation. For example, a psychosocial variable is extracted as an intervention variable related to the rehabilitation motivation, which is an outcome variable, and then classified as a psychological or social variable, and the subvariables include factors that reduce and improve rehabilitation motivation. Subsequently, the variables related to rehabilitation motivation in each study will be classified and structured as factors (eg, depression as a psychological risk factor, resilience as a protective factor, economic burden as a risk factor, and family support as a protective factor). 
The extracted data will be recorded using Excel 2016 (Microsoft, Redmond, WA) and will be shared among researchers using Dropbox (Dropbox, Inc., CA) folders. We will contact the corresponding authors of the included studies via email to request additional information if the data are insufficient or ambiguous. Quality assessment Two independent researchers, MJC and BHJ, will assess the methodological quality of the included studies and the quality of the evidence for each main finding. Discrepancies will be resolved through discussion with other researchers. The methodological quality of the included studies will be assessed using the Cochrane Collaboration risk-of-bias tool. [29] We will assess random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessments, incomplete outcome data, selective reporting, and other biases for each included study. Each domain will be categorized into 1 of 3 groups: "low risk," "unclear," or "high risk." Each evaluation will be recorded in an Excel 2016 spreadsheet and will be shared among researchers using Dropbox (Dropbox, Inc.) folders. The evaluated results will be presented in the full review using Review Manager version 5.3 (Cochrane, London, UK). The results of the quality of evidence will be presented through a summary-of-findings table. The evaluation process will be shared and discussed by the researchers. Data synthesis and analysis Data synthesis and analysis will be performed using Review Manager Version 5.3 (Cochrane) and Excel 2016, and files will be shared among researchers using Dropbox (Dropbox, Inc.) folders. Descriptive analyses of the details of participants, interventions, and outcomes will be conducted for all included studies. A quantitative synthesis will be performed if there are studies using the same types of intervention, comparison, and outcome measures. The collected data will be analyzed in 2 stages: first, the data will be synthesized and analyzed according to the systematic review process, and then the studies reporting figures that can be meta-analyzed will be classified. In the first stage, the systematic review aims to comprehensively organize and analyze the psychosocial variables related to the rehabilitation motivation of stroke patients, covering studies on the effect of psychological interventions on the recovery of stroke patients and exploring individual psychological and environmental variables, such as support, related to the rehabilitation motivation of stroke patients. Each study will therefore be classified and coded by "author (year of publication)," "subjects (patients)," the psychosocial factors and sub-factors that affect the rehabilitation motivation of stroke patients, the measurement tools of rehabilitation motivation and research methods, the research procedures, and the research results. We will synthesize and analyze each paper in this way. In the second stage, the psychosocial factors related to the rehabilitation motivation of stroke patients used in the meta-analysis will be systematized through discussions and reviews among the researchers. The framework of analysis categories will be defined and coded based on the following items in order to calculate the size of the correlation for each study. The data coding for the meta-analysis will be as follows. First, the psychosocial variables related to the rehabilitation of stroke patients will be classified as psychological or social variables.
Second, psychological and social variables will be divided into risk factors having a negative correlation and protective factors having a positive correlation with rehabilitation motivation. Third, the sub-variables of risk factors and protective factors will be synthesized by identifying the correlation coding in studies on psychosocial variables in stroke patients, reviewing the theoretical background, and classifying each variable into a frame suitable for analysis. After that, we will assess overall publication bias, verify homogeneity, and analyze the overall correlation effect size and the correlation effect sizes between all factors related to rehabilitation motivation. The correlation effect size will be analyzed using Fisher z [30] (.1 for a small effect size, .3 for a medium effect size, and .5 for a large effect size) by checking the correlation coefficient in the 95% confidence interval. Heterogeneity between the studies in terms of effect measures will be assessed using both the chi-squared test and the I-squared statistic. We will consider I-squared values greater than 50% and 75% indicative of substantial and high heterogeneity, respectively. In the meta-analyses, a random effects model will be used when the heterogeneity is significant (I-squared value > 75%), while a fixed effects model will be used when the heterogeneity is non-significant. A fixed effects model will also be used when the number of studies included in the meta-analysis is very small, where inter-study variance estimates have poor accuracy. [31] When it is considered that the heterogeneity is too high for the results to be synthesized (I-squared value > 75%), a subgroup analysis will be conducted as follows to determine the cause of heterogeneity. Assessing the quality of the body of evidence The quality of the evidence will be assessed using the Grading of Recommendations, Assessment, Development, and Evaluation approach, [32] rated according to the following 5 categories: risk of bias, imprecision, inconsistency, indirectness, and other factors such as publication bias. [33] Subgroup analysis If heterogeneity is evaluated as significant (I-squared value > 75%) and the necessary data are available, we will conduct a subgroup analysis to account for the heterogeneity. A subgroup analysis will be conducted according to the following criteria: (1) the stroke rehabilitation period, (2) the hospital stay period, (3) demographic variables, and (4) socioeconomic status. Sensitivity analysis To identify the robustness of the meta-analysis results, we will perform sensitivity analyses by determining the effects of excluding (1) studies with high risks of bias, (2) studies with missing data, and (3) outliers. Assessment of reporting bias If there are more than 10 trials included in the analysis, reporting biases such as publication bias will be assessed using funnel plots. When reporting bias is implied by funnel plot asymmetry, we will attempt to explain possible reasons. Ethics and dissemination Ethical approval will not be needed because the data used in this systematic review will not include individual patient data and there will be no concerns regarding privacy. The results will be disseminated by the publication of a manuscript in a peer-reviewed journal and/or presentation at a relevant conference.
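Although the synthesis will be run in Review Manager as stated above, the Fisher z pooling of correlations and the heterogeneity statistics can be sketched in a few lines of Python; the correlation coefficients and sample sizes below are illustrative placeholders, not extracted study data.

```python
import numpy as np

def pool_correlations(r, n):
    """Fixed-effect pooling of correlations via Fisher's z, with Q and I-squared."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)                      # Fisher z transform of each correlation
    w = n - 3.0                            # inverse-variance weights, since var(z) = 1/(n-3)
    z_bar = np.sum(w * z) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    ci = np.tanh([z_bar - 1.96 * se, z_bar + 1.96 * se])  # back-transform the 95% CI
    q = np.sum(w * (z - z_bar) ** 2)       # Cochran's Q
    df = len(r) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return np.tanh(z_bar), ci, q, i2

# illustrative input: three studies reporting r = .25, .40, .31 with n = 60, 120, 85
r_pooled, ci, q, i2 = pool_correlations([0.25, 0.40, 0.31], [60, 120, 85])
```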
Discussion Rehabilitation treatment for stroke patients is not performed over a short period of time, and the concept of cure does not apply; thus, it is considered that patients are in rehabilitation for a lifetime (Kwon et al, 2003). [34] As such, rehabilitation treatment requires a long period of time, and it is difficult to expect a satisfactory rehabilitation effect without the patient's active participation, a clear goal setting for the rehabilitation period, and the economic support of the family. Even if being an economically rich or with competent therapist, rehabilitation is likely to be stopped if the patient has no or low rehabilitation motivation. In particular, stroke patients show decreased willingness to rehabilitate as well as feelings of frustration and anger when they are not in the shape or situation they expect at a particular stage of recovery through rehabilitation. Therefore, it will be useful to design effective therapeutic interventions to identifying the variables that affect the rehabilitation motivation of stroke patients. However, until now, systematic searches for such variables have not been conducted. Therefore, in this study, we aim to explore the variables related to the rehabilitation motivation, one of the major factors in the treatment of stroke patients. We believe the results of this systematic review will help clinicians optimize treatment protocols for stroke patients. It is also expected that social welfare and health policy makers will be able to identify areas in the public health setting that require intervention to improve treatment for stroke patients.
Survey on Sound and Video Analysis Methods for Monitoring Face-to-Face Module Delivery — The objective of this work is to identify unobtrusive methodologies that allow the monitoring and understanding of the educational environment, during face-to-face activities, through capturing and processing of sound and video signals. It is a survey on applications and techniques that exploit these two signals (sound and video) retrieved in classrooms, offices and other spaces. We categorize such applications based upon the high level characteristics extracted from the analysis of the low level features of the sound and video signals. Through the overview of these technologies, we attempt to achieve a degree of understanding of the human behavior in a smart classroom, on behalf of the students and the teacher. Additionally, we illustrate open-research points for further investigation. Introduction While remote and virtual learning environments are being massively investigated, explored and extended, based on current technological capabilities, the face-to-face learning scenario has remained, largely, unchanged. Students who actively engage in the module delivery tend to understand, learn, remember more, and be more able to appreciate the relevance of what they have learned, than students who passively receive. In the face-to-face learning scenarios, the module delivery is performed within classrooms that students physically attend. In the classroom, a rich set of activities take place and a wealth of signals are generated by the students and the teachers. Such signals and data can offer important information related to student engagement and emotions (which can be predictors of learning). Student attention, collaboration as well as the characteristics of module delivery and teaching styles (e.g. monologues or based on discussion, dialogues and experimentation) are well hidden in these signals. This information can support enriched learning analytics and allow the stakeholders to take informed education -related decisions. While these signals offer live feedback to the teacher and the instructor, who can make adaptations and corrective movements during the module delivery, they remain largely unexploited in terms of their machine-based capturing, processing, analysis, understanding and exploitations. This is the scope of this paper as we perform a survey of research activities that are related to the monitoring, processing and understating of signals in an unobtrusive way in smart spaces. These can be already related to education or they can be potentially exploited in learning scenarios. To support the unobtrusive characteristics of the monitoring framework we focus on sound and video signals. The structure of the paper is the following: Section 2 describes the framework of our survey and sets the criteria for our decision, including the signal and technology selections and the typical architectural layering. Section 3 analyzes the sound signals and the features that are extracted in the research activities that we have considered. We also correlate this analysis with behavioural features that are of interest during module delivery. Section 4 examines the features extracted by video signals and performs a similar correlation. Finally, Section 5 presents the conclusions and suggests some open points for future investigation and research. 
Research Framework The personalized diagnosis, assistance and evaluation of students in open learning environments are challenging tasks, especially when considering real-time, classroom conditions. In the past the members of our team have investigated and designed an open learning environment to monitor the comprehension of students, assess their prior knowledge, build individual learner profiles, provide personalized assistance and, finally, evaluate their performance by using artificial intelligence [1]. Monitoring a student's behavior in text comprehension activities allows the inference of the student cognitive profile and selection of the personalized feedback and support [2]. Our focus is the educational environment itself, and specifically within the face-toface module delivery. We are investigating research efforts, possibly focused upon different application domains that can be adapted and applied for the module delivery. In principle we consider the concept of the Smart Environment (SE) defined as able to acquire and apply knowledge about this environment and its inhabitants in order to improve their experience in that environment [3]. This can be achieved with information acquisition from intelligent sensor devices, communications among sensors, enhanced services offered by intelligent devices and predictive and decisionmaking capabilities. Signal and technology selection framework The infrastructures and the systems under examination have to be unobtrusive in terms of the education processes both for the students and the teacher / instructor. As already discussed, to support this requirement we select methodologies and technologies monitoring and analyzing sound and video signals through typical microphones and cameras, i.e. without special equipment held by the students. The objective of our survey is to identify technologies and mechanisms able to monitor the classroom conditions through the generated signals during module delivery. In the research efforts that have been surveyed, the monitored space is indoor and preferably, but not necessary, related to education / classroom. The infrastructure should be preferable based on cost effective tools and easily deployed tools and equipment. In terms of the architectural approach, we consider two layers, consisting of an operational and an intelligent layers allowing for interaction with the monitored space and themselves, as sufficiently flexible and scalable to accommodate the modules we consider [4]. The operational layer is composed by the sensing (and possibly actuating) capabilities mainly including the sensors and the signal retrieval and adaptation modules. The intelligent layer includes the signal processing and interpretation techniques and allows understanding of the monitored signals, the environment's state and information representation. Sound Signals Acoustic signals, sound, can provide useful information related to a smart space and especially for the face-to-face scenario, the classroom conditions and participation of the teacher and the students. Aspects of interest include the number of speakers in the room, at any given time, considering two extremes: the teacher's monologue to the generalized dialogue between the students along with multiple intermediate states of students contributing to the module delivery with observations, questions and organized and controlled dialogues. 
In the following, we group the capabilities of sound processing in smart environments that can be associated with the needs of a monitored classroom, in an escalating complexity, and refer to the identified research efforts. Sound Identification: In [5] sound analysis is performed through structured statistical modeling. The Smart Ambient Sound Analyzer platform supports different ambient audio mining tasks (including audio classification and location estimation) extracting acoustic features from sound components (e.g., music, voice and background), and translates them into structured information. The system integrates mixture models and an SVM algorithm into a unified classification framework. Sound Detection is performed in [6], where the designed system allows for detecting sound events from input streams. The Sound/Speech discrimination functionality allows for discriminating speech from other sounds to extract voice commands, while the classification recognizes daily living sounds and the speech recognition applies speech recognition to events classified as speech. Voice Activity Detection: Voice activity detection allows classification of input signal frames based on feature estimation in two classes: speech activity and nonspeech events (pauses, silence, or background noise). This separation has been performed in another application domain (related to movies) in [7]. Speech Loudness and Clarity characterization: The environment noise level in classrooms has been correlated with the intensity and quality of the teachers' voice [8] using the Pearson Correlation Test. The mean of sound pressure level is calculated in specific locations in the classrooms. Voices were classified according to the GRABSI parameters including grade, roughness, asthenia, breathiness, strain and instability. Sound Localization: More elaborate sound analysis scenarios involve the localization of the sound (i.e. to locate sound sources in space) so that the sound generation context can be understood in a more meaningful way and the identification of sound sources can be confronted. The human binaural auditory system is capable of analysis of auditory scenes and sound localization, in a classroom environment it is a challenging task to localize the student that posed a question without Line of Sight (e.g. if the teacher is writing on the board). The mechanism of localization is well explained in physiology in terms of interaural difference cues. The listener's head interrupt the sound path from the source to the far ear, resulting in a difference of pressure level and time of arrival (or phase) between the two ears, with neurons being able to measure such differences [9]. According to [10], the existing Sound Source Localization technologies can be categorized into three groups: Time delay estimation (TDE), beamforming method and machine learning methods. TDE is enhanced using multichannel crosscorrelation-coefficient algorithm to exploit spatial and temporal information among multiple microphones in [11]. The measurement of the active sound sources and the estimation of their corresponding DOAs is achieved in [12] by representing the observed signals in Time-Frequency zones where each source is dominant from the others. In [13] sound localization is performed based upon spatially distributed sensors calculating Time Differences of Arrival (TDOAs) between microphones of the same sensor. 
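To make the time-delay-estimation approach concrete, the following Python sketch estimates the TDOA between two microphone channels using the generalized cross-correlation with phase transform (GCC-PHAT) weighting that several of the cited systems rely on; the signals and sampling rate are synthetic placeholders, not data from those works.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Return the estimated TDOA (seconds) of `sig` relative to `ref`."""
    n = len(sig) + len(ref)
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    R /= np.abs(R) + 1e-15                      # PHAT weighting: keep phase, discard magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))  # reorder lags to [-max_shift, +max_shift]
    return (np.argmax(np.abs(cc)) - max_shift) / float(fs)

# synthetic example: the second channel receives the same signal 5 samples later
fs = 16000
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs) + 0.01 * np.random.randn(fs)
y = np.roll(x, 5)
tdoa = gcc_phat(y, x, fs)                       # approximately 5 / 16000 s
```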
In [14] also sound localization is realized using TDE but the novelty here is the usage of arbitrarily shaped non-coplanar microphone arrays. In [15] a microphone array sensor has also been deployed, so that through the analysis of the signals to detect the number of speakers within a moving windowbased time interval. The system computes the TDOA (Time Difference of Arrival) between each microphone pair by using the received acoustic signal patterns and identifying the time lag between each signal pair. In order to confront the room reverberations and background noises, the system uses generalized cross correlation with phase transform (GCC-PHAT). The system locates the direction of the speakers, and uses this information to detect the number of speakers in the target environment. The beamforming method follows subspace approaches (exploiting the orthogonality between signal and noise subspaces) and beamscan approaches localize the array signals into one direction (such as the Steered Response Power Phase Transform) [16], [17], [18], [19]. Machine learning approaches are supervised learning methods, including support vector machine [13], multilayer perceptron neural network [20] and Gaussian mixture model [21]. Physical Room Acoustic Feature Modeling: Given the challenges in estimating the accuracy of direction of arrival (DOA) in indoor environments due to echo, multipath propagation and spectral distortions as well as due to the fact that noise sources may have similar spectral characteristics with the signal monitoring, the study and modeling of the space acoustic features may improve the DOA estimation. The acoustic features of the physical rooms can be pre-studied before localization. This kind of data driven training methods can be more effective especially when the environment is too complex to be modeled. Parameters to identify -Situations to understand: In the following table, we correlate the aforementioned sound analysis results with parameters related to the classroom activities. The main challenges in such environments include the existence of multiple people (teacher and students) that can be potential sound sources with different vocal characteristics, the echoing and the reverberation. Video Signals Video capturing and processing can unobtrusively retrieve indications produced by the students and the teachers during the module delivery. Such features may span from presence to activity and expression recognition. In the following we group the capabilities of video processing in smart environments that can be associated with the needs of a monitored classroom. Human face detection and identification Human detection and identification can be some of the first objectives of image capturing and analysis. In [22] and [23] a rotating camera is positioned centrally in the front of the classroom and captures frontal images from the students. Histogram normalization allows for contrast enhancement in the spatial domain and median filtering allows removal of noise. Skin classification allows separating pixels related with skin (making them white). For face detection, Haar classifiers are used. The detected faces are compared with the database and when a face is recognized the attendance is marked on the server. In [24], one camera retrieves the occupied seats and another captures the images of student's face. Facial expression recognition Recognition of the students' emotions belongs to the objectives of [25], [26] which aim at recognizing facial expressions and head gestures. 
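A minimal OpenCV sketch of the face-detection pipeline outlined above (histogram normalization, median filtering, and Haar-cascade detection) could look as follows; the image path and detector parameters are illustrative and are not taken from the cited systems.

```python
import cv2

# the frontal-face cascade ships with OpenCV; the image path is a placeholder
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
frame = cv2.imread("classroom_frame.jpg")       # hypothetical captured frame

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)                   # histogram normalization for contrast enhancement
gray = cv2.medianBlur(gray, 3)                  # median filtering to remove noise

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(24, 24))
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
# detected regions would then be matched against an enrolment database to mark attendance
```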
Based on Ekman's analyses [27] and using a facial expression recognition system [28] it is attempted to infer mental states from a video stream of facial events in real-time. The system, capturing at 30 fps, locates and tracks 24 feature points on the face and uses motion, shape and color deformations to identify facial and head movements (e.g. head pitch, lip corner pull) and communicative gestures (e.g. head nod, smile). The work in [29] detects facial expressions connected with frustration. The red-eye effect is used to track pupils and Hidden Markov Models (HMMs) detect head nods, shakes and blinks. Support vector machine (SVM) is used to compute the two opposite expressions of smiles or fidgets. Gaze recognition In [30] and [31] a system of cameras are directed towards the students' audience. Using Pointing'04 database [32] faces are detected using a probabilistic detection of skin chrominance by normalizing the red and green components of the RGB color vector by the intensity (R+G+B). Face position and thus gaze destination are estimated using a zeroth order Kalman Filter which permits the process to be focused on the face region. Head orientation and gaze estimation up to the distance of 4 meters away from the black-board was effectively captured by re-training a face detector and developing additional testing data-sets. A mobile eye-tracker is also needed (possibly worn by the teacher) which may lead to distraction for both the teacher and the students. Posture and gesture recognition Human posture and gesture perception supports the understanding of human behavior and attitude. These features are extracted in [33] during human-machine interaction. To achieve real-time response, the upper body part is analyzed as it projects information about many human activities. The body pose estimation is analyzed in two parts: first skin color is used to track head and hand blobs and then a silhouette is shaped from the lengths of the shoulders and the neck. The 3-D movements of head and hands are tracked using multi-view input. This approach presupposes that the person stands in front of the camera with straight arms and facing forwards at the beginning of each session in order to construct a body model for initialization. An investigation of building spatio-temporal models of human actions that can support categorization and recognition of simple action classes is conducted in [34]. Action recognition is realized in two stages: the characteristics of motion are extracted from visual input and then these characteristics are classified into action classes. This approach considers variations in viewpoints around the central axis of human body and proposes a representation based on Fourier analysis of motion history volumes in cylindrical coordinates. 3-D motion descriptors have been extracted that support meaningful categorization of simple action classes. Fusing of facial expression and body gesture is suggested in [35], considering that many of the spatial-temporal features detected are similar. Combining visual channels of facial expression and body gesture is a potential way to accomplish effective affect analysis [36]. Two cameras are used for capturing the facial expression and the body movements. The Canonical Correlation Analysis (CCA) is used in order to find pairs of base vectors (i.e. canonical factors) for two variables such that the correlations between the projections of variables onto these canonical factors are mutually maximized. 
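The chrominance normalization used for skin detection in the gaze-recognition work above (red and green components divided by the intensity R+G+B) can be expressed directly in NumPy; the threshold bounds below are illustrative placeholders rather than values reported in the cited studies.

```python
import numpy as np

def normalized_rg(image_rgb):
    """Return the normalized r and g chromaticity planes of an RGB image."""
    img = image_rgb.astype(np.float64)
    intensity = img.sum(axis=2) + 1e-9          # R + G + B per pixel
    return img[..., 0] / intensity, img[..., 1] / intensity

def skin_mask(image_rgb, r_bounds=(0.36, 0.47), g_bounds=(0.26, 0.35)):
    # bounds are illustrative; a probabilistic skin model would replace these hard thresholds
    r, g = normalized_rg(image_rgb)
    return ((r > r_bounds[0]) & (r < r_bounds[1]) &
            (g > g_bounds[0]) & (g < g_bounds[1]))
```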
Human activity recognition In [30], [31] a camera, placed at the back of the classroom, captures teacher's actions and slight changes in the scene. Major body movement and gesticulation are recognized due to the long distance. The tracker/detector [37] used to track teacher's motion in 1D horizontal location in front of the blackboard. The tracker estimates the object's motion between consecutive frames and the detector treats every frame as independent and performs full scanning of the image to localize all appearances that have been observed and learned in the past in order to generate training examples and thus to avoid same errors in the future. 4.6 Modeling of environment In [38] human action is recognized from video input in an environment for which prior knowledge is available and the possible actions are pre-categorized (e.g. entering/exiting the scene, standing up, sitting down, using a computer terminal). Before detecting activities, statistical information about monitored area is computed. This reduces the effect of lighting conditions and other factors that change over time. The prior knowledge of the area and the limitation of under consideration actions simplify the actions' determination. The main challenges in such environments include the existence of multiple people (teacher and students) that can be potential sources of movements. Furthermore the complexity of actions, the proximity of people and the difficulty to detect a person if there is no clear view. These challenges are partially confronted with multiple cameras (multi-view), increased resolution and the fusion -based enhancements. Conclusion and Future Work In this paper, we have surveyed applications based on audio and video analysis able to detect and recognize human behaviours and subsequently to predict and enhance the imminent procedures and activities. As discussed we have focused on the education environment for face-to-face module delivery but the research efforts identified come from other applications domains (including the smart spaces). Our focus has been on sound and video and the main restriction has been the unobtrusive nature. For both areas a set of methodologies have been identified, while the achieved results have been correlated with behaviours and features met in the classroom. For audio analyses, we have identified research works and methodologies, which recognize speech from other sounds; detect dialogue; recognize the number of speakers; estimates the level of noise in a room; analyzes the intensity and clarity of the voice and localize the sound source. For video analyses, the identified approaches concentrate on the extraction of diverse human activities from human presence to facial expression recognition, posture and gesture recognition and activity recognition. The maturity and the results presented in the sources identified allow confidence that this field can be further developed through the consolidation of such methods. On the other hand, we recognize that the classroom environment is highly challenging, due to the co-existence of multiple people acting as signal (sound and movement) sources. Furthermore behaviour is not strictly formulated in such environments and unexpected (or undefined) activities may take place. In this view, systems should recognize concurrent activities from a person rather than focusing on a single activity each time. 7 Authors Andreas Papadakis (Dr-Ing.) 
holds degree and PhD from the Department of Electrical and Computing Engineering in NTUA (National Technical University f Athens). He is an Associate Professor in the department of Electrical and Electronics Engineering Educators in the School of Pedagogical and Technological Education (ASPETE), Athens, Greece. He has published more than 50 papers in refereed scientific journal and proceedings of International conferences and he has participated in more than 10 RTD European projects in the area of innovative Internet and telecommunications services. His research work has more than 200 citations and he is a regular reviewer of international journals. Eleni Tsalera is Electronics Engineer with Master Diploma in Electronics and Radioelectrology; she is currently Ph.D. candidate in University of West Attica at the Department of Informatics and Computer Engineering. She also works as research associate in School of Pedagogical and Technological Education (ASPETE) in Athens (Greece) since 2009. Maria Samarakou (Dr-Ing.) holds degree and PhD from the Department of Physics in National and Kapodistrian University of Athens. She is a Professor at the department of Informatics and Computer Engineering, University of West Attica and Director of Laboratory of Educational Technology and e-Learning Systems. Her research work has contributed to the design of educational environments, intelligent tutoring systems, artificial intelligence, energy management, web based education and computer science education. She has undertaken more than 20 National and European projects in research and technology development as coordinator/project manager or main researcher. She has published more than 100 papers in refereed scientific journal and proceedings of International and National congresses on topics in the field of simulation, optimization, expert systems, artificial intelligence and educational technology. She has more than 400 citations in scientific articles and she is Reviewer in various international scientific journals and conferences.
A Low-Normal Free Triiodothyronine Level Is Associated with Adverse Prognosis in Euthyroid Patients with Heart Failure Receiving Cardiac Resynchronization Therapy Summary Thyroid dysfunction is prevalent in patients with heart failure (HF) and hypothyroidism is related to the adverse prognosis of HF subjects receiving cardiac resynchronization therapy (CRT). We aim to investigate whether a low-normal free triiodothyronine (fT3) level is related to CRT response and the prognosis of euthyroid patients with HF after CRT implantation. One hundred and thirteen euthyroid patients who received CRT therapy, without previous thyroid disease or any treatment affecting thyroid hormones, were enrolled. All patients were evaluated for cardiac function and thyroid hormones (serum levels of fT3, free thyroxine [fT4] and thyroid-stimulating hormone [TSH]). The end points were overall mortality and hospitalization for HF worsening. During a follow-up period of 39 ± 3 weeks, 36 patients (31.9%) died and 45 patients (39.8%) were hospitalized for HF exacerbation. A higher rate of NYHA III/IV class and a lower fT3 level were both observed in the death group and the HF event group. Multivariate Cox regression analyses disclosed that a low-normal fT3 level (HR = 0.648, P = 0.009) and CRT response (HR = 0.441, P = 0.001) were both independent predictors of overall mortality. In addition, they were both related to the HF re-hospitalization event (P < 0.01 for both). Patients with fT3 < 3.00 pmol/L had a significantly higher overall mortality than those with fT3 ≥ 3.00 pmol/L (P = 0.027). Meanwhile, a higher HF hospitalization event rate was also found in patients with fT3 < 3.00 pmol/L (P < 0.001). A low-normal fT3 level is correlated with worse cardiac function and an adverse prognosis in euthyroid patients with HF after CRT implantation. (Int Heart J 2017; 58: 908-914) Cardiac resynchronization therapy (CRT) is an established treatment for patients with drug-refractory heart failure (HF) and electromechanical dyssynchrony. 1,2) Though CRT improves heart failure symptoms and quality of life, and reduces both HF-related morbidity and mortality, non-response to CRT has been reported in nearly one third of patients. 3) The reasons for non-response to CRT are complex and remain a matter of debate. Thyroid hormones (THs) have cardiac and vascular effects, and they also regulate biochemical reactions in most tissues. 4) TH consists of thyroxine (T4) and triiodothyronine (T3), and the cardiac myocyte does not convert T4 to T3. In other words, T3 is the bioactive form of thyroid hormone for cardiomyocytes and plays an important role in cardiovascular regulation. Many studies have confirmed that T3 is a prognostic predictor or risk stratification marker in HF. [5][6][7] Recently, fT3 level was shown to be associated with cardiac function and heart structure in euthyroid subjects without HF. 8) In addition, another study reported that hypothyroidism was associated with a worse prognosis after CRT implantation, but it did not supply values for the complete thyroid panel (fT3 and fT4). 9) Is the prognosis of patients with HF after CRT implantation correlated with lower fT3 levels within the normal reference range? If so, what is the relationship among THs, CRT response and prognosis?
Thus, we aimed to understand whether changes in fT3 levels within the normal reference range could affect the prognosis in a group of HF subjects with CRT implantation, and to investigate the relationship between THs and CRT response. Methods Study population: The original cohort consisted of 138 consecutive HF patients who received CRT implantation in the Sun Yat-sen Memorial Hospital of Sun Yat-sen University, from August 2007 to August 2013. All the devices were implanted according to a combination of the 2008 and 2010 ESC guidelines: New York Heart Association (NYHA) functional classification II/III/IV, left ventricular ejection fraction (LVEF) ≤ 35% and sinus rhythm with QRS duration ≥ 120 ms, under optimal medical therapy. Euthyroid status was defined as all thyroid hormones within the normal reference range. Exclusion criteria included fT3 levels beyond the reference range; therapy with amiodarone, thyroid hormone (TH), glucocorticoids, or antithyroid medication within the last month; previous radioiodine treatment or thyroid surgery; interventional or surgical procedures performed within the last 3 months; acute myocardial infarction within the previous 3 months; and loss to follow-up or missing data. Study design: Clinical data included clinical status (e.g., body mass index [BMI], previous hypertension, diabetes, hyperlipidemia, previous heart disease or stroke, NYHA class), electrocardiogram, echocardiogram and blood sample tests before CRT implantation. Baseline renal function was determined by the estimated glomerular filtration rate (eGFR). eGFR was estimated with the abbreviated Modification of Diet in Renal Disease (MDRD) equation: eGFR (mL/minute) = 186.3 × (serum creatinine)^(-1.154) × (age)^(-0.203) × (0.742 if female). This study was approved by the Hospital's ethical committee and informed consent was given by all enrolled patients. Blood chemistry and assays: Fasting blood samples were drawn from each patient during the first two days of admission. Serum fT3, fT4 and TSH levels were measured by Immulite 2000 (Bio DPC, Los Angeles, USA). The reference intervals of our laboratory are as follows: fT3 1.84-7.39 pmol/L, fT4 8.36-29.6 pmol/L and TSH 0.3-4.5 mIU/mL. The patients were further divided into a low-normal fT3 group (fT3 1.84-3.00 pmol/L) and a high-normal fT3 group (fT3 3.01-7.39 pmol/L) according to the lower quartile of fT3. Creatinine was measured by an automatic biochemistry analyzer (Hitachi 7600, Japan). CRT implantation: Before CRT implantation, all patients received coronary angiography (CAG) to identify coronary artery disease (CAD). CAD was defined as stenosis of more than 50% in at least one of the three major coronary arteries. An ischemic etiology of HF was determined when CAD was diagnosed. When CRT was implanted, a coronary sinus venogram was performed first, and the LV pacing lead was inserted through the coronary sinus and placed in the lateral or posterolateral vein. Both atrial and right ventricular leads were implanted conventionally, and all leads were connected to a dual-chamber biventricular implantable pacemaker. Follow-up and end points: All patients were followed up via telephone or medical records in our hospital every 1 to 6 months after CRT implantation until the end point. Optimal medical treatment was maintained in all patients. Patients were classified as responders when the LVEF increased by more than 5%, and as non-responders if this criterion was not satisfied.
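For reference, the abbreviated MDRD formula quoted above is easy to evaluate programmatically. The snippet below is only an illustrative sketch, not the authors' software: the function name and example inputs are hypothetical, and serum creatinine is assumed to be expressed in mg/dL, the unit this form of the equation expects.

```python
def egfr_mdrd(serum_creatinine_mg_dl: float, age_years: float, female: bool) -> float:
    """Abbreviated MDRD estimate of eGFR (mL/minute), using the coefficients
    quoted in the Methods: 186.3, exponent -1.154 on creatinine, exponent
    -0.203 on age, and a factor of 0.742 for women."""
    egfr = 186.3 * serum_creatinine_mg_dl ** -1.154 * age_years ** -0.203
    return egfr * 0.742 if female else egfr

# Example: a 65-year-old woman with serum creatinine 1.1 mg/dL
print(round(egfr_mdrd(1.1, 65, female=True), 1))
```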
The primary end-point was overall mortality and the secondary end-point was hospitalization for HF worsening. Hospitalization for HF worsening was defined as admission of a patient with worsening signs and/or symptoms of HF, including dyspnea, peripheral edema, and/or congestion on the chest radiograph, and the need for treatment with intravenous diuretics or an increase in oral diuretics. Statistical analysis: Continuous data with a normal distribution were presented as mean ± SD, continuous data with a non-normal distribution were presented as median (interquartile range [IQR]), and dichotomous data were presented as numbers and percentages. The independent-samples t test was used to compare normally distributed data, and the Mann-Whitney U test was used to compare non-normally distributed data. The chi-square (χ2) test or Fisher's exact test was used for dichotomous variables. To explore the predictors of overall survival and of the hospitalization event, multivariable Cox regression models with a forward stepwise approach were constructed with age, sex, etiology of HF, NYHA class, LVEF, QRS width, NT-proBNP, fT3, fT4, TSH and response to CRT as predictive variables. All analyses were performed with PASW Statistics for Windows, version 18.0 (SPSS Inc, Chicago, Illinois). The level of statistical significance was a two-sided P-value < 0.05. Results Among the original 138 patients, 15 patients treated with amiodarone (9 patients), thyroid hormone (3 patients) or anti-thyroid medication (3 patients) during the follow-up period and 10 patients who were lost to follow-up were excluded. Finally, 113 patients (85 male and 28 female) completed the entire study. All of the patients were under biventricular pacing after CRT implantation, with 65 CRT responders (57.5%) and 48 non-responders (42.5%). Except for LVEF (29.9% versus 32.9%, P = 0.038), there were no significant differences between the characteristics of CRT responders and non-responders. During a follow-up period of 39 ± 3 weeks, 36 patients (31.9%) died, including 30 (83.3%) from cardiac causes, and 45 (39.8%) were hospitalized for HF worsening (Table I). The death group had a higher rate of NYHA III/IV class (80.6% versus 50.6%, P = 0.004) and a lower fT3 level (3.35 pmol/L versus 4.06 pmol/L, P < 0.001) than the survival group. Compared with the event-free group, the hospitalization event group had a significantly higher rate of NYHA III/IV class (77.8% versus 48.5%, P = 0.003) and a lower fT3 level (3.15 pmol/L versus 4.11 pmol/L, P < 0.001). The relationship between thyroid hormone levels and other parameters of cardiac function: Spearman's correlation was used to analyze the relationship between different levels of THs and other parameters of cardiac function (Table II). When we compared TH levels in patients with different NYHA classes, a lower fT3 was found in NYHA III/IV patients compared with those with NYHA II (r = -0.254, P = 0.007). However, no difference was detected between NYHA classes in terms of fT4 and TSH. After adjusting for potential confounders (age, sex, hypertension, diabetes mellitus), fT3 showed a significant correlation with BMI (r = 0.207, P = 0.028) and eGFR (r = 0.321, P = 0.001). On the other hand, fT4 and TSH had no significant relationship with BMI and eGFR. Besides, none of the THs was associated with CRT response. In the multivariate Cox regression analysis, a lower fT3 level and response to CRT (HR = 0.441, 95% CI 0.221-0.880, P = 0.020) remained independent predictors of overall mortality.
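To make the survival analysis described above more concrete, here is a small, purely illustrative sketch in Python using the `lifelines` package (the original work used PASW/SPSS). The data frame, column names and values are invented toy data; the point is only to show the shape of a multivariable Cox model and of the dichotomised Kaplan-Meier / log-rank comparison used in the paper.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# Toy data: follow-up time, death indicator, baseline fT3 and CRT response
df = pd.DataFrame({
    "weeks":     [12, 39, 40, 22, 41, 35, 30, 42, 25, 38],
    "death":     [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],
    "ft3":       [2.8, 4.1, 3.9, 3.7, 2.7, 3.1, 3.6, 4.0, 2.9, 3.8],
    "responder": [0, 1, 1, 0, 1, 1, 0, 1, 0, 1],
})

# Multivariable Cox model: hazard ratios appear as exp(coef) in the summary
cph = CoxPHFitter()
cph.fit(df, duration_col="weeks", event_col="death")
cph.print_summary()

# Dichotomise at the lower quartile of fT3 and compare survival between groups
cut = df["ft3"].quantile(0.25)
low, high = df[df.ft3 < cut], df[df.ft3 >= cut]

km = KaplanMeierFitter()
km.fit(low["weeks"], event_observed=low["death"], label=f"fT3 < {cut:.2f}")
# km.plot_survival_function() would draw the curve for this subgroup

res = logrank_test(low["weeks"], high["weeks"],
                   event_observed_A=low["death"], event_observed_B=high["death"])
print(res.test_statistic, res.p_value)  # chi-square statistic and two-sided p-value
```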
On the other hand, a lower fT3 level (HR = 0.533, 95% CI 0.402-0.705, P < 0.001), but not fT4 or TSH, was an independent predictor of HF readmission, as was CRT response (HR = 0.425, 95% CI 0.229-0.787, P = 0.007). Therefore, the prognosis of patients with CRT implantation was significantly associated with a lower fT3 level before CRT implantation and with CRT response. All patients were grouped into two subgroups according to the lower quartile (3.00 pmol/L) of the fT3 level: a low-normal fT3 group (fT3 1.84-3.00 pmol/L) and a high-normal fT3 group (fT3 3.01-7.39 pmol/L). The overall survival rate and event-free survival of the two subgroups were compared using Kaplan-Meier curves with the log-rank test. Figure A showed that patients in the low-normal fT3 group had a significantly lower overall survival rate than the high-normal group (log-rank test, χ2 = 4.896, P = 0.027). Meanwhile, a lower event-free survival rate was also observed in the low-normal fT3 group (log-rank test, χ2 = 17.726, P < 0.001, Figure B). Figure. Kaplan-Meier survival curves showing the differences in overall survival (A) and event-free survival rate (B) between those with lower-normal fT3 levels (fT3 < 3.00 pmol/L) and higher-normal fT3 levels (fT3 ≥ 3.00 pmol/L). Discussion As is well known, CRT has proved to be beneficial for LV dysfunction with electromechanical dyssynchrony. 10) However, mortality still remains high due to frequent comorbidities. 11) Hypothyroidism and "low T3 syndrome," both of which involve a decreased fT3 level, are frequent comorbidities resulting in a poor prognosis for patients with HF. 5,12) Our study demonstrated that even a low-normal level of fT3 (below the lower quartile) could influence the prognosis of HF patients after CRT implantation. This was an interesting result, suggesting that fT3 might be a sensitive biomarker for the prognosis of HF patients receiving CRT. There was a high prevalence of abnormal thyroid hormone metabolism in patients with advanced HF. The most prominent abnormality was a decrease in T3 levels due to a diminished conversion of T4 to T3. 13,14) Our study also confirmed that a lower fT3, rather than fT4 or TSH, was associated with the prognosis, which is in line with most published results. One reason for this phenomenon might be that cytokines such as tumor necrosis factor, IL-1 and IL-6 have been implicated in the pathogenesis of the low-T3 syndrome in HF patients, through reduced peripheral conversion of T4 into T3 and inhibition of 5'-deiodinase activity. 15) Hepatic congestion caused by volume overload in the setting of right ventricular dysfunction was another contributing factor to the decreased hepatic conversion of T4 to T3. 16) Several potential mechanisms might explain the association between low serum fT3 levels and higher mortality and HF exacerbation rates in euthyroid subjects. Firstly, a lower fT3 level was more frequently seen in NYHA III/IV class patients, and the fT3 level was negatively correlated with both NYHA class and NT-proBNP in our study. This suggests that the fT3 level is correlated with the severity of HF in patients receiving CRT, in agreement with previous studies. [17][18][19][20] A low fT3 level in HF reduces metabolic demand, which can be seen as an adaptive process in HF. 21) However, a persistently low fT3 represents a maladaptive mechanism favoring structural and functional cardiac remodeling, which has a key role in the pathogenesis of HF.
22) Secondly, low fT3 was correlated to deteriorative hemodynamic status and had been proved in catheterization-based studies. 18,19) Worse hemodynamic status and low ejection fraction were clearly associated with low fT3 level. Low fT3 level reduced ejection fraction, and further caused enlarged heart chambers, lower velocity in the left atrial appendage and severe mitral regurgitation, which finally reduced cardiac systolic function. Thirdly, low-normal fT3 could affect several arteriosclerotic risk factors, such as diabetes mellitus, 17) insulin resistance, 20) serum lipid levels 23) and artery stiffness, 24) all of them above had been proved to have adverse effects on progress of HF. Fourthly, low fT3 status was independently correlated to peak oxygen uptake and functional exercise capacity in severe HF, which made recurrence of HF more frequent and remission period shorter than those with normal range of fT3. 25) As for the euthyroid population without HF, fT3 is also associated with heart rate and echocardiographic heart function and structure. Roef et al found that fT3 level was positively associated with the peak velocity of systolic mitral annulus and late ventricular filling, whereas negatively with left ventricular end-diastolic diameter. 8) It suggested that euthyroid subjects with low-normal fT3 levels had an increased risk to suffer relatively reduced heart function, decreased fT3 level within the low-normal range might be associated with the severity of cardiac insufficiency. On the other hand, CRT can improve fT3 levels after a reverse of cardiac remodeling. Celikyurt et al reported that the fT3 levels increased from 2.67 pg/mL to 2.97 pg/mL and the fT3/fT4 ratio increased from 1.81 to 2.34 in the reverse remodeling group (P < 0.05 for both), which indicated that the increase of fT3 levels after CRT device implantation might be a potential biomarker for identification of CRT response. 26) Despite adverse outcome of HF with a lower level of fT3, the relationship between fT3 level and CRT response remained unclear. Sharma AK found that hypothyroidism was related to adverse outcome for CRT patients, but there was no significant difference between echocardiographic responses to CRT implantation in hypothyroid subjects compared with patients with euthyroid. 9) Our results also showed that there was no correlation between thyroid hormones (fT3, fT4 and TSH) before CRT implantation and CRT response. Therefore, we thought that adverse outcomes might result from decrease in fT3, which did not affect CRT response. Baseline thyroid hormones before CRT implantation and the changes of thyroid hormones after CRT implantation should be studied together, and further large sample studies could be carried out to deal with the complicated relationship. This study had some limitations. Firstly, the sample size of this study was relatively small, which might increase the sampling error and reduce the statistical power. Secondly, the thyroid hormones were only measured at the time of admission before CRT device implantation, but not at the period of follow-up, whereas it could be affected by many confounders. The changes of thyroid hormones may reflect the real effect of CRT. Thirdly, the span of years in enrolling was large, and the periods of follow-up were significantly different. Conclusions Our study demonstrates that low-normal fT3 level was found to be associated with poor prognosis of HF patients receiving CRT. 
HF patients receiving CRT with low-normal fT3 levels had a higher mortality rate than patients with high-normal fT3 levels, and were also more likely to be re-hospitalized for HF. A low-normal fT3 level may affect the response to CRT through its association with the HF process. Conflicts of interest: The authors report no relationships that could be construed as a conflict of interest.
2018-04-03T02:32:04.092Z
2017-11-17T00:00:00.000
{ "year": 2017, "sha1": "1a51e80bf0c4c435a30acc1871b811fceb0c3616", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/ihj/58/6/58_16-477/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "b9817871de0979816a2a925e7f2b5ec2406d1422", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
249183780
pes2o/s2orc
v3-fos-license
The Effect of Syringic Acid and Phenoxy Herbicide 4-chloro-2-methylphenoxyacetic acid (MCPA) on Soil, Rhizosphere, and Plant Endosphere Microbiome The integration of phytoremediation and biostimulation can improve pollutant removal from the environment. Plant secondary metabolites (PSMs), which are structurally related to xenobiotics, can stimulate the presence of microbial community members, exhibiting specialized functions toward detoxifying, and thus mitigating soil toxicity. In this study, we evaluated the effects of enrichment of 4-chloro-2-methylphenoxyacetic acid (MCPA) contaminated soil (unplanted and zucchini-planted) with syringic acid (SA) on the bacterial community structure in soil, the rhizosphere, and zucchini endosphere. Additionally, we measured the concentration of MCPA in soil and fresh biomass of zucchini. The diversity of bacterial communities differed significantly between the studied compartments (i.e., unplanted soil, rhizospheric soil, and plant endosphere: roots or leaves) and between used treatments (MCPA or/and SA application). The highest diversity indices were observed for unplanted soil and rhizosphere. Although the lowest diversity was observed among leaf endophytes, this community was significantly affected by MCPA or SA: the compounds applied separately favored the growth of Actinobacteria (especially Pseudarthrobacter), while their simultaneous addition promoted the growth of Firmicutes (especially Psychrobacillus). The application of MCPA + SA together lead also to enhanced growth of Pseudomonas, Burkholderia, Sphingomonas, and Pandoraea in the rhizosphere, while SA increased the occurrence of Pseudomonas in leaves. In addition, SA appeared to have a positive influence on the degradative potential of the bacterial communities against MCPA: its addition, followed by zucchini planting, significantly increased the removal of the herbicide (50%) from the soil without affecting, neither positively nor negatively, the plant growth. INTRODUCTION The presence of plant secondary metabolites (PSMs) such as flavonoids, coumarins, phenolic compounds, and terpenes, is known to modify the chemical and physical properties of soils. PSMs also serve as allelochemicals, protect plants against pathogens, act as substrates, and can induce pollutant catabolic pathways of soil microorganisms (Pilon-Smits, 2005;Singer, 2006;Zhou and Wu, 2012;Pino et al., 2016). They also shape the structure and function of plant-associated bacterial communities, i.e., rhizobacteria and endophytes, favoring the growth of suitable pollutant consumers (Uhlik et al., 2013;Qin et al., 2014;Eevers et al., 2016), which directly use their degradative capabilities to metabolize a given pollutant (Glick, 2010;Pawlik et al., 2015;Sauvêtre and Schroder, 2015;Khare et al., 2018;Wu et al., 2018). Thus, the role of PSMs in the degradation of pollutants can be substantial. Due to their structural similarity, PSMs may have a considerable influence on the removal of structurally related pollutants (Uhlik et al., 2013;Hu et al., 2014). PSMs typically provide the energy for microorganisms to perform cometabolism, while the structurally similar pollutant is degraded as a secondary substrate (Musilova et al., 2016). Common examples of cometabolites are biphenyl and PCBs (Singer et al., 2003). 
However, PSMs can also be used as a primary source of carbon and energy by bacterial communities to support their growth and stimulate the expression of desirable genes involved in the catabolism of structurally similar pollutant. The degree of similarity between a PSM and a pollutant has been found to influence the rate of pollutant removal (Mierzejewska et al., 2019;Urbaniak et al., 2019a). Earlier studies described syringic acid (SA) as a characteristic PSM for cucurbits (Blum et al., 2000;Campos et al., 2009;Kruczek et al., 2015;Shi et al., 2016), which are themselves known as effective phytoremediators of organic pollutants (Eevers et al., 2018;Urbaniak et al., 2019bUrbaniak et al., , 2020. The addition of SA to bacterial cultures not only enhanced MCPA removal but more importantly increased the number of detected functional genes responsible for the initiation of phenoxy herbicide biodegradation (Mierzejewska et al., 2019;Urbaniak et al., 2019a). Additionally, SA application and zucchini cultivation was found to decrease the toxicity of MCPA-contaminated soil (Mierzejewska et al., 2022). 4-chloro-2-methylphenoxyacetic acid (MCPA) is one of the most commonly used herbicides in Europe, and it is particularly persistent under the low winter temperatures, low soil organic carbon content, and acidic pH that are typical for Europe (Paszko et al., 2016). In areas with high contamination levels, heavy rainstorms facilitate the transport of MCPA residues from soil to water, resulting in the contamination of aquatic ecosystems (Matamoros et al., 2012;Rheinheimer dos Santos et al., 2020) to levels exceeding permissible threshold concentrations (Ignatowicz and Struk-Sokołowska, 2004;Rippy et al., 2017). These are believed to account for the adverse effects on living organisms reported in several worldwide studies (Pereira and Cerejeira, 2000;Nielsen and Dahllof, 2007;Palma et al., 2018;Morton et al., 2020). Despite the large body of research performed on the effects of MCPA on soil-and water-inhabiting organisms, little is known about its influence on soil and plant-associated bacterial communities. Also, little attention has been devoted to the role of PSMs in the remediation of MCPA-contaminated soil. While our prior research has demonstrated that SA treatment enhanced MCPA transformation (Mierzejewska et al., 2019;Urbaniak et al., 2019a), no studies have examined the effects of such treatment on the structure of plant-associated bacterial communities (rhizobacteria and endophytes). Consequently, this study aimed to determine whether adding SA, which is structurally like the herbicide, to unplanted and zucchini-planted soil contaminated with MCPA shapes the bacterial communities within the unplanted soil, rhizosphere, and plant endosphere (i.e., endophytic communities in roots and leaves). The obtained results could be applied to develop naturebased solutions for the removal of MCPA residues and to prevent their dispersal in the environment. Soil The potting soil was obtained from a certified supplier: Substral OSMOCOTE. Its composition was described as universal soil for plant growth, containing a mixture of peat, fertilizer, expanded clay, and silica and with the following properties: pH 4.3; 34.3% C, 2.2% N; 1.5 g P kg −1 soil, 2.5 g K kg −1 soil. This soil was selected based on previous experiments with cucurbits (Mierzejewska et al., 2022). 
Plants Based on previous investigations on the uptake of toxic organic pollutants by the cucurbits Wyrwicka et al., 2019;Mierzejewska et al., 2022), C. pepo L. "Atena Polka" (zucchini) was chosen for this study. The seeds were purchased from a certified supplier of garden seeds (W. Legutko) and were germinated under stable conditions in perlite for 5 days and grown in unamended potting soil for 7 days to select seedlings at the same growth stage. Experimental Setup Each treatment was prepared in six replicates, with one seedling (zucchini) per pot, resulting in a total of six plants for each "+" means treatment; "−" means no treatment. treatment variant. Unplanted soil variants were also included as an unplanted reference (Table 1). Soil humidity was adjusted to 60% v/w, and MCPA and SA were applied to the potting soil. The soil variants were kept for 1 h under the fume hood to allow the solvent to evaporate. Following this, the zucchini seedlings were planted into the potting soil. All variants were cultivated in 500 cm 3 soil pots, in a growth chamber at 23 ± 0.5 • C based on a 16 h light/8 h dark cycle, with a photon flux density of 250 µmol m −2 s −1 during the light period, and 60% w/v soil humidity . All variants were watered daily. The experiments were running for 20 days. Samples of unplanted soil and rhizosphere (rhizospheric soil and roots) soil were collected and stored at 4 • C for further analyses. The aboveground biomass of the plants (stems and leaves) was determined subsequently. The leaves were collected and stored at 4 • C for further analyses. MCPA Concentration in Soil The MCPA concentrations were determined in all Soil variants (unplanted and planted) at the beginning of the experiment and after 20 days (Us + MCPA; Us + MCPA + SA; Zu + MCPA; Zu + MCPA + SA). MCPA was determined according to the sample preparation method published by the European Commission (EURL-SRM, 2015, 2020. Briefly, 5.00 g ± 0.05 g of soil was weighed in a 50 ml Teflon centrifuge tube, and 10 ml of deionized water was added. Then, a 10 ml sample extraction solvent (1% formic acid in acetonitrile) was added. After shaking this mixture for 15 min using the QuEChERS Hand Motion Shaker (Eberbach model EL 680.Q.25 QuEChERS) with 450 osc., 4 g of magnesium sulfate and 1 g of sodium chloride were added, followed by shaking for 1 min. The suspension was centrifuged for phase Separation at 8,100 rpm for 5 min. The extract was then diluted (200 µl of extract with 700 µl of water, 50 µl of acetonitrile, and 50 µl of internal standard MCPA D6). This mixture was vortexed and filtered through a 0.22 µm PTFE directly into an amber HPLC vial. The MCPA concentrations in the extracts were determined by highly selective liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS-Agilent 1260 HPLC + 6460 Triple Quad LC/MS) (Agilent Technologies). The chromatography and mass spectrometry conditions are presented in Supplementary Tables 1, 2. The suitability of the method for analyzing MCPA residues in soil has been confirmed: the selectivity/specificity, limit of detection and quantification, linearity, precision, recovery, and expanded uncertainty of the method were validated. The obtained validation parameters met the criteria described in the document SANTE/12682/2019 (European Committee for Standardization, 2018). 
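As a back-of-the-envelope aid for interpreting the LC-MS/MS readings described above, the sample preparation implies a fixed scale factor between the concentration measured in the diluted vial and the concentration in soil. The sketch below is only an illustration of that arithmetic under simplifying assumptions (quantitative transfer of MCPA into the 10 mL acetonitrile phase, no recovery or internal-standard correction); it is not the validated quantification procedure.

```python
# Illustrative back-calculation from vial concentration to soil concentration,
# assuming MCPA is transferred quantitatively into the 10 mL acetonitrile phase
# and ignoring recovery/internal-standard corrections (a hypothetical simplification).
SOIL_MASS_G = 5.0                    # soil weighed into the extraction tube
EXTRACT_VOLUME_ML = 10.0             # acetonitrile extraction solvent
DILUTION_FACTOR = 1000.0 / 200.0     # 200 uL extract made up to 1000 uL in the vial

def soil_concentration_mg_per_kg(vial_mg_per_l: float) -> float:
    extract_mg_per_l = vial_mg_per_l * DILUTION_FACTOR
    mg_in_extract = extract_mg_per_l * EXTRACT_VOLUME_ML / 1000.0
    return mg_in_extract / (SOIL_MASS_G / 1000.0)

print(soil_concentration_mg_per_kg(0.5))   # 0.5 mg/L in the vial -> 5.0 mg/kg soil
```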
Molecular Analysis: Microbial Community Structure Characterization Rhizospheric Soil and Root Separation The rhizospheric soil and roots were separated in 15 ml Falcon tubes containing 2.5 ml sterile PBS-S buffer amended with Tween 0.2 ml L −1 to separate rhizospheric soil tightly attached to roots. Subsequently, the rhizospheric soil was shaken off on the platform for 20 min at 180 rpm. The roots were then transferred to Falcon tubes containing sterile 2.5 ml of PBS-S buffer, rinsed, and subjected to surface sterilization. Rhizospheric soil was centrifuged for 20 min at 3,500 rpm, the supernatant was discarded and the resulting pellet was defined as a rhizosphere compartment (Bulgarelli et al., 2012). Roots and Leaves Surface Sterilization To isolate the DNA of endophytic bacterial communities, the roots and leaves were subjected to surface sterilization. The tissues were first rinsed in sterile distilled water to wash off the bulk dust. Subsequently, tissues were incubated in 70% ethanol (1 min), NaOCl (1% roots, 2.5% leaves; 1 min), 70% ethanol (1 min) for surface sterilization. Afterward, the samples were rinsed three times in sterile dH 2 O. Finally, 100 µl of the third rinsing water was transferred to a Petri dish containing an undiluted 869 medium (Mergeay et al., 1985) to check for sterility. DNA Extraction Total DNA from bulk and rhizospheric soil was isolated according to the DNeasy PowerSoil Pro Kit (Qiagen, Venlo, Netherlands). The surface-sterilized plant tissues (roots and leaves) were lyophilized before DNA extraction and snap-frozen in liquid nitrogen. Endophytic DNA was isolated according to PowerPlant R Pro DNA Isolation Kit. DNA samples were quality checked using Nanodrop 2000 (Thermo Fisher Scientific, Wilmington, DE, United States) and stored at −20 • C. 16S rRNA Amplicon Sequencing of the Soil, Roots, and Leaves The isolated DNA was used for the 16S rRNA gene amplification. DNA was isolated from unplanted variants, rhizospheric soil, surface-sterilized roots, and leaves. The PCR program started with an initial denaturation stage (3 min at 98 • C), followed by a second denaturation (10 s at 98 • C). Following this, 30 cycles were performed of the following three steps: for root and leaf endophytes, PNA clamping (10 s at 75 • C), followed by annealing for V3V4 (30 s at 56 • C) and extension (30 s at 72 • C). The reaction was ended by a final 7 min extension at 72 • C. The amplified DNA was purified using AMPure XP beads (Beckman Coulter) and a MagMax magnetic particle processor (ThermoFisher, Leuven, Belgium). Subsequently, 5 µl of the cleaned PCR product was used for a second PCR attaching the Nextera indices (Nextera XT Index Kit v2 Set A; FC-131-2001, and D;FC-131-2004, Illumina, Belgium). For these PCR reactions, 5 µl of the purified PCR product was used in a 25 µl reaction volume and prepared following the 16S Metagenomic Sequencing Library Preparation Guide. The PCR conditions were the same as described above, but the number of cycles was reduced to 20, and a 55 • C annealing temperature was used. The PCR products were cleaned with the Agencourt AMPure XP kit, and then quantified using the Qubit dsDNA HS assay kit (Invitrogen) and the Qubit 2.0 Fluorometer (Invitrogen). Once the molarity of the sample was determined, the samples were diluted down to 4 nM using 10 mM Tris pH 8.5 before sequencing on the Illumina Miseq. The samples were sequenced using the Miseq Reagent Kit v3 (600 cycle) (MS-102-3003) and 15% PhiX Control v3 (FC-110-3001). 
For quality control, a DNA extraction blank and PCR blank were included throughout the process, and the ZymoBIOMICS Microbial Mock Community Standard (D6300) was used to test the efficiency of DNA extraction (Zymo Research). Bioinformatic Processing of Reads. The sequences were demultiplexed using Illumina Miseq software; they were subsequently quality trimmed and the primers were removed using DADA2 1.10.1 (Callahan et al., 2016) in R version 3.5.1. The parameters for length trimming were set to keep the first 290 bases of the forward read and 200 bases of the reverse read, maxN = 0, MaxEE = (2,5) and PhiX removal. Error rates were inferred and the filtered reads were dereplicated and denoized using the DADA2 default parameters. After merging paired reads and removal of chimeras via the removeBimeraDenovo function, an amplicon sequence variant (ASV) table was built and taxonomy assigned using the SILVA v138 training set (Quast et al., 2012;Yilmaz et al., 2014). The resulting ASVs and taxonomy tables were combined with the metadata file into a Phyloseq object (Phyloseq, version 1.26.1) (McMurdie and Holmes, 2013). The pollutants were removed from the dataset using the Decontam package (version 1.2.1) applying the prevalence method with a 0.5 threshold value (Davis et al., 2018). A phylogenetic tree was constructed using a DECIPHER/Phangorn pipeline as described previously (Murali et al., 2018). Data Visualization and Statistical Analyses The ASV table was further processed by removing organelles (chloroplast, mitochondria), and prevalence was filtered using a 2% inclusion threshold (unsupervised filtering) as described by Callahan et al. (2016). The unfiltered data were subjected to alpha-diversity metrics, such as observed ASV count, Simpson's and Shannon's diversity index, using scripts from the MicrobiomeSeq package. Hypothesis testing was performed using analysis of variance (ANOVA) and Tukey's HSD method using Statistica. For beta-diversity, the Bray-Curtis distances were calculated on unfiltered data using the vegan package (version 2.5.4), and the data were visualized using principal coordinate analysis (PCoA). Hypothesis testing was done by the Adonis function (vegan, version 2.5.5). To assess the homogeneity of variance, the Betadisper function (vegan, version 2.5.4) was used. Relative abundances were calculated and visualized in bar charts using Phyloseq. Differential abundance testing was done on unfiltered data using DESeq2 (version 1.22.2) (Bijnens et al., 2021). Hierarchical cluster analysis was performed to determine the differences between studied variants using PAST 4.0. All performed statistical tests were corrected for multiple testing, and p < 0.05 was considered statistically significant. MCPA Concentration in Soil The MCPA content in the soils was determined at the beginning of the experiment (T0) and after 20 days (Tf) of incubation (Figure 1). The mean concentration of MCPA at T0 was 6.71 ± 0.270 mg g −1 in the soil amended with MCPA (Us + MCPA) and 6.93 ± 0.156 mg g −1 in soil amended with MCPA and SA (Us + MCPA + SA), respectively. After 20 days (Tf), the MCPA concentration in the Us + MCPA variant decreased to 5.63 ± 0.708 mg g −1 indicating about 18% reduction. In the soil amended with SA (Us + SA), the concentration of MCPA diminished to 4.73 ± 0.784 mg g −1 , i.e., about 30% lower than at T0. The cultivation of zucchini in the MCPA-contaminated soil (Zu + MCPA) did not affect the MCPA concentration in the soil (7.77 ± 0.617 mg g −1 ). 
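The diversity measures named in the statistical-analysis section above can be illustrated with a few lines of Python (the study itself used the R packages cited there; the count vectors below are invented and serve only to show what the indices measure).

```python
import numpy as np
from scipy.spatial.distance import braycurtis

# Invented ASV count vectors for two samples (purely illustrative)
soil = np.array([120, 80, 40, 10, 5, 1], dtype=float)
leaf = np.array([5, 1, 0, 200, 40, 10], dtype=float)

def alpha_diversity(counts):
    present = counts[counts > 0]
    p = present / present.sum()          # relative abundances of detected ASVs
    return {
        "observed_asvs": int((counts > 0).sum()),
        "shannon": float(-(p * np.log(p)).sum()),    # natural-log Shannon index
        "inv_simpson": float(1.0 / (p ** 2).sum()),  # inverse Simpson (1/D)
    }

print(alpha_diversity(soil))
print(alpha_diversity(leaf))

# Bray-Curtis dissimilarity between the two samples (0 = identical, 1 = disjoint),
# the distance underlying the PCoA / beta-diversity clustering described above
print(braycurtis(soil, leaf))
```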
However, a substantial reduction of the MCPA concentration (∼50%) was observed in planted soil enriched with SA (Zu + MCPA + SA), where the final concentration of the herbicide was reduced to 3.53 mg g −1 . Fresh Biomass Plant fresh weight was significantly affected by the addition of MCPA ( Table 2). Significantly higher aboveground biomasses were observed for the zucchini grown in soils without MCPA than those grown in soil amended with MCPA ( Table 2). The same pattern was observed for fresh weights of leaves and stems measured separately, these values being significantly lower for variants amended with MCPA compared to unamended soil. The addition of SA alone had no significant effect on the plant's fresh weight. The Effect of Syringic Acid Application Without MCPA on Bacteria Community Structure The changes in alpha diversity are presented in Table 3. Calculated indices (InvSimpson; Obs.ASVs; Shannon) were slightly lower in Us + SA than in Bs. The amendment of soil with SA increased the InvSimpson/Obs ASVs in the rhizosphere (ZuRh + SA) and roots endosphere (ZuRo + SA), and slightly increased the Shannon index in the roots endosphere (ZuRo + SA). Higher InvSimspon scores were observed in unamended ZuLe than in the SA-amended variant ZuLe + SA. The addition of SA alone, without MCPA, influenced the composition of bacterial communities, both in soils and plant tissues (Figures 2A,B). A high abundance of Proteobacteria was observed in Us + SA (66.88%). However, the phyla Acidobacteriota, Gemmatimonadota, Planctomycetoota, Firmicutes, and Chloroflexi were not detected after the amendment of unplanted soil with SA (Us + SA). Firmicutes were also not detected in ZuRh + SA. After SA addition, the abundance of Proteobacteria in ZuLe + SA was three times lower (25.58%) than in ZuLe (77.81%), while the abundance of Actinobacteria was almost five times higher in ZuLe + SA (63.27%) than in ZuLe (14.67%). The relative abundance of certain genera was also influenced by SA (Table 4 and Figures 2C,D). In contrast to the unamended variant (Bs), Candidatus udaeobacter was not detected in the Us + SA. Devosia and Candidatus xiphinematobacter genera were not detected in the ZuRh + SA. In turn, the relative abundance of Mucilaginibacter was six times higher in ZuRh + SA (6.588%) than in ZuRh (1.000%). SA had significant effects on the presence of several genera, especially in the roots endosphere: the relative abundance of Dyella was three times higher in ZuRo + SA (3.966%) than in ZuRo (1.000%). Although the presence of the genera Ralstonia, Rhizobium, Massilia, Luteimonas, Klebsiella, Streptomyces, and Mucilaginibacter was confirmed in ZuRo, they were not detected in ZuRo + SA. SA exerted a particularly strong effect on the dominant genus in leaves endosphere (Figure 2D), where the abundance of Pseudarthrobacter was significantly higher (61.50%) than in other variants. SA lowered the prevalence of Paenibacillus in ZuLe + SA five times (1.000%) in comparison to ZuLe (5.786%). Also, the relative abundance of Paeniglutamicibacter was four times lower in ZuLe + SA (1%) than in ZuLe (4.284%). In turn, higher abundances of Pseudomonas (13.72%), Klebsiella (11.77%), Exiguobacterium MCPA caused similar changes in the structure of the bacterial communities as SA (Figures 3A,B). The most abundant phylum in Us + MCPA was Proteobacteria (69.26%). 
In Us + MCPA, the relative abundance of Acidobacteriota (1.695%), Crenarchaeota (2.366%), and Verrumicrobiota (2.851%) was approximately two times lower than in Us (being: 4.124, 5.563, and 2.851%, respectively). In contrast, the abundance of Patescibacteria was two times higher in Us + SA (4.644%) than in Us (2.256%). Proteobacteria also predominated the bacterial community in ZuRh + MCPA (61.20%). In contrast, the relative abundance of Actinobacteria in ZuRh + MCPA increased by a factor of two (2.135%) in comparison to ZuRh (1.000%). A three times lower abundance of Verrumicrobiota was observed for ZuRh + MCPA (1.966%) than in ZuRh (6.469%). The relative abundance of Actinobacteria and Firmicutes in ZuLe + MCPA increased after MCPA treatment (78.90 and 18.41%, respectively), in comparison to ZuLe (14.67 and 7.10%, respectively). In contrast, the abundance of Proteobacteria from ZuLe + MCPA (77.81%) was approximately thirty times lower than in ZuLe (2.443%). MCPA also changed the prevalence of individual genera (Table 4 and Figures 3C,D). The addition of MCPA to unplanted soil decreased the abundance of Candidatus udaeobacter by five times from 5.852% in Us + MCPA + SA to 1.000% in Us + MCPA. The genera Asticcacaulis, Burkholderia, and Candidatus xiphinematobacter were not detected in ZuRh + MCPA. In turn, the abundance of Methylophilus was six and half times higher in ZuRh + MCPA (6.510%) than in ZuRh (1.000%). Furthermore, higher values of the abundance of Rhodanobacter were observed for ZuRh + MCPA (20.52%) than in ZuRh (14.73%). In roots endosphere (ZuRo + MCPA), the abundances of Dyella and Salmonella (5.410 and 3.419%, respectively) were higher than in ZuRo (1.000 and 1.000%, respectively). In contrast, after the amendment of soil with MCPA Mucilaginibacter, Ralstonia and Rhizobium were not (Figures 3C,D): the abundance of Pseudarthrobacter was almost eight times higher in ZuLe + MCPA (78.09%) than in ZuLe (10.12%). Furthermore, the genus Paeniglutamicibacter was not detected in ZuLe + MCPA. The Effects of Simultaneous Application of MCPA + SA on the Bacterial Community Structure Statistically significant differences in alpha diversity were found between the studied compartments, particularly regarding the InvSimpson index (Table 3). For unplanted soil, the InvSimpson was slightly lower in Us + MCPA + SA than in Bs, and for roots endosphere, the index for ZuRo + MCPA + SA was lower than in ZuRo. However, in rhizosphere and leaves, the values were higher in the MCPA-treated variants than in untreated ones. In addition, a lower value of Obs. ASV was observed in ZuRh + MCPA + SA than in ZuRh. The simultaneous addition of MCPA and SA to the soil affected both the soil and endophytic bacterial communities (Figures 4A-D). The predominant phylum in Us + MCPA + SA was Proteobacteria (69.26%). In turn, in US + MCPA + SA Firmicutes, Planctomycetota, and Crenarchaeota were not detected. Proteobacteria was also the predominant phylum in the rhizosphere ZuRh + MCPA (61.20%). The prevalence of Bacteroidota in ZuRh + MCPA + SA (12.32%) was two times higher in ZuRh (25.23%) than in ZuRh + MCPA + SA. Also, the relative abundance of Proteobacteria in roots endosphere (ZuRo + MCPA + SA) was higher (71.14%) than the abundance of other phyla. The prevalence of Actinobacteria was two times higher in ZuRo (30.23%) than in ZuRo + MCPA + SA (15.79%). The phylum Firmicutes was not detected in ZuRo + MCPA + SA, whereas its presence was confirmed in ZuRo (3.41%). 
In turn, Firmicutes dominated in ZuLe + MCPA + SA. The relative abundance of Firmicutes in ZuLe + MCPA + SA (65.39%) was almost seven times higher than in ZuLe (7.097%). The abundance of Actinobacteria was two times higher in ZuLe + MCPA + SA (29.68%) than in ZuLe (14.67%). Bacterial Diversity in the Different Compartments Considerable variations were observed in the alpha diversities of unplanted soil, rhizosphere soil, roots, and leaves endosphere ( Table 3). The MANOVA analysis ( Slightly lower Inv. Simpson indices were observed in the rhizosphere, i.e., the highest value was found for ZuRh + MCPA (79.96) and the lowest for ZuRh (68.17). Lower diversity indices were observed in roots and leaves endosphere than in bulk and rhizospheric soil, ranging from 2.415 for ZuLe + MCPA to 39.01 for ZuRo + MCPA (Inv. Simpson), from 5.000 for ZuLe + MCPA to 94.33 for ZuRo + SA (observed ASV), and from 1.559 for ZuLe + MCPA + SA to 3.908 for ZuRo + MCPA (Shannon index). Furthermore, the beta-diversity analysis indicated clustering (Figures 2B, 3B, 4B) of samples from different compartments. The MANOVA analysis demonstrated significant compartment specificity of the bacterial and fungal community structure (p < 0.001). A compartment specificity was observed for the occurrence of specific genera irrespective of the treatment (Table 4). For example, the occurrence of Devosia, Sphingomonas, and Luteimonas was observed only in unplanted variants. Similarly, Asticaccaulis and Burkholderia were present in the roots endosphere of all variants. In turn, Rhodanobacter was observed in all variants of unplanted soil, rhizosphere, and roots endosphere, but not in the leaves endosphere. DISCUSSION Recent research has provided a comprehensive insight into the interactions between bacteria and plants in contaminated environments (Tardif et al., 2016;Thijs et al., 2018). It appears that the structure of bacterial communities in soils and plants is determined by the presence of pollutants. However, our knowledge about the effects of plant root exudates (including PSMs, such as SA) on the communities of bacteria in contaminated environments remains limited. Some studies have shown that certain phenolic compounds, such as benzoic acid, can enhance the biodegradative activity of bacteria in soil (Mandal et al., 2010;Zwetsloot et al., 2020). However, phenolic compounds can exert contrasting effects on microbial communities depending on the prevailing conditions (Zwetsloot et al., 2020). While most studies have examined the biodegradation of phenoxy herbicides, especially 2,4-D (2,4-dichlorophenoxy acid) in soil, the most widely-used pesticide in European agriculture is MCPA. MCPA is usually sprayed as a commercial formulation in the form of amines, sodium salts, or esters to control perennial and broadleaf annual weeds (Paszko et al., 2016). Higher phenoxy herbicide (i.e., MCPA) concentrations are mostly detected in regions with ongoing intensive agriculture (Bianchi et al., 2017), although residuals of MCPA can be transported with intensive surface run-off and become a threat to non-target organisms. Monitoring studies showed that MCPA was present in 33-60% of the studied sites along the river tributaries in Poland (Zagibajło et al., 2017;Jarosiewicz et al., 2018). Thus, the development of methods for the prevention of MCPA residuals leaching is of the highest importance. Still, little attention has been paid to the potential of selected plants as phytoremediators of soils contaminated with phenoxy herbicides. 
The members of the Cucurbitaceae family are particularly promising candidates due to their ability to extract, translocate, and accumulate highly toxic persistent organic pollutants from soils (Hülster et al., 1994;White et al., 2003;Inui et al., 2008;Zhang et al., 2009;Wyrwicka et al., 2014;Urbaniak et al., 2016) and several cucurbit species, such as zucchini (C. pepo cv "Atena Polka"), have demonstrated resistance to MCPA (Mierzejewska et al., 2022). Therefore, this study investigates the effects of a specific PSM (SA) and zucchini cultivation on the removal of MCPA from the soil. In unplanted MCPA-contaminated soil, amendment with SA led to a higher decrease of MCPA (30%) than in untreated soil (18%) (Figure 1). However, higher MCPA removal (50%) was observed in the planted condition amended with SA. Another study showed that the removal of 2,4-D from water, using the plant species Plectranthus neochilus, was 49% after 30 days (Ramborger et al., 2017). Furthermore, Germaine et al. (2006) demonstrated that endophyte-enhanced phytoremediation significantly improves the capacity for removal of 2,4-D. Our study showed that SA improves the removal of MCPA from unplanted soil. However, when SA application and zucchini cultivation are combined, the decrease of the herbicide concentration in the soil is significantly greater. Although SA enhanced the removal of MCPA from soil (Figure 1), MCPA had a detrimental effect on fresh biomass ( Table 2). Indeed, MCPA application was associated with 90% lower aboveground biomass in the planted variants. Phenoxy herbicides are transported to meristems, causing uncontrolled growth and consequently damaging the development of plant tissues (Grossmann, 2003). SA has been found to alleviate the toxic effects of MCPA (Mierzejewska et al., 2022), however, these studies investigated higher initial concentrations of both compounds. Kováčik et al. (2010) showed that certain phenolic acids, e.g., salicylic acid, can alleviate stress symptoms in plants. On the other hand, Zhou et al. (2014) suggested that SA can exhibit phytotoxic effects. Blum (1996) reported the concentration, composition, and synergism of individual phenolic compounds to be crucial for plant growth inhibition in the environment. Hence, we claim that despite improving MCPA-removal efficiency, the application of SA throughout the incubation time did not enhance the plant growth-promoting properties of the system. The above-mentioned observations are as per earlier studies indicating that SA significantly enhanced the removal of MCPA from liquid cultures enriched with microorganisms derived from agricultural soil (Mierzejewska et al., 2019;Urbaniak et al., 2019a). Hence, this study also examines the effects of SA and MCPA, used individually or in combination, on the composition of bacterial communities within the soil, rhizosphere, and the zucchini plant itself. The effects on the diversity of bacteria were found to be compartment-specific, with an interaction between SA and MCPA ( Table 5). The highest diversity indices values were found for unplanted bulk and rhizospheric soil, and the lowest in leaves. Approximately, 10% lower values of diversity indices were observed after SA application (Table 3). Indeed, it has previously been reported that SA has antimicrobial activity against some bacterial strains (Shi et al., 2016). Our findings also indicate that the amendment of soil with MCPA did not affect diversity indices significantly (Table 3). Also, Ławniczak et al. 
(2016) found that the application of oligomeric herbicidal ionic liquids with MCPA did not cause any changes in the diversity indices. In variants amended with SA and MCPA diversity indices (i.e., Obs.ASV) were higher than in untreated variants. Also, Lipthay et al. (2004) reported higher species richness and Shannon index in herbicideacclimated sediments. This effect can be explained by the direct and indirect effects of the applied compounds (MCPA and SA). Formed metabolites and root exudates produced in response to stress conditions can also influence the structural diversity (Uhlik et al., 2013;Musilova et al., 2016). It is important to emphasize that interactions between the studied compartments and the application of both compounds, MCPA and SA, determined the diversity of bacteria. After the addition of phenolic compounds (SA and MCPA), the bacterial communities in the unplanted soil, rhizospheric soil, and root endosphere were found to be dominated by Proteobacteria and differed only slightly from those of the untreated variant (Figures 2A, 3A, 4A). Also, the addition of DDE (0.0001 mg/L) to vermiculite led to the dominance of Proteobacteria in the roots endosphere of zucchini (Eevers et al., 2016). Similarly, Tejada et al. (2010) report that the use of MCPA (1.5 L/ha) did not appear to affect microbial structural diversity in soil. In contrast, it has been found that microbial communities isolated from the rhizosphere of soil contaminated with mecocrop (MCPP; 0.53 or 1.06 g/L) differed substantially from those isolated from unplanted soil (Lappin et al., 1985). The above-mentioned results show that in unplanted soil, rhizospheric soil, and roots endosphere, the amendment of soil with SA and MCPA not significantly affect the predominant taxa. However, the application of MCPA and SA, either separately or together, considerably affected the composition of the endophytic bacterial community in leaves, although they demonstrated the lowest diversity indices (Figures 2-4). MCPA is absorbed through both leaves and roots and is translocated throughout the plant (via xylem and phloem) to the meristematic regions (Polit et al., 2014). The induced changes result in a series of biochemical and physiological processes which influence and disturb the morphology of plant roots, stems, and leaves (Grossmann, 2009). The observed changes in the composition of the plant endosphere microbiome can be due to the application of herbicides. Endophytes in plants can directly contribute to the detoxification of pollutants in plants or promote plant growth in stress conditions (Tétard-Jones and Edwards, 2016). However, the latter was not observed, since fresh weight was not higher after the simultaneous amendment of SA and MCPA. To the best of our knowledge, this is the first study showing that after the application of phenolic compounds (i.e., SA and MCPA) to soil, the most profound changes in structural diversity across the plant microbiome are observed in the leaves. Further investigation of structural diversity showed that amendment of the soil with SA or MCPA led to the prevalence of the Actinobacteria (genus Pseudarthrobacter) in the leaves endosphere (Figures 2D, 3D). Similarly, Zhou et al. (2018) found that amendment of soil with the phenolic compound p-coumaric acid enhanced the relative abundances of Actinobacteria (i.e., Pseudarthrobacter) in the rhizosphere. 
Actinobacteria are described as common endophytic microorganisms, which can have multiple beneficial functions in plants, such as the production of antimicrobial agents (Musa et al., 2020) and biodegradation of petroleum and plastic compounds (Singh and Dubey, 2018). Combined MCPA and SA treatment, however, resulted in Firmicutes (genus Psychrobacillus) becoming the predominant taxon in leaves endosphere ( Figure 4D). Regar et al. (2019) suggest that an increase in the abundance of Firmicutes can be a response to environmental stress such as herbicide contamination. Rangjaroen et al. (2019) mentioned that endophytic strains of Bacillus isolated from an herbicidetreated environment can enhance the resistance of plants to fungal pests. Our observations confirmed that the amendment of the soil with SA and MCPA favored the presence of Actinobacteria (genus Pseudarthrobacter) and Firmicutes (genus Psychrobacillus) in leaves. The shifts in bacterial communities that occur after SA treatment could influence the degradative capacity against MCPA. Various genes belonging to the tfd cluster (responsible for initial steps of phenoxy herbicides biodegradation) have been identified in several taxa, e.g., Burkolderia, Pseudomonas, Achromobacter, Delftia, Bradyrhizobium, and Cupriavidus (Suwa et al., 1996;Kamagata et al., 1997;Baelum et al., 2010). Previous studies also demonstrated that SA doubled the presence of tfdA genes (Mierzejewska et al., 2019;Urbaniak et al., 2019a). In our study, the enrichment of soil with SA induced the occurrence of certain Proteobacteria genera such as Pseudomonas in leaves endosphere (Table 4), while the addition of MCPA + SA positively influenced the presence of Pseudomonas and Burkholderia in the rhizosphere ( Table 4). Germaine et al. (2006) demonstrated that the endophytic strain Pseudomonas putida VM1450 enhanced the removal of 2,4-D from soil and demonstrated high biodegradative potential. In addition, the presence of Burkholderia in soils has been confirmed in our previous studies (Mierzejewska et al., 2019;Urbaniak et al., 2019a). Our findings indicate that combined MCPA + SA treatment induced the presence of Sphingomonas and Pandoraea in the roots endosphere (Table 4). Several Sphingomonas strains have been reported as being MCPAdegraders Liu et al., 2013;Nielsen et al., 2013), and Pandoraea was found to substantially contribute to the degradation of lindane (HCH) in water and soil slurries (Okeke et al., 2002); however, to our knowledge, no studies have examined whether this genus was able to degrade phenoxy herbicides or related phenolic compounds. The above findings suggest that the amendment of the soil promoted the growth of specialized taxa; however, they were not the predominant taxa in the investigated compartments. Still, the functions of the identified strains need to be further confirmed by culturing techniques and molecular as well as metabolic analysis. CONCLUSION The above findings show that the addition of SA can enhance the removal of MCPA from unplanted soil. This removal is further enhanced by combining SA amendment with the cultivation of zucchini; although SA itself did not have a positive effect on the zucchini growth. The application of both phenolic compounds (i.e., MCPA and SA) led to changes in bacterial diversity: the highest bacterial diversity indices were observed for unplanted soil and rhizospheric soil, and the lowest in leaves. 
Despite the lowest diversity, the leaf endophytes were strongly affected by the addition of the studied phenolic compounds favoring the growth of the phyla Actinobacteria (especially Pseudarthrobacter spp.) and Firmicutes (especially Psychrobacillus). SA also promoted the growth of specific genera in the rhizosphere (Burkholderia, Pseudomonas, Sphingomonas) and in leaves endosphere (Pseudomonas), which were previously reported to harbor functional genes for MCPA biodegradation. Hence, it appears that SA not only influenced the structure of the bacterial communities in either MCPA-contaminated soil and zucchini, but also enhanced the bacterial degradation of the herbicide in the soil-plant system. This is the first study to collectively examine different aspects of MCPA removal from soil: biostimulation (use of the PSM as an enhancer of MCPA removal) and phytoremediation (use of zucchini as a potential tool for removal of the herbicide) and to study the composition of soil and endophytic bacterial communities as an effect of addition of structurally related phenolic compounds (SA and MCPA). However, to disentangle the mechanisms behind the shifts in bacterial communities and their functions, additional research is needed. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The latest version of link to repository of our sequencing data is https://www.ncbi.nlm.nih.gov/bioproject/ PRJNA809520. AUTHOR CONTRIBUTIONS EM, ST, MU, and JV designed the experimental setup and analysis and wrote the manuscript. EM and MU performed the experiments and prepared the samples. EM and ST prepared the samples for DNA extraction and sequencing and analyzed the results using bioinformatics tools. EM determined the fresh biomass of plants and analyzed the fresh biomass and MCPA removal data. KZ prepared the samples and measured the MCPA concentration in the soil. All authors agreed on the final version of the manuscript. FUNDING The Council of the National Science Centre in Poland ETIUDA 7-funded the project "Cucurbits and their plant secondary metabolites as stimulators of biological soil remediation contaminated with phenoxy herbicides" [No. 2019/32/T/NZ9/00403]. This work was also supported by the UHasselt Methusalem project 08M03VGRJ.
2022-05-31T13:26:35.504Z
2022-05-31T00:00:00.000
{ "year": 2022, "sha1": "93a9b7e14fc617fe78473dea719dad6ccf57d987", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2022.882228/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "93a9b7e14fc617fe78473dea719dad6ccf57d987", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
14528390
pes2o/s2orc
v3-fos-license
Distinguishing the Littlest Higgs model with T-parity from supersymmetry at the LHC using trileptons We analyse hadronically quiet trilepton signatures in the T-parity conserving Littlest Higgs model and in R-parity conserving supersymmetry at the Large Hadron Collider. We identify the regions of the parameter space where such signals can reveal the presence of these new physics models above the Standard Model background and distinguish them from each other, even in a situation when the mass spectrum of the Littlest Higgs model resembles the supersymmetric pattern. According to a recent study [15], once one filters away the kinematical regions where gluino cascades contribute, the ratio between like-and unlike-sign dileptons is different in the two cases. While such a claim is reassuring, it is better to have some additional discriminating signals, preferably involving the electroweak sector alone. The purpose of this note is to suggest the 'hadronically quiet' trilepton channel in this context. It should also be emphasised that we envision a situation where new signals are seen, and some idea about the masses of the new particles is available from the hardness of leptons and/or jets or the missing-p T distribution. In the LHT a global symmetry SU (5) is spontaneously broken down to SO (5) at a scale f ∼ 1 TeV. An [SU(2)×U(1)] 2 gauge symmetry is imposed, which is simultaneously broken at f to the diagonal subgroup SU(2) L × U(1) Y , which is identified with the SM gauge group. This leads to four heavy gauge bosons W ± H , Z H and A H with masses ∼ f in addition to the SM gauge fields. The SM Higgs doublet is part of an assortment of pseudo-Goldstone bosons which result from the spontaneous breaking of the global symmetry. This symmetry protects the Higgs mass from getting quadratic divergences at one loop, even in the presence of gauge and Yukawa interactions. Electroweak symmetry is broken via the Coleman-Weinberg mechanism and the Higgs mass is generated radiatively, leading naturally to a light Higgs boson. The multiplet of Goldstone bosons contains a heavy SU(2) triplet scalar Φ as well. In contrast to SUSY, the new states which cancel the quadratically divergent contributions to the Higgs mass due to the top quark, gauge boson and Higgs boson loops, respectively, are heavy fermions, additional gauge bosons and triplet Higgs states. In order to comply with strong constraints from electroweak precision data on the Littlest Higgs model [3], one imposes T-parity [4] which maps the two pairs of gauge groups SU(2) i × U(1) i , i = 1, 2 into each other, forcing the corresponding gauge couplings to be equal, with g 1 = g 2 and g ′ 1 = g ′ 2 . All SM particles, including the Higgs doublet, are even under T-parity, whereas the four additional heavy gauge bosons and the Higgs triplet are T-odd. The top quark has two heavy fermionic partners, T + (T-even) and T − (T-odd). For consistency of the model, one has to introduce the additional heavy, T-odd vector-like fermions u i H , d i H , e i H and ν i H (i = 1, 2, 3) for each SM quark and lepton field. For further details on the LHT, we refer the reader to Refs. [5,8,9,18]. As shown in Refs. [18][19][20], a scale f (which dictates the masses of most new particles) as low as 500 GeV is compatible in the LHT with electroweak precision data. Further constraints on the parameters of the LHT come from flavour physics [21]. 
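As a point of orientation for the scalar content described above (this is the standard textbook counting, not a result derived in this paper), the global symmetry breaking yields

```latex
\dim SU(5) - \dim SO(5) \;=\; 24 - 10 \;=\; 14 \ \text{Goldstone bosons},
\qquad
\mathbf{14} \;\longrightarrow\; \mathbf{1}_0 \,\oplus\, \mathbf{3}_0 \,\oplus\, \mathbf{2}_{\pm 1/2} \,\oplus\, \mathbf{3}_{\pm 1}
\quad \text{under } SU(2)_L \times U(1)_Y ,
```

where the real singlet and real triplet are eaten when [SU(2)×U(1)]² breaks to the SM gauge group, and the remaining complex doublet and complex triplet are identified with the Higgs doublet and the heavy scalar Φ.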
The masses of the heavy gauge bosons in the LHT are given by where corrections of O(v 2 /f 2 ) are neglected in the approximate numerical values. Thus these particles have masses of several hundreds of GeV for f ∼ 1 TeV, although A H , the heavy partner of the photon, can be quite light, because of the small prefactor, and is usually assumed to be the LTP. The masses of the heavy, T-odd fermions are determined by general 3 × 3 mass matrices in the (mirror) flavour space, m ij q H ,l H ∼ κ ij q,l f with i, j = 1, 2, 3. We simplify our analysis by assuming that κ ij q = κ q δ ij . The parameter κ q ∼ O(1) thus determines the masses of the heavy quarks in the following way: Similarly, masses of the heavy leptons in the spectrum are determined by a common parameter κ l . We further assume that the values for κ q,l are not close to the upper bound κ ≤ 4.8 (for f = 1 TeV) obtained from 4-fermion operators [18] and that limits of m > O(100 GeV) from direct searches at the Large Electron Positron (LEP) collider apply to the mirror fermions in the LHT. Thus our analysis takes κ q,l in the range 0.2 < ∼ κ q,l < ∼ 2, thereby allowing all new heavy fermions to have masses ranging from several hundreds of GeV to a TeV, for f ∼ 1 TeV. For our analysis we have used κ l = 0.4, with κ q = 1 and 1.5. 1 This yields masses of the heavy leptons and quarks which are spaced relative to each other in a way often encountered in SUSY for sleptons and squarks, so that the situation where one spectrum fakes the other at colliders is best addressed. A value of κ l < ∼ 0.2 leads to a heavy neutrino LTP, whose phenomenology is somewhat different from that of SUSY with a neutralino LSP. Thus f , together with κ q,l , determines the part of the LHT spectrum relevant for us. The mass of the triplet scalar Φ is related to the doublet Higgs mass by We will take m H = 120 GeV throughout this paper. Two more dimensionless parameters λ 1 and λ 2 appear in the top quark sector; the top mass being given by m t = (λ 1 / √ 1 + R 2 )v and R = λ 1 /λ 2 . The masses of the two heavy partners of the top quark, T + and T − , can be expressed as We use m t = 171.4 GeV in our analysis and set R = 1, although this does not have any significant bearing on our analysis. Hadronically quiet final states comprising of trileptons can be produced in an LHT scenario via qq ′ → W ± H Z H (see Figure 1(a)). The most obvious way to trileptons from this is However, the SM backgrounds, of which WZ-production is the dominant source, need to be eliminated, the easiest way being to disallow events with the invariant mass of any two opposite-sign leptons in the neighbourhood of the Z-mass. While this takes away many signal events, one can still have trileptons if the T-odd heavy leptons are lighter than the Z H . In that case, the decay channel Z H → l ± H l ∓ followed by l ± H → A H l ± opens up, and the trilepton events can be quite copious in such a region of the parameter space. As an example, one can see that for κ l = 0.4 and f = 500 GeV one has m Z H = 317 GeV, m l ± H = 283 GeV and m A H = 65 GeV. We shall comment later on the case where a real l H cannot be produced in Z H -decays. Figure 1: Representative leading order Feynman graphs contributing to the pair production of W ± H Z H in the LHT (a) and to χ ± 1 χ 0 2 in SUSY (b) at the LHC. 
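The display equations referred to in the preceding paragraphs (the heavy gauge boson masses, the mirror-fermion masses and the T_± masses) appear to have been lost in text extraction. For orientation, the standard leading-order LHT expressions from the literature are reproduced below; this is a reconstruction, not a verbatim copy of the paper's own equations, with v = 246 GeV and the O(v^2/f^2) corrections indicated explicitly:

% Reconstruction of the standard leading-order LHT mass relations
% (assumed conventions of the common LHT literature; not copied from this paper)
\begin{align*}
  m_{W_H} = m_{Z_H} &\simeq g f\left(1 - \frac{v^2}{8f^2}\right) \approx 0.65\, f, \qquad
  m_{A_H} \simeq \frac{g'}{\sqrt{5}}\, f\left(1 - \frac{5v^2}{8f^2}\right) \approx 0.16\, f, \\
  m_{q_H} &\simeq \sqrt{2}\,\kappa_q f, \qquad
  m_{\ell_H} \simeq \sqrt{2}\,\kappa_l f, \\
  m_{T_+} &\simeq \frac{m_t}{v}\,\frac{1+R^2}{R}\, f, \qquad
  m_{T_-} \simeq \frac{m_t}{v}\,\frac{\sqrt{1+R^2}}{R}\, f .
\end{align*}

As a consistency check, for f = 500 GeV and κ_l = 0.4 these expressions reproduce, to within a few GeV, the benchmark values quoted above (m_{Z_H} = 317 GeV, m_{ℓ_H} = 283 GeV, m_{A_H} = 65 GeV), which supports the reconstruction.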
Similar signals in SUSY are well-studied by now at hadron colliders [22,23]; the main production channel in a minimal model being qq′ → χ_1^± χ_2^0 (see Figure 1(b)). The signals are not affected by the invariant mass cut so significantly as in the case of LHT. This feature (namely, the susceptibility to the invariant mass cut) itself enables discrimination between the two scenarios. However, as we shall see below, a clear quantitative distinction can be made from the predicted strength of the signal as a whole. The other major source of trileptons can be the production of heavy quarks followed by their cascade decays into leptons via W_H^± or Z_H decays. The SUSY counterpart of such processes will be the production of squarks followed by their cascade decays into leptons via chargino and neutralino decays. Such processes will be accompanied by two jets, but if these jets are extremely soft, they can escape detection and therefore such events can be misidentified as hadronically quiet. As we shall see later, our jet recognition criteria disallow such final states, and thus one can fully concentrate on final states which genuinely originate in the electroweak sector. To see whether the LHT signal may be mimicked by the corresponding supersymmetric signal, one has to go to situations where their particle spectra have a close correspondence. It is assumed that the masses of the particles produced in the hard scattering, and that of the invisible particle (LTP/LSP) in the final state, can be extracted from various kinematic distributions. These masses, it has been claimed, can be estimated at the LHC up to an uncertainty of about 20-30 GeV for SUSY particles [24]. These references also indicate that the uncertainty can be much smaller in situations where the masses are correlated, as in a supergravity scenario. It is not unreasonable to expect a similar level of precision in the case of the LHT, where the masses of W_H, Z_H and A_H are all related to the parameter f. At each chosen value of (f, κ_q, κ_l) in our analysis, we equate the masses of the squarks and sleptons to those of the heavy quarks and leptons determined by the relations given in Eq. (2). Next, we try to align the heavy gauge boson masses in Eq. (1) to those of the low-lying neutralinos and charginos (χ_1^0, χ_2^0, χ_1^±) which play a crucial role in the production of trileptons. This is best done in a minimal SUSY scenario where the gaugino masses (M_1, M_2) are not constrained by the requirement of unification at a high scale. M_1, the Bino mass, is set equal to m_{A_H}. Next, we set a correspondence between (m_{χ_1^±}, m_{χ_2^0}) and (m_{W_H}, m_{Z_H}) for both the cases where the former pair is dominated by the Wino and by the Higgsino. This is done by adopting two scenarios, namely, (a) M_1 = m_{A_H}, M_2 = m_{Z_H} and µ = 1.5 TeV (henceforth to be called the SS1 scenario), and (b) M_1 = m_{A_H}, µ = m_{Z_H} and M_2 = 1.5 TeV (henceforth to be called the SS2 scenario). The physical chargino and neutralino states are subsequently obtained by diagonalisation of the respective mass matrices, and, as seen in Table 1, they indeed demonstrate a very close resemblance of the spectra between LHT and SUSY (see footnote 2). We assume M_3 to be 5 TeV, thus decoupling the gluinos as desired. For the sake of simplicity, we set all the trilinear couplings (A) to zero, except A_t, which we tune to get the lighter CP-even Higgs mass m_H = 120 GeV as in the case of the LHT. Our analysis is not affected by this tuning.
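To make the matching procedure concrete, the sketch below computes the relevant LHT masses from (f, κ_q, κ_l) and assigns the corresponding MSSM inputs for the SS1 and SS2 benchmarks. It is not the authors' code: the function names are hypothetical and the mass formulas are the standard leading-order LHT expressions quoted earlier.

# Illustrative sketch (not the authors' code) of the spectrum-matching logic
# described above; mass formulas are the standard leading-order LHT expressions.
import math

V = 246.0      # electroweak vev in GeV
G = 0.653      # SU(2) gauge coupling (approximate)
GP = 0.35      # U(1)_Y gauge coupling (approximate)

def lht_spectrum(f, kappa_q, kappa_l):
    """Approximate LHT masses (GeV) for scale f and mirror-fermion couplings."""
    r = V**2 / (8.0 * f**2)
    return {
        "m_WH": G * f * (1.0 - r),
        "m_ZH": G * f * (1.0 - r),
        "m_AH": GP / math.sqrt(5.0) * f * (1.0 - 5.0 * r),
        "m_qH": math.sqrt(2.0) * kappa_q * f,
        "m_lH": math.sqrt(2.0) * kappa_l * f,
    }

def matched_mssm_inputs(lht, scenario="SS1"):
    """Map LHT masses onto the MSSM inputs of the SS1/SS2 benchmarks."""
    if scenario == "SS1":      # wino-like chargino/neutralino pair
        return {"M1": lht["m_AH"], "M2": lht["m_ZH"], "mu": 1500.0,
                "M3": 5000.0, "m_squark": lht["m_qH"], "m_slepton": lht["m_lH"]}
    else:                      # SS2: higgsino-like pair
        return {"M1": lht["m_AH"], "M2": 1500.0, "mu": lht["m_ZH"],
                "M3": 5000.0, "m_squark": lht["m_qH"], "m_slepton": lht["m_lH"]}

if __name__ == "__main__":
    lht = lht_spectrum(f=500.0, kappa_q=1.0, kappa_l=0.4)
    print(lht)                               # roughly m_ZH ~ 317, m_lH ~ 283, m_AH ~ 66 GeV
    print(matched_mssm_inputs(lht, "SS1"))
    print(matched_mssm_inputs(lht, "SS2"))

At f = 500 GeV, κ_q = 1 and κ_l = 0.4 this reproduces the hierarchy m_{q_H} > m_{Z_H} ≈ m_{W_H} > m_{ℓ_H} > m_{A_H} discussed above.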
Note that it is not possible to match all particles in the minimal supersymmetric standard model (MSSM) into corresponding states in the LHT and vice versa; for instance, the heavy quarks T_± do not have a counterpart in the MSSM. On the other hand, there are no states in the LHT that would correspond to the heavier chargino and neutralinos. Furthermore, the rest of the Higgs sector (the charged scalar, the heavier neutral scalar and the pseudoscalar in SUSY, and the triplet states in the LHT) does not correspond similarly. This, too, does not affect the signals under consideration here. Finally, we use tan β = 10 and m_A = 850 GeV throughout for the calculations in the MSSM. We use the CalcHEP 2.5.i [25] model file for the LHT written by the authors of Ref. [11] to calculate cross-sections and branching fractions. For the subsequent simulations, the cross-sections generated with CalcHEP are then interfaced into PYTHIA 6.410 [26]. The cross-sections and branching fractions for the MSSM are calculated directly in PYTHIA. The parton densities for the calculation of cross-sections at the LHC are evaluated at leading order using CTEQ6L [27], with renormalisation and factorisation scales fixed by µ_R = µ_F = √ŝ. In Figure 2 we plot the pair production cross-sections at the LHC for W_H^± Z_H with κ_q = 1 and κ_q = 1.5, as functions of the LHT scale f. As we vary f and scan the LHT spectrum, the cross-sections for χ_1^± χ_2^0 production have also been calculated (both for SS1 and SS2), the SUSY spectrum being matched at each point as described above. The difference between the LHT and the SS1 cross-section can be partially attributed to the vector vis-a-vis fermionic final states in the respective signals. A further suppression of the SS2 cross-section in comparison to that of SS1 is also noticeable. This is because for SS2 the couplings involved in the t- and u-channels are predominantly Yukawa in nature and thus suppressed for light quarks from the proton beams, while for SS1 the gauge coupling g_2 plays the vital role (see Figure 1(b)). The cross-section enhancement with increased κ_q, i.e. increased masses for the mirror quarks, shows that we are in a region of the parameter space where the not so unusual destructive interference between the s- and t-channel processes in Figure 1(a) becomes less effective with increase in κ_q [10,11]. A similar interference effect occurs for the SS1 case in Figure 1(b). Therefore, the relative difference between SS1 and SS2 increases when going from lighter to heavier squarks, i.e. when increasing κ_q. In the case of SS2, with Higgsino-dominated charginos and neutralinos, it is mainly the s-channel diagram in Figure 1(b) which contributes, so that there is very little difference between the cross-sections for the two values of κ_q. While the production cross-sections are controlled by the parameter κ_q for a fixed f, the decay rates are primarily governed by κ_l, see also Ref. [13]. In Figure 3(a-c) we plot the branching fractions of the particles produced in the initial hard scattering for LHT, SS1 and SS2, respectively, as functions of κ_l, with the SUSY spectrum appropriately matched.

Footnote 2: The states χ_1^± and χ_2^0 should be close to each other in mass (like W_H and Z_H), but considerably heavier than χ_1^0 (like A_H). Matching the LHT spectrum, controlled by the parameter f, in this manner is not possible with µ ≲ M_1. Consequently, χ_1^0 remains Bino-dominated in our analysis.
For each value of f the mass spectra in the two SUSY cases have been matched to the LHT spectrum as described in the text. LHT In Figure 3(a) (3(b)), we see that the leptonic branching fractions are larger for W ± H ( χ ± 1 ) or Z H ( χ 0 2 ) up to κ l = 0.44. This is because the masses of the heavy leptons (sleptons) are smaller than the masses of the heavy gauge bosons (chargino and neutralino). Above κ = 0.44, the decays are purely into the LTP (LSP) and a gauge boson or a Higgs. In case of SS1, as the produced particles are gaugino dominated, their decays are governed by gauge couplings, whereas for SS2 the produced particles being Higgsino dominated, it is the Yukawa coupling which enters in the decay. This explains why the leptonic branching fractions for SS1 (Figure 3(b)) are higher compared to SS2 (Figure 3(c)). The event analysis is performed with PYTHIA at the parton level, turning off initialand final-state radiation. To select our final trilepton states, we apply the following cuts on our sample events: • In order that the events are hadronically quiet, we reject jets having p T j > 30 GeV and |η j | < 2.7. This reduces the tt background considerably [23]. • Each lepton should have p T l > 25 GeV and |η l | ≤ 2.5, to ensure that they lie within the coverage of the detector. • ∆R ll ≥ 0.2, (where (∆R) 2 = (∆η) 2 + (∆φ) 2 ) such that the leptons are well resolved in space. Table 2: Efficiency of the cuts on trilepton events at the LHC from W ± H Z H (LHT), χ ± 1 χ 0 2 (SUSY) and from the SM background. The integrated luminosity is assumed to be 300 fb −1 . The missing energy cut is shown in two stages to convey the usefulness of the finally chosen value E T / > 100 GeV. The values of the LHT parameters are f = 500 GeV, κ q = 1 and κ l = 0.4. The SS1 and SS2 parameters corresponding to this LHT point are given in Table 1. • A missing transverse energy cut, E T / ≥ 100 GeV has been employed to suppress the SM background. • We analyse only those events where m l + l − > 20 GeV which ensures the absence of leptons emitted from off-shell photons. An additional cut in the form of m l + l − < m Z − 15 GeV or m l + l − > m Z + 15 GeV is used, in order to eliminate the SM backgrounds from on-shell Z-bosons. Furthermore, we demand m T (lE T / ) < m W − 15 GeV or m T (lE T / ) > m W + 15 GeV to reduce the backgrounds arising from Wbosons [28]. The efficiency of the cuts is shown in Table 2. In Figure 4 we present the variation of the number of trilepton events against the scale f for LHT, SS1 and SS2, after imposing the above event-selection criteria. An integrated luminosity of 300 fb −1 at the LHC has been used for obtaining the number of events. This is done for κ l = 0.4, with κ q = 1.0 and κ q = 1.5, respectively. We find that the LHT trilepton event rates remain higher after the cuts in comparison to SS1 and SS2. This is primarily because of the larger cross-sections for the LHT. The SS2 rates are further suppressed in comparison to SS1 because of the small branching fractions for the leptonic decays of χ ± 1 and χ 0 2 . As mentioned earlier, the production of heavy quarks (squarks), followed by their cascade decays, might also lead to hadronically quiet trilepton events, if the accompanying jets are very soft. We simulated such events and found them to be negligible, since, with such masses as chosen here, the jets in the final state almost always emerge with p T > 30 GeV. 
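For concreteness, the event-selection criteria listed above can be written schematically as follows. This is an illustrative sketch, not the authors' PYTHIA-based analysis: leptons and jets are assumed to be simple records (dictionaries with pt in GeV, eta, phi and charge), and the exactly-three-lepton requirement and the standard transverse-mass definition are assumptions on our part.

# Schematic implementation of the hadronically quiet trilepton selection
# described above (illustrative only; not the authors' analysis code).
import math

def delta_r(l1, l2):
    deta = l1["eta"] - l2["eta"]
    dphi = abs(l1["phi"] - l2["phi"])
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(deta, dphi)

def m_inv(l1, l2):
    # massless-lepton approximation: m^2 = 2 pT1 pT2 (cosh d_eta - cos d_phi)
    return math.sqrt(max(0.0, 2.0 * l1["pt"] * l2["pt"] *
                         (math.cosh(l1["eta"] - l2["eta"]) -
                          math.cos(l1["phi"] - l2["phi"]))))

def m_t(lep, met, met_phi):
    return math.sqrt(max(0.0, 2.0 * lep["pt"] * met *
                         (1.0 - math.cos(lep["phi"] - met_phi))))

def passes_selection(leptons, jets, met, met_phi, m_z=91.2, m_w=80.4):
    # veto events containing a hard central jet (hadronically quiet requirement)
    if any(j["pt"] > 30.0 and abs(j["eta"]) < 2.7 for j in jets):
        return False
    leps = [l for l in leptons if l["pt"] > 25.0 and abs(l["eta"]) <= 2.5]
    if len(leps) != 3:          # trilepton requirement (assumed to be exactly three)
        return False
    if any(delta_r(a, b) < 0.2 for i, a in enumerate(leps) for b in leps[i + 1:]):
        return False            # leptons must be spatially resolved
    if met < 100.0:             # missing transverse energy cut
        return False
    for i, a in enumerate(leps):
        for b in leps[i + 1:]:
            if a["charge"] * b["charge"] < 0:
                m = m_inv(a, b)
                if m < 20.0 or abs(m - m_z) < 15.0:
                    return False    # photon* and on-shell Z vetoes
    if any(abs(m_t(l, met, met_phi) - m_w) < 15.0 for l in leps):
        return False            # W transverse-mass veto
    return True

The detailed reconstruction of jets and leptons, and the treatment of charge and flavour combinations, follow the authors' PYTHIA analysis and are only approximated here.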
Also, in the SUSY cases, the cascade decays of the heavier charginos and neutralinos do not yield a significant number of trileptons after the cuts are imposed. The number of SM background events surviving the cuts is shown as a flat dotted line in Figure 4. A closer look at the figure shows that the LHT trilepton events can be clearly distinguished, at least at the 6σ level, from either of SS1 or SS2, even in the presence of the SM background, up to f ≃ 1.5 TeV for κ q = 1.0, and f ≃ 1.7 TeV for κ q = 1.5. Figure 4 also shows that a comparable number of events for LHT and SS1 may only result from widely different values of f (differing by almost 300 GeV or so). Thus, conservatively, the event rates are likely to be still distinguishable if the uncertainty in f is around 200 GeV or less. Actually even with an integrated luminosity of 30 fb −1 a differentiation between LHT and SUSY above the SM background could well be possible, although of course with less significance. It should be noted, however, that we did not take into account systematic errors such as uncertainties from higher order QCD corrections, parton distributions, initial-and final-state radiation and detector effects. Only a more detailed and realistic analysis could show whether a distinction between the different models can be made with lower statistics. Moreover, distinguishing between LHT and MSSM will only be feasible, if we already have some knowledge about the mass spectrum in the underlying model. Maybe not enough information on all the masses will be available after the early phase of LHC with 30 fb −1 of data. Once we have precise enough information on the relevant masses, LHT and SS1 would be clearly distinguishable by the trilepton yield which, as is clear from Figure 4, would be an order of magnitude higher for LHT than that for SS1 for similar masses. The SS2 events, on the other hand, are going to be well below the backgrounds, for the mass range under investigation here. A similar study for a higher κ l value, namely, κ l = 1 does not yield fruitful results, as in this case the heavy leptons are more massive than the heavy gauge bosons. The only possible decay modes are W ± H → W ± A H and Z H → hA H . Thus the Higgs boson controls the number of trilepton events in the higher κ l regions. In the LHT, below about f = 470 GeV, a Higgs boson with a mass of 120 GeV decays invisibly into two heavy photons about 90% of the times [20], while beyond f = 470 GeV, it decays into bb with a branching fraction of about 70%. Therefore, there are only a few trilepton events generated via h → ττ → lνν τl ν lντ . In fact, from Figure 3, it is clear that the above conclusions will remain unaffected for any κ l beyond κ l > 0.44, because the mirror leptons remain heavier than the heavy gauge bosons in this κ l region. The trilepton signals from neither LHT nor SUSY (with its slepton masses correspondingly higher) can rise above the SM backgrounds in this region. Earlier studies have predicted appreciable rates for trilepton signals in SUSY at the Tevatron [22] and the LHC [23]. However, they analysed regions with lower masses than what has been considered here. Since the values of f in the LHT corresponding to such masses are ruled out by precision electroweak observables, such regions are not pertinent to the distinction between the LHT and SUSY scenario. Moreover, the choice of cuts in these studies is different from ours. 
Before we end, we want to reiterate that, apart from ensuring m_{ν_H} > m_{A_H}, we have not made any 'tailored' parameter choice. The masses of the SUSY particles are kept on a par with those of the LHT spectrum in each case, in order to have similar event kinematics. Also, both the cases of gaugino and Higgsino domination in the lighter chargino and the second lightest neutralino are included in our study, making the comparisons practically exhaustive. In conclusion, we have analysed hadronically quiet trilepton events arising in both the T-parity conserving Littlest Higgs and R-parity conserving SUSY models at the LHC. We found a clear excess of trilepton events in the LHT over the corresponding number of events in the two SUSY scenarios with a mass spectrum that matches the one in the LHT. While for κ_l ≤ 0.44 it is possible to rise above the Standard Model backgrounds, such backgrounds become a problem for larger values of κ_l. Therefore, while the hadronically quiet trilepton signal suggests a promising way of distinguishing between SUSY and the LHT, this signal is perhaps best usable if the heavy leptons (sleptons) do not exceed the heavy gauge bosons (χ_2^0 / χ_1^±) in mass.
MINIMALLY DESTRUCTIVE RADIOCARBON DATING OF CAPRINE DUNG ABSTRACT Archaeological dung pellets are time capsules of ancient herbivore diets and gut flora, informing on past agropastoral activity, ecology, and animal health. Improving multi-proxy approaches is key to maximizing this finite archaeological resource. Through experiments with standard pretreatments used in radiocarbon (14C) dating, we address a fundamental problem in maximal multi-proxy analysis: How to chronometrically date individual caprine pellets while conserving as much as possible for additional analyses? We applied acid-alkali-acid (AAA) or acid-only pretreatments to 37 samples of ancient and recent sheep/goat dung pellets from sites in the Negev desert, Israel, measuring weight-loss due to pretreatment. Shavings of outer surfaces and remaining inner pellets of four pairs were dated and compared. We found that (i) sample-specific factors affect pretreatment survivability, including preservation quality and initial sample size; (ii) given sufficient start weight, AAA can be used to pretreat sheep/goat coprolites; (iii) 100 mg appeared a desirable minimum sample weight before pretreatment; and (iv) shavings of coprolites’ outer surface produced 14C dates equivalent to dates obtained from inner coprolites. Whereas standard coprolite analysis protocols discard shavings removed from outer surfaces to avoid contamination, our findings indicate their efficacy for 14C dating. This offers an important addition to workflows for multi-proxy coprolite analysis. INTRODUCTION Coprolites, or ancient feces, are increasingly under investigation by researchers interested in records of past economy, environment, and evolution (Hunt et al. 2012;Qvarnström et al. 2016;Shillito et al. 2020).A variety of techniques are employed in coprolite analysis (e.g., Miller 1984;Poinar et al. 1998;Kühn et al. 2013;Linseele 2013;Camacho et al. 2018;Égüez and Makarewicz 2018;Sistiaga et al. 2014;Perrotti and van Asperen 2019;Zhang et al. 2019;Wood et al. 2020), and many studies apply multiple techniques to different coprolites in an assemblage (Reinhard and Bryant 1992;di Lernia 2001;Delhon et al. 2008;Shahack-Gross 2011;Marinova et al. 2013;Pineda et al. 2017;Baeten et al. 2018;Landau et al. 2020).Yet the full benefits of the multi-proxy approach will be realized when different complementary analyses are applied to each individual coprolite investigated, making the most of this finite archaeological resource (Fuks and Dunseth 2021).Meanwhile, multi-proxy approaches to analyzing individual coprolites are being employed and refined (Dunseth et al. 2019;Jouy-Avantin et al. 2003;Rifkin et al. 2020;Romaniuk et al. 2020;Polling et al. 2021;Velázquez et al. 2021).Human coprolites and those of other large mammals are often big enough to be subdivided such that each coprolite subsample is used for a different analysis or for a repetition of the same analysis, and much discussion concerns optimal subsampling strategies (Beck et al. 2019).Another standard procedure in coprolite studies is removal of the outer surface to reduce contamination (Wood and Wilmshurst 2016).However, these procedures present problems for multi-proxy analysis of individual livestock coprolites, particularly sheep/goat pellets, which are the most common type of dung in "Old-World" archaeology, and which have added research value as indicators of rangeland vegetation, seasonality, and pastoral practices (Akeret et al. 1999;Ghosh et al. 2008;Fuks and Dunseth 2021). 
First, subdividing individual sheep/goat pellets for different analyses may sacrifice representativeness to the point of being counterproductive.Second, removing the outer surface significantly reduces the size of the starting sheep/goat pellet sample (as shown in this study), leaving even less material for subsampling and analysis.One solution is to maximize the number of analyses that can be applied in series to a single pellet.Thus, non-destructive analyses (description, weighing, imaging, NIR spectroscopy), could be followed by semidestructive analyses (dissecting for plant macrofossils, FTIR spectroscopy) and fully destructive analyses in turn (pollen, phytolith, dietary fiber, lipid, protein, and DNA analyses).Yet this still leaves the sizable outer surface as unusable discard.Meanwhile, the richer and more interesting the information gleaned from coprolite analyses, the greater the need to establish its antiquity through direct radiocarbon dating.This creates a third problem in adopting a multi-proxy approach: there is no guarantee that an individual sheep/goat dung pellet can be directly dated and subjected to additional destructive analyses.Thus, a priori subdivision of individual caprine pellets for radiocarbon and other analyses risks sacrificing this scarce resource and producing no results. We addressed these problems by exploring possibilities for minimally destructive radiocarbon dating of sheep/goat dung pellets preserved by desiccation in Israel's Negev desert.Our primary research question was, how can an individual caprine pellet be chronometrically dated while preserving as much of it as possible for additional analyses?To answer this question, we conducted experiments on standard pretreatments used in radiocarbon analysis applied to desiccated dung pellets from three archaeological sites in the region.Our ultimate objective was to achieve minimally destructive reliable radiocarbon dating of dung pellet samples.The following specific research questions guided the experimental design: • Which sample-specific factors are related to pretreatment losses and survivability? • Which pretreatment (acid-alkali-acid or acid-only) best balances survivability and reliability? • What is a minimal dung sample start weight for reliable radiocarbon dating? • Can shavings of a coprolite's outer surface be used to produce a reliable date? Sample Retrieval and Preparation Analyzed coprolites derived from three sites in the Negev desert, Israel: Avdat (Oboda, Abde); Orhan Mor (Moyat Awad) and Nahal Omer (Table 1).The copious dung remains from these sites were variously preserved, often in semi-compacted dung layers or pulverized.We selected only uncharred intact pellets for analysis. Archaeological coprolites from Avdat were retrieved in the 2016 excavation of the Avdat in Late Antiquity Project by Scott Bucking and Tali Erickson-Gini, which yielded hundreds of dung pellets (Bucking 2017;Bucking and Erickson-Gini 2020;Bucking et al. 2022;Erickson-Gini 2022).The particular coprolite assemblage used in this study was preserved by desiccation and comes from a sealed collapse layer dated to the late-medieval, or local Late Islamic period (Table 1, Figure 1) by whole sheep/goat dung pellet (UBA-47071, 418 ± 22 BP, 1σ 1445-1470 cal CE; following acid-only pretreatment).In addition, modern dung pellets collected by the author (D.F.) in 2018 from the ground of Avdat's acropolis were used in the first batch of pretreatment experiments. 
Coprolites from Orhan Mor and Nahal Omer were retrieved by the author (D.F.) in February 2022 during the Negev Camel Caravan Project excavation headed by Guy Bar-Oz and Roy Galili (Galili et al. 2021;Bar-Oz et al. 2022).The Nahal Omer pellets appeared exceptionally preserved by desiccation and derive from two different Early Islamic rubbish middens: Areas A and B of the 2020 excavations.The Orhan Mor coprolites come from a small hillside mixed organic assemblage whose ceramics suggest a 3rd c.CE terminus, or the local Roman period.Unlike the other contexts, however, this one was not well-stratified or sealed, and a later intrusion of dung pellets cannot be ruled out. Over 100 pellets and pellet fragments from these contexts were individually prepared in the Pitt Rivers Laboratory of the McDonald Institute for Archaeological Research at the University of Cambridge.Each pellet/fragment was individually weighed and photographed, and observations of external preservation and color were recorded (Figure 2; Supplementary Table 1).Shaving of the outer surface, including any folds or cracks in contact with the encasing sediment, was performed on some of the fully intact pellets (Figure 3).This was conducted manually with a scalpel and tweezers, and all equipment was sprayed and wiped between pellets with an ammonium-chloride-based laboratory disinfectant.External shavings and the remaining inner part were stored in separate glass vials for each pellet and labeled accordingly. Pretreatment Samples were brought to the 14 CHRONO Centre for Climate, the Environment & Chronology at Queen's University, Belfast, where they were further selected from among the originally intact pellets for pretreatment experiments (Supplementary Table 2).Shaving of the outer surfaces was performed on additional select pellets.Acid-alkali-acid (AAA) pretreatment was selected for Batches 1-3 because the alkali step removes potentially contaminating humic acids whereas acid-only pretreatment removes only carbonates.AAA consisted of the following steps: • Acid -Sample placed in a polypropylene 50-mL test-tube solution of 0.1M HCl.Test-tubes placed in 80°C bath for 20 minutes. Tubes then filled with deionized water, spun, and decanted 3 more times. • Centrifuge and washsame as above.Note: Only one alkali rinse was needed as little color was removed in these samples. • Centrifuge and washsame as above. In Batch 1, AAA pretreatments were conducted on six pairs of modern pellet samples (from OBD-rec-2018), where each pair included the external shavings and the remaining inner part of the pellet (P ex and P in ).Preliminary observations at the alkali stage suggested sufficient survivability to continue using AAA. In Batch 4, three pairs of whole pellets were selected from Orhan Mor and Nahal Omer to compare loss from acid-only against AAA pretreatments.These were performed with the same solutions described above but without hot baths. Data was collected on start weights and end weights after drying for all samples, and qualitative observations of color and fibrousness were additionally considered to predict whether sufficient carbon content remained for radiocarbon measurement by AMS (Supplementary Table 2). 
In order to test the reliability of dates retrieved from the outer surface of dung pellets, we dated eight samples from Batches 2 and 3 consisting of external shavings and the remaining inner part for four pellets (Table 2): As a guide to the labeling system used below note that OBD-2016-L101-B4-P8, for example, refers to dung pellet 8 from Locus 101, Basket 4, of the 2016 Avdat (Oboda) excavations (see also Table 1).OBD-2016-L101-B4-P8-ex refers to that pellet's external shavings whereas OBD-2016-L101-B4-P8-in refers to its inner part (Figure 3). The dried samples were weighed in pre-purified tin capsules and burned in oxygen with helium carrier gas in the element analyzer (Elementar Vario Isotope), then transferred to the AGE3 automated graphitization system, which uses the hydrogen reduction method (Němec et al. 2010).The prepared graphite was compressed into vacuum-cleaned aluminum holders and placed in an AMS magazine.The ratios 14 C/ 12 C and 13 C/ 12 C were measured using accelerator mass spectrometry (AMS) in the Ionplus Mini Carbon Dating System (MICADAS).The sample 14 C/ 12 C ratio was background corrected and normalised to the HOXII standard (SRM 4990C; National Institute of Standards and Technology).The radiocarbon ages were corrected for isotope fractionation using the AMS measured δ 13 C which accounts for both natural and machine fractionation.The radiocarbon age and one standard deviation were calculated using the Libby half-life of 5568 years following the methods of Stuiver and Polach (1977). Outer Shavings Weights of the original intact pellets (P) and of the external shavings (P ex ) of 20 pellets used in this study appear in Table 3.The proportion of the external shavings' weight over whole pellet weight (P ex /P), ranged from 18% to 64% with a mean of 37% and a standard deviation of 11% (n=20).These values varied among sample groups: For all recent and late-medieval pellets from Avdat, P ex /P was under 35% whereas for all Early Islamic pellets from Orhan Mor it was above 35%.Three pellets from Orhan Mor had P ex /P values of above 45% whereas the remaining two from Orhan Mor were 37% and 42%.P ex /P ranged from 30-45% for all Nahal Omer pellets.These results reflect observed sample-specific differences in whole pellet preservation quality.In most of the recent pellets, the outer layer could be peeled off with the scalpel, whereas the Orhan Mor pellets had a tendency to crumble.Pellets from Nahal Omer and late-medieval Avdat were fairly rigid but not as easily shaven as the recent pellets.Figure 4 presents weight loss due to pretreatment by sample (see also Supplementary Tables 3-4).Of 37 pretreated samples, two yielded end weights larger than those of the original sample and were rejected as measuring or recording errors.Of the samples undergoing AAA pretreatment, weight-loss ranged from 58-100%, with a mean of 78% and standard deviation of 13% (n=32).As with the external shavings' relative weights, weight losses due to pretreatment varied by assemblage: For the recent Avdat pellet samples used in Batch 1, weight losses ranged from 58-76%, with a mean of 65% and standard deviation of 6% (n=10; based on 6 inner pellets and 4 associated external shavings).To formulate a working assumption regarding minimum datable sample size, we used a previously AMS-dated late-medieval Avdat dung pellet (UBA-47071) where carbon content measured 60.54% (Table 4).At this carbon content, we estimated a minimum sample weight for radiocarbon dating following pretreatment as 1 mg for 
compatibility with the AGE3 regular sample size setup, but we considered 2-4 mg to be preferable in the event of higher weight loss in pretreatment.In Batch 1, end weights were all well above this threshold (≥ 20 mg) and we therefore continued to experiment with AAA pretreatment in Batches 2 and 3.However, subsequent batches displayed different pretreatment survivability ranges and means, which varied according to site and starting weight. Four late-medieval Avdat pellet sample losses ranged from 63-79%, within the range of the recent Avdat pellets.By contrast, pellets from Orhan Mor undergoing AAA pretreatment had effective total losses in four out of five AAA pretreatments (≥97%, with end weights of 0.001 g or less).In the fifth sample, an 80% loss was recorded, with 16 mg remaining out of the initial 82 mg.However, on inspection with a stereo microscope the surviving material appeared to be almost entirely composed of quartz granules with a couple pieces of microcharcoal, and the sample was deemed non-datable. Nahal Omer pellet samples fared in between the Orhan Mor and Avdat pellets, with a range of 68-95% losses due to AAA pretreatment, a mean of 84% and standard deviation of 8% (n=12). End weights ranged from 4-28 mg for the external shavings (n=7) and 18-51 mg for the inner pellets (n=3).The surviving material was light yellow in color and appeared to be highly fibrous under the stereo microscope. We observed that washing of samples with deionized water after each pretreatment stage accounts for some of these losses, especially among the lighter samples.However, most loss appeared to have occurred at the alkali stage of pretreatment.This suggested that acid-only would yield lower losses.To test this hypothesis, we compared acid-only to AAA pretreatments on pairs of pellets from three different assemblages in Batch 4, one from Orhan Mor and two from Nahal Omer.Each pair consisted of two pellets from the same archaeological locusbasket, where one whole pellet underwent acid-only pretreatment and the other underwent AAA (Figure 5).Comparison of weight losses demonstrates much greater loss under AAA: for the Orhan Mor pair, loss was 87% under AAA compared with 68% under acid-only.For the Nahal Omer pairs, losses were 74% and 68% under AAA compared with only 39% for each of the two acid-only treated pellets. 
A final factor observed to affect pretreatment survivability is starting weight.Pretreatment losses by starting weight appear in Figure 4.All samples with start weights >200 mg exhibited weight losses <70%, while all weight losses >90% derived from samples with start weights <100 mg.To test the reliability of radiocarbon-dating external pellet shavings, pairs of AAA pretreated inner and external pellet were separately dated by AMS from four pellets.None of the Orhan Mor pellet external shavings were deemed datable due to pretreatment weight losses and observations of surviving content.Samples were drawn from each of the four remaining archaeological loci used in this study, including one pellet from late-medieval Avdat and three from Early Islamic Nahal Omer (see Tables 1 and 4).Carbon content ranged from 21.15%-50.27%.In each case, the radiocarbon date obtained from the external shavings closely matched that obtained from the inner part of the same pellet, and all pairs pass the chi-squared test at 95% confidence level (Table 4).Although dated at a preliminary stage using acid-only pretreatment, data for UBA 47071 is presented at the end of Table 4 for comparison with the other late-medieval Avdat pellet (UBA 47567, 47568).Unlike the other samples, weight presented for UBA 47071 refers to its whole pellet weight prior to pretreatment. DISCUSSION Information obtained from each of the three stages of this study offers useful insights for radiocarbon dating of coprolites.This has particular relevance to minimally destructive analysis of sheep/goat dung pellets.We discuss findings from each stage. Outer Shavings Data on relative weights of external shavings suggest that the proportion of pellet weight lost from removing the outer layer to avoid contaminants is on the order of ⅓ to ½ for ancient sheep/goat pellets.This is a significant proportion of the dung pellet which is lost in rigorous coprolite analysis and can probably only be reduced slightly through finer instrumentation.This certainly justifies checking whether such external coprolite shavings can be reliably used for any component of multi-proxy analysis.Differences between samples in the proportion of pellet weight lost from removing the outer layer are related to coprolite preservation quality and might be used as a proxy for general preservation.Qualitatively speaking, we observed that the way a pellet sample performs under handling at this stage may indicate how it will perform in pretreatments and subsequent analyses. Pretreatment Pretreatment weight loss of dung pellet samples was found to be correlated with start weight (Spearman rank correlation ρ = 0.763) and with site (Figure 4).Variation in sample loss according to site and start weight may well be linked to a third common factor, namely, preservation.Preservation in this sense is a qualitative factor based on observable characteristics.Visible traits which we associate with good preservation include a minimum of nicks and dents in the pellet, pellet rigidity, lightness of color, visible fibers and a greater propensity for macroscopic plant remains within the pellet.Poorly preserved pellets are associated with external nicks and dents, a greater propensity to crumble under light pressure, darker hue, few or no visible plant remains, and a sandier rather than fibrous internal structure. 
Using these criteria, the best-preserved study samples were the recent pellets from Avdat, which were also the heaviest and exhibited the lowest losses in these experiments.Hence, we cannot disentangle their start weight and preservation quality as factors affecting percent loss.On the other hand, Orhan Mor pellets were generally lighter than the rest, their preservation was observably poorer with a sandy texture, no visible fiber and a tendency to crumble, and their losses due to pretreatments were higher than samples of comparable start weight from Nahal Omer.These findings support the observation made by Dunseth et al. (2019) that weight may be a proxy for organic preservation in dung pellets.Nevertheless, one way that small starting weight independently contributes to high percentage losses during pretreatment is through the greater suspension of light crushed pellet solids in water during the washing stages, which are poured out.In theory, additional centrifuging or longer settling times for suspended particles could help, but this is usually impractical in a busy radiocarbon lab.Instead, pellet weight and observations of preservation quality such as internal fibrousness, may be used to select pellets for radiocarbon dating. The differences between weight losses under acid-only and AAA pretreatments in Batch 4 demonstrate that the greatest losses resulted from the alkali stage.This indicates the presence of undigested biomolecular compounds such as plant waxes, lipids and proteins as well as potentially some humic acids, despite the dry conditions.We would expect the plant-derived compounds to have been consumed as part of the diet and therefore unlikely to be a concern for radiocarbon dating.However, humic acids can be derived from younger, or occasionally older, organic material in sediments, which can affect radiocarbon measurements.Indeed, the date obtained from an acid-only pretreated whole pellet from late-medieval Avdat (418 BP ± 22) was older by about 100 14 C yrs when compared to the AAA pretreated samples from the same assemblage (Table 4).This suggests the importance of using alkali as part of pretreatments for dung pellets.The effect of either humic acids or biomolecular compounds on stable isotope analysis should be considered.C:N measurements may be useful indicators of the presence of these additional sources of carbon. We found that AMS radiocarbon dating of external pellet shavings yielded essentially the same results as dating the inner part of the same pellet.This is significant because coprolites' outer surfaces are removed and discarded to reduce contamination in other analyses (e.g., pollen, phytolith, sedimentary and biomolecular analyses) because the extraction process would not remove the contaminating material.Our findings show that external pellet shavings may be reliably used for radiocarbon dating, at least for some assemblages, as most contamination would be removed by the AAA pretreatment. 
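The external-versus-internal date pairs discussed next were judged statistically equivalent by a chi-squared test at the 95% confidence level (Table 4). The paper does not spell out the test; the standard form in radiocarbon work is the Ward and Wilson (1978) T statistic, which we assume here. A minimal sketch, with illustrative values rather than the measured dates:

# Consistency test for paired radiocarbon determinations (Ward & Wilson 1978
# style T statistic; assumed form of the test, illustrative inputs only).
def consistency_test(dates):
    """dates: list of (age_BP, one_sigma). Returns (T, dof, pooled_mean)."""
    weights = [1.0 / s**2 for _, s in dates]
    pooled = sum(w * a for (a, _), w in zip(dates, weights)) / sum(weights)
    t_stat = sum((a - pooled)**2 / s**2 for a, s in dates)
    return t_stat, len(dates) - 1, pooled

if __name__ == "__main__":
    # hypothetical external-shavings vs inner-pellet pair
    t, dof, mean = consistency_test([(1310, 25), (1295, 24)])
    # for dof = 1 the 95% critical value of chi-squared is 3.84
    print(f"T = {t:.2f}, dof = {dof}, pooled mean = {mean:.0f} BP",
          "-> consistent" if t < 3.84 else "-> inconsistent")

For a single pair of dates the criterion reduces to (a1 - a2)^2 / (sigma1^2 + sigma2^2) < 3.84.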
By capitalizing on this otherwise useless coprolite component, reliable radiocarbon dating can be performed without sacrificing material used in other analyses, presenting a new addition to multi-proxy coprolite analysis workflows. Future research on minimally destructive coprolite dating could investigate the taphonomic mechanisms underlying carbon preservation in dung pellets, in concert with soil chemistry and micromorphology. First, if reliable radiocarbon measurements can be obtained from external shavings, can other isotopic measurements be reliably obtained from the same source? Second, more experiments could be performed to quantify the significance of humic acids and the differences between coprolite radiocarbon dates after acid-only versus AAA pretreatments. Meanwhile, our success in dating external shavings which underwent AAA pretreatment suggests a practical yet ideal protocol in which rigorous pretreatment is applied in the radiocarbon dating of an otherwise useless coprolite component. The main thing to test in the future is its applicability to a wider variety of samples and contexts.

CONCLUSIONS

This study enabled us to answer the following research questions concerning minimally destructive radiocarbon dating of sheep/goat dung pellets, based on samples from archaeological sites in the Negev desert:

Which sample-specific factors are related to pretreatment losses and survivability? Site and start weight were correlated with weight-based pretreatment survivability. In addition, observable preservation features (including external surface, rigidity, color, fibers and other macroscopic plant remains within) appear to be correlated with survivability.

Which pretreatment (AAA or acid-only) best balances survivability and reliability? Our results demonstrate that AAA can be used as a pretreatment for sheep/goat dung pellets, above a certain minimal start weight. Acid-only pretreatment is less destructive but should only be used after humic acid contamination is ruled out.

What is a minimal dung sample start weight for reliable radiocarbon dating? Our results suggest that an initial weight of 100 mg is a desirable minimum threshold for dating samples of sheep/goat dung from the Negev desert.

Can outer shavings of dung pellets be used to produce a reliable date? Yes. Our findings indicate that it is just as reliable to date the external shavings of a dung pellet as it is to date the remaining inner pellet. This suggests an important addition to multi-proxy coprolite analysis workflows.

Figure 4: Loss from pretreatment by start weight.
Table 1: Sample sites and contexts.
Table 2: Samples dated from Batches 2 and 3.
Table 3: Weights of whole pellets and external shavings.
Table 4: AMS dates from dung pellets and chi-squared test for external-internal pellet pairs.
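For reference, the conventional-age calculation cited in the AMS methods above (Libby half-life of 5568 years, following Stuiver and Polach 1977) amounts to the following. This is a minimal sketch with illustrative inputs, assuming the input is the background-corrected sample/standard activity ratio before delta-13C normalisation; none of the values are measurements from this study.

# Sketch of the conventional radiocarbon age calculation (Stuiver & Polach 1977
# conventions assumed; illustrative inputs only).
import math

LIBBY_MEAN_LIFE = 8033.0   # yr, = 5568 / ln(2)

def normalise_for_fractionation(activity_ratio, delta13c):
    """Normalise the sample activity ratio to delta13C = -25 permil."""
    return activity_ratio * (1.0 - 2.0 * (25.0 + delta13c) / 1000.0)

def conventional_age(activity_ratio, delta13c):
    """Conventional 14C age in years BP."""
    f_norm = normalise_for_fractionation(activity_ratio, delta13c)
    return -LIBBY_MEAN_LIFE * math.log(f_norm)

if __name__ == "__main__":
    print(round(conventional_age(activity_ratio=0.949, delta13c=-27.0)))  # about 388 BP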
Overview of Artificial Intelligence Research Within Hip and Knee Arthroplasty Hip and knee arthroplasty are high-volume procedures undergoing rapid growth. The large volume of procedures generates a vast amount of data available for next-generation analytics. Techniques in the field of artificial intelligence (AI) can assist in large-scale pattern recognition and lead to clinical insights. AI methodologies have become more prevalent in orthopaedic research. This review will first describe an overview of AI in the medical field, followed by a description of the 3 arthroplasty research areas in which AI is commonly used (risk modeling, automated radiographic measurements, arthroplasty registry construction). Finally, we will discuss the next frontier of AI research focusing on model deployment and uncertainty quantification. Introduction Total hip arthroplasty (THA) and total knee arthroplasty (TKA), which we will refer to as "arthroplasty" for the rest of this paper, are high-volume procedures that have witnessed remarkable advancements and growth.In the U.S. alone, there are over 7 million individuals with total hip or total knee replacements [1,2].The national arthroplasty volume increases annually, and this trajectory is expected to continue for the foreseeable future [3,4].In addition to the growing case volume, there has also been an increase in the availability of arthroplasty data due to the widespread adoption of electronic medical record systems and clinical orthopaedic registries, which provides an opportunity for population-scale research [5,6].Analyzing the large volume of data produced annually poses a significant hurdle to traditional methods of data analysis that require extensive human manual effort.Fortunately, the gradual adoption of techniques from the field of artificial intelligence (AI) in medicine provides a new opportunity to leverage these data resources. AI is a broad subfield of computer science that deals with technologies that are capable of mimicking human cognitive functions.While AI is not a new field, its use in orthopaedics has grown significantly during the last decade [7].AI algorithms can be trained to efficiently extract meaningful insights from large datasets, which may enable more personalized treatment decisions, optimized surgical techniques, enhanced postoperative care, and improved patient outcomes in total joint arthroplasty.This article briefly describes AI in the context of arthroplasty, followed by a summary of the major AI-related research topics within arthroplasty.We will end by discussing the future direction of AI in arthroplasty. What is artificial intelligence? 
AI encompasses a range of computer systems that learn to emulate human intelligence.Commonly used phrases "machine learning" (ML) and "deep learning" (DL) are actually subfields of AI (Fig 1).ML is the application of statistical models that learn patterns in datasets and can provide predictions on unseen data.Many popular algorithms involved in regression, classification, and clustering can be thought of as ML models.Within ML, DL is an increasingly popular technique that relies on the use of a specific ML algorithm and artificial neural networks with many layers [8].Because DL algorithms can easily process unstructured data (like images and clinical report text) and because they learn for themselves what features of the data are important, they are ideal for addressing the common challenges encountered in the medical field.Which ML/DL solution is most appropriate is inextricably tied to the characteristics of the dataset being analyzed. Data can be either structured or unstructured.One example of structured data is a spreadsheet, where each row represents an instance and each column is a variable or feature associated with the instances.For example, the rows in a dataset may represent patients, with lab test results occurring in the columns.While a table is structured data at a basic level, complex relations between instance types can be modeled using relational database technologies such as Structured Query Language.Many statistical and ML techniques naturally lend themselves to structured data. The vast majority of medical data is unstructured.Unstructured data cannot be easily formatted into rows and columns.Clinical notes, operative reports, and medical images are all examples of data that are unstructured.Entire subfields of AI have grown out of the need to develop specialized ways of analyzing unstructured data (Fig 1).Natural language processing (NLP) is a subfield of AI that involves the development of algorithms that can interpret and generate human language in a meaningful way [9,10].Likewise, computer vision refers to the field of computer science that focuses on enabling computers to interpret visual information from images or videos.In the context of arthroplasty research, computer vision plays a crucial role in analyzing medical images, such as radiographs, computed tomography scans, or magnetic resonance imaging scans, to assist in diagnosis and surgical planning.DL is especially powerful when used for the analysis of unstructured data because deep neural networks simultaneously learn a task and learn which features are most important to that task. The following 3 sections will focus on major arthroplasty research topics. Risk modeling The use of AI in risk modeling refers to the analysis of patient data to make predictions that can then assist with clinical decisionmaking.While clinical prediction models have always been a fundamental part of biomedical research, the concomitant rise in computing power and large datasets has catalyzed the use of AI techniques in prediction models [11,12].In this section, we will focus on complication and outcome prediction on the topic of risk modeling. Using the enormous quantity of patient data now available, DL/ ML models can predict the likelihood of complications such as infections, implant-related issues, and other adverse events with high accuracy.Yeo et al. 
used structured clinical data points to develop ML models that predict surgical site infections following a primary TKA [13].While the most important features in that model were mostly previously identified factors for infection, the ML algorithm employed allowed for much higher accuracy than traditional analytic methods.Wyles et al. created an algorithm able to show how the risk of periprosthetic fracture changes in a specific patient based on surgical factors such as uncemented femoral fixation, collarless femoral implants, and surgical approach [14].Likewise, Jo et al. developed a model that used patient demographics, labs, and history to predict transfusion requirements after a TKA.This model also demonstrated good predictive performance when applied to patient data from an external institution [15].In many cases, ML models learn a data distribution from a single institution too well and fail to generalize to data elsewhere.There are many more examples of risk modeling for TKA and THA complications including prolonged opioid abuse [16], delirium [17], and acute kidney injury [18]. ML/DL models can also integrate data of multiple types.Wyles et al. created a model capable of incorporating the possible dislocation risk modification based on surgeon decisions such as the use of dual-mobility constructs and elevated liners [19].In a subsequent publication, Khosravi et al. demonstrated that adding embedding data (the abstract features a DL model learns during training) from a radiograph to that model improved its performance [20].Further, this algorithm was designed to show the patient-specific risk in addition to the degree of risk modification achievable with surgical decisions, thus yielding actionable tools for surgeons. Similar to complication prediction, outcome prediction models forecast patient outcomes after surgery, including pain levels, functional improvements, and overall satisfaction with the procedure.In a study using Medicare data, the investigators developed an algorithm to predict postoperative outcomes, which was then compared to 3 of the most commonly used risk adjustment indices [21].The novel Complexity Score had the highest accuracy in predicting perioperative morbidity.Harris et al. developed a model to predict the Activities of Daily Living, Pain, Symptoms, and Quality of Life subscales of the Knee Injury and Osteoarthritis Outcome Score following a TKA [22].Fontana et al. used presurgical registry data to train 4 different models to predict which patients would not achieve 2-year postsurgical minimally clinically important differences following total joint arthroplasty with fair-to-good ability [23]. 
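As a schematic illustration of the structured-data risk models described above (and of the feature-importance explanations discussed next), the sketch below trains a gradient-boosted classifier on synthetic stand-in data. It is not a reproduction of any of the published models or their variables.

# Illustrative risk-model sketch on synthetic data (not the published models).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# synthetic stand-ins for preoperative variables (age, BMI, labs, comorbidities, ...)
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           weights=[0.95, 0.05], random_state=0)  # rare complication
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]          # patient-specific risk estimates
print(f"AUC = {roc_auc_score(y_test, risk):.2f}")

# a simple model-agnostic explanation: permutation feature importance
imp = permutation_importance(model, X_test, y_test, n_repeats=5, random_state=0)
print("most influential feature index:", imp.importances_mean.argmax())

Calibration and external validation, emphasized throughout this review, would of course be required before any such model approached clinical use.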
In comparison with traditional statistical methods, ML/DL models typically achieve higher predictive performance at the expense of explainability.This "Black Box Phenomenon" represents a key challenge that differentiates AI-based calculators from other calculation tools.Several explainable AI techniques, from feature importance metrics to saliency maps and uncertainty quantification, can help end users of an algorithm feel more confident about how a model is making its prediction.Which method is most appropriate is specific to the ML/DL model employed.Additionally, some may be concerned about the impact of online, publicly available calculators and how they could confuse rather than help patients.Clear explanations of how individual factors impact a risk score will help with popular usage as well.The advent of a trusted and accurate calculator for patient-specific postoperative complications and outcomes offers an additional tool for clinicians in order to continue to improve shared decision-making between surgeons and patients. Automated radiographic analysis Medical providers in orthopaedic surgery rely on imaging to diagnose pathology, plan treatment, and monitor outcomes.It is perhaps the most important data source in arthroplasty.Clinicians take measurements using imaging to assess either the degree of anatomic abnormality in a patient or to determine the position of implanted components, for example.In the context of revision surgery, accurate identification of existing implants' manufacturer and model is paramount.While this information plays a significant role in providing optimal patient care, the measurements themselves can be tedious, difficult to perform, and potentially subject to significant interrater variability.Computer vision algorithms can automate radiographic measurements and improve measurement robustness. AI in radiology is a huge topic, spawning numerous journals and conferences in recent years.In arthroplasty research, AI studies have focused mostly on automating measurements and extracting semantic information (radiological findings) [24][25][26][27].Rouzrokh et al. utilized computer vision techniques to measure femoral component subsidence between 2 serial anteroposterior radiographs; the median difference between the independent orthopaedic surgeon reviewer and automated measurements was 0.3 mm [28].Another study by the same group presented an algorithm that calculates acetabular inclination and version with similarly high performance [27].Evaluation of these algorithms is crucial; investigators must show that the performance of their algorithm meets or exceeds the performance of a human annotator. AI techniques can also automate the extraction of semantic information from the image itself.Stotter et al. demonstrated that an AI algorithm ranked better than at least one manual reader for the majority of outcome measures when measuring radiological parameters that identify femoroacetabular impingement and hip dysplasia [29].A particularly exciting use for DL/ML models is to extract information that would be difficult for an expert to ascertain.For example, while an experienced surgeon may be able to identify several models of hip arthroplasty implants, several recent published studies have trained models to identify a wide variety of implants with near-perfect accuracy [30]. Automated radiographic measurements can greatly increase the efficiency and generalizability of treatment planning for arthroplasty.Lambrechts et al. 
demonstrated a 39.7% reduction in the number of corrections the surgeon had to make from an AIgenerated preoperative plan compared to the manufacturer's default plan [31].Following the current process, THA surgeons often template based on personal experience resulting in different outcomes based on experience level [32].A universally accepted algorithm for templating may eliminate some of this variability, and the ability to create preoperative plans within seconds would save surgeons time [29,33].Generative AI may also help with visualization of postoperative hips [34]. Of course, employing AI in the analysis of radiographs presents several potential challenges.One primary concern lies in the diversity of image collection methodologies across various hospitals or even between different imaging personnel based in the same hospital.It is conceivable that differences in positioning can significantly impact measurements on planar images.However, uniform imaging methodologies are necessary even without the application of AI to radiograph analysis.Human readers will suffer from the same errors as AI when faced with radiographs taken using different techniques or of patients in different positions.Developers of AI algorithms can combat this variability with diverse data sources, best practices to avoid data leakage during training, and robust external validation.An algorithm is only as good as its training data, so proper oversight and training of the annotators curating the training data is also important.Finally, these AI algorithms allow for the extraction of radiographic information for further research on a volume of images not previously possible.In datasets of that size, it is not possible to validate each measurement or data point extracted by the algorithm.Again, robust validation can help improve the trustworthiness of these algorithms. Moving on, we will now discuss the development of large registries, an extremely important task required for robust orthopaedic research, which can be expedited by automated annotation of radiographic images and other applications of AI. Arthroplasty registry construction Large-scale clinical registries have long been an important source of data for orthopaedic research, where clinical trials are especially expensive and difficult.Institutional and national registries provide an invaluable resource for researchers.Registries that rely on manual abstraction of data points are expensive, while registries that only use data coded into electronic health record (EHR) fields are usually shallow or incomplete.In the most recent American Joint Replacement Registry update, only 10% of procedures had a surgical approach reported.Automated methods of data curation from images and medical records could help bridge the gap from depth to completeness. Most data in the EHR is unstructured text data, which requires specific analytic techniques in a field called NLP.NLP is a broad field that uses a wide range of techniques, from traditional statistical models adapted to analyze unstructured text to highly advanced DL models.When applied to medical research, NLP techniques can analyze text found in EHRs and subsequently use that for registry input [35][36][37][38][39]. Wyles et al. 
published a proof-of-concept of the NLP technology and utilized it to identify common elements described by surgeons in operative THA.The NLP algorithm extracted the operative approach, fixation technique, and bearing surface with accuracies of 99.2%, 90.7%, and 95.8%, respectively, mimicking the performance of human annotators with much higher efficiency [35].NLP decreases the necessity of specialized, highly trained medical professionals to extract the data.By removing the laborintensive part of the project, NLP allows the information to be collected expeditiously and cost-effectively. Quite recently, the field of NLP has experienced a renaissance with the advent of increasingly sophisticated large language models (LLMs).LLMs are gigantic DL models (most have billions of parameters) that generate text after receiving some input text (or images) as a prompt.ChatGPT, GPT-4, LLaMa, and Gemini are all recent examples of LLMs and offer unique promise for the efficient extraction of free text data and also for the novel generation of data summaries [40].While well-known for their human-like response capabilities, LLMs have shown remarkable success in completing medical exams [41], summarizing radiology reports [42], and plenty of other tasks that had previously employed NLP techniques.Their main function is to understand and generate natural language that can be applied to tasks such as summarization, translation, and question-answering [43].LLMs may soon supplant NLP algorithms in automating registry curation. As previously mentioned, AI has the potential to aid in radiographic annotation and measurements, which in turn can be leveraged for imaging data extraction for registry establishment.Rouzrokh et al. trained a DL model to efficiently annotate and categorize the view, laterality, and operative status of THA patient radiographs.The algorithm demonstrated impressive results, achieving 99.9% accuracy, 99.6% precision, 99.5% recall, and a 99.6% F1 score in assessing radiographic characteristics [44].A fully automated registry could rely on NLP to extract valuable data from clinical text and computer vision to automate data extraction from medical images, collectively improving registry accuracy and efficiency. What is next? Over the next decade, the field of AI in arthroplasty will continue to expand and change.We believe the next wave of AI research will focus on 3 themes: 1) clinical implementation of algorithms; 2) AI trustworthiness; and 3) increased utilization of generative AI including LLMs [45][46][47]. 
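As a rough illustration of the kind of rule-based information extraction just described, the Python sketch below pulls a surgical approach, fixation technique, and bearing surface out of a free-text operative note using simple keyword patterns. The patterns and the example note are invented for this illustration and bear no relation to the Wyles et al. algorithm or to any deployed NLP pipeline, which are far more sophisticated; the sketch only shows the shape of the task.

```python
import re

# Hypothetical keyword dictionaries; real clinical NLP systems use much richer
# models. Each element maps a label to a regular expression searched in the note.
PATTERNS = {
    "approach": {
        "posterior": r"\bposterior(?:\s+approach)?\b",
        "direct anterior": r"\b(?:direct\s+)?anterior approach\b",
        "lateral": r"\b(?:direct\s+)?lateral approach\b",
    },
    "fixation": {
        "cemented": r"\bcemented\b",
        "uncemented": r"\b(?:uncemented|cementless|press[- ]fit)\b",
    },
    "bearing": {
        "ceramic-on-polyethylene": r"\bceramic (?:head|liner)?.*polyethylene\b",
        "metal-on-polyethylene": r"\bcobalt[- ]chrome head.*polyethylene\b",
    },
}

def extract_elements(note: str) -> dict:
    """Return the first matching label for each data element, or None."""
    note_lower = note.lower()
    results = {}
    for element, options in PATTERNS.items():
        results[element] = None
        for label, pattern in options.items():
            if re.search(pattern, note_lower):
                results[element] = label
                break
    return results

example_note = ("THA performed through a posterior approach. "
                "A cementless acetabular shell was press-fit; "
                "ceramic head articulating with a highly cross-linked polyethylene liner.")
print(extract_elements(example_note))
# {'approach': 'posterior', 'fixation': 'uncemented', 'bearing': 'ceramic-on-polyethylene'}
```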
Of the thousands of AI algorithms published in biomedical research each year, it is likely that very few will be integrated into the clinic and impact patient care. This is a complicated issue, but we see a few obvious reasons. The first, as we have discussed above, is that AI algorithms rarely generalize well outside of the data used to train them. This means each algorithm, unless trained and validated on a broad swath of data, only performs well at the institution that trained it. The second reason is that there is a lot of infrastructure necessary to transform a model on a researcher's computer into something that can be integrated into a clinical system and then monitored and updated. This process is the focus of a field called MLOps, well known to technology companies but still somewhat new to healthcare organizations. Any MLOps processes need to be closely paired with implementation studies. Successful implementation of the model is not the final step; it is essential to validate and intermittently improve the model by adding new training data [48]. A third reason that makes implementation difficult is the complex legal and regulatory issues related to the clinical use of an algorithm [49]. Ultimately, the final responsibility for the patient's health rests with the attending physician; for clinicians to regularly use AI algorithms, they need to trust them.

One of the current problems with AI approaches is the lack of explanation for the output of the models, commonly called the "black-box" phenomenon [50]. Without using the techniques of explainable AI, it is difficult to comprehend how AI arrives at its outputs or predictions, raising concerns about accountability and potential biases embedded within its operations [51]. One way to help providers appropriately use the output of models is to know how certain the model is about its prediction (uncertainty quantification) or to include information on what factors helped the algorithm reach its prediction (feature importance).

Explainable AI involves assessing i) the variability in scientific models and ii) the way the algorithm uses input features to make a prediction. There are a variety of techniques for adding explainability to an algorithm that analyze input parameters, model assumptions, and measurement errors [56-58]. For example, Rouzrokh et al. added conformal prediction to an AI model trained to identify THA implants, thus providing the ability to quantify prediction uncertainty and flag outlier test data, both essential for clinical trustworthiness [30,34]. By comprehensively characterizing model uncertainties, researchers can enhance the accuracy and reliability of their models, ultimately leading to improved AI trustworthiness. Furthermore, quantifying uncertainty can identify areas that necessitate further research to reduce uncertainties and enhance our understanding of arthroplasty approaches.

Another rapidly evolving field in AI is generative AI. This branch of AI focuses on the novel synthesis of content rather than analyzing existing data. The newly created data can come in the form of images, text, audio, and other mediums. This area of computer science is rapidly advancing in medicine; Epic recently announced a collaboration with Microsoft Corp.
to "develop and integrate generative AI into healthcare" [52].LLMs, a type of DL previously discussed in this manuscript, are a type of generative AI.In the field of arthroplasty, generative AI could potentially be used for data augmentation and synthesis, custom implant design, and surgical simulation.LLMs hold promise to accurately summarize extensive patient data and research publications to aid physicians in informed decisions [53].The translation function could be applied to not just language barriers but also the jargon-dense medical text within EHRs that sometimes challenges patients [54].The question-answering function could relieve providers of the often tedious task of answering simple questions patients send via the online messaging function within most EHR systems.Additionally, a chatbox-like function could also be implemented in other areas of AI to attempt to add transparency to existing algorithms [50]. Conclusions An increase in the volume of arthroplasty procedures and the data produced have opened the door to new research opportunities.AI techniques are a powerful way to analyze these new data streams.This article has surveyed several major research areas of AI within arthroplasty: risk modeling, automated radiographic analysis, and automated registry curation.These themes are both mechanistic and infrastructural.In the coming years, we expect some of the major themes of future AI research in medicine to include 1) implementation science, 2) explainable AI, and 3) generative AI.Despite already having a profound effect on the research landscape, we expect that the largest changes to the arthroplasty community will occur with the migration of AI technologies to the clinic. CRediT authorship contribution statement John P. Mickley: Writing e original draft, Methodology, Investigation, Conceptualization, Writing e review & editing.Elizabeth S. Kaji: Investigation, Writing e original draft, Writing e review & editing.Bardia Khosravi: Conceptualization, Supervision, Writing e original draft, Writing e review & editing.Kellen L. Mulford: Investigation, Methodology, Project administration, Supervision, Writing e original draft, Writing e review & editing.Michael J. Taunton: Supervision, Writing e original draft, Writing e review & editing.Cody C. Wyles: Conceptualization, Supervision, Writing e original draft, Writing e review & editing.
2024-06-29T15:26:13.006Z
2024-06-01T00:00:00.000
{ "year": 2024, "sha1": "6de3162614b0d34c0bf583cd15bb7a53bba7172d", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.artd.2024.101396", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b9055236ee91d5de62aca3e8e43cca08f9889165", "s2fieldsofstudy": [ "Medicine", "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
210781320
pes2o/s2orc
v3-fos-license
Participatory evaluation of sorghum technologies in the marginal dryland zones of Wag-lasta, Ethiopia

Although dryland areas are diverse in agro-ecology, technological recommendations are summative and based on researchers' on-station genetic traits, which disregards farmers' preferences as well as economic and technical efficiencies. A participatory sorghum technology evaluation was thus initiated to compare dryland sorghum technologies (Misikir and Girana-1) against the local practice on a wider scale, comprising 450 farmers from marginal districts of Wag-lasta. The agronomic, economic and preference data were collected and analysed using descriptive statistics, ANOVA, partial budgeting and a weighted ranking matrix. The combined result indicated that Misikir, Girana-1 and the local sorghum provided mean grain yields of 2.9, 1.6 and 1.5 ton ha−1, respectively. Accordingly, Misikir technology has yield advantages of 81.3% and 93.3% over Girana-1 and the local variety, respectively. The marginal rate of return (MRR) of Misikir is 477.6%, but it is not promising for Girana-1 because its cost is higher than that of the local practice. The weighted ranking matrix also shows that Misikir was preferred for its earliness, seed setting performance, and grain and biomass yields. Dissemination of Misikir technology is thus safely recommended. The findings further revealed that technological recommendations based on on-station plot trials dwarf adoption rates, since farmers hesitate to take up technologies they have not evaluated in their local context. The paper concludes that scale-wide farmers' participation is vital in sustainable technology development and diffusion.

Ademe Mihiretu

ABOUT THE AUTHOR Ademe Mihiretu is a citizen of Ethiopia. He is a full-time researcher in Socioeconomics and Agricultural Extension, Sekota Dryland Agricultural Research Center. He holds a BA degree in Development and Environmental Management Studies (Honours) from the University of Gondar in 2009, and an MSc in Rural Development from Haramaya University with great distinction and Excellent (A) accreditation to his thesis, in December 2017. Ademe has practical experience and solid meticulous talent in Socioeconomic and Agricultural Extension research. He has published research articles in reputable journals (e.g. Taylor and Francis) and in ARARI's proceeding scheme. He also reviewed scientific articles (e.g. Cogent Food and Agriculture) and regional agricultural extension proceedings.

PUBLIC INTEREST STATEMENT In dryland areas, technological recommendations are provided for wider production domains using simply plot-level on-station genetic traits, which disregards farmers' preferences as well as economic and technical efficiencies. This in turn dwarfs technological adoption in the marginal dryland zones worldwide. These areas contain more than seven agro-ecologies and thus entail diverse production potential for different sorghum technologies. Therefore, implementing a participatory approach to evaluate the technologies best suited to the local context is an important opportunity for farmers to enhance demand-driven technology adoption, which ultimately increases production and productivity, on top of providing feedback to the agricultural scientists. Besides, scale-wide farmers' participation in on-farm experiments is vital to sustainable technological development and diffusion as it approximates the real farm circumstances.

Mihiretu et al., Cogent Food & Agriculture (2019), 5: 1671114 https://doi.org/10.1080/23311932.2019.1671114 © 2019 The Author(s).
This open access article is distributed under a Creative Commons Attribution (CC-BY) 4.0 license. Received: 07 July 2019. Accepted: 19 September 2019. First Published: 04 October 2019. *Corresponding author: Ademe Mihiretu, Socioeconomic and Agricultural Extension, Sekota Dry-land Agricultural Research Center, Sekota, Ethiopia. E-mail: ademe_78@yahoo.com.sg. Reviewing editor: Fatih Yildiz, Food Engineering and Biotechnology, Middle East Technical University, Turkey. Additional information is available at the end of the article.

Introduction

Sorghum (Sorghum bicolor) grows over a wide range of latitudes, from 0° to 45° North and South of the equator (International Crop Research Institute for the Semi-arid Tropics [ICRISAT], 1991). Because of its drought resistance, sorghum is the crop par excellence for dry regions and areas with unreliable rainfall. It is one of the important indigenous food crops and is second only to tef among cereals in the dryland areas of Ethiopia (Geremew et al., 2004). As sorghum is grown under a wide range of environmental conditions, the range of both biotic and abiotic production constraints is diverse, resulting in very poor performance under farmers' circumstances (Asmiro, Ademe, Lijalem, & Tsega, 2016). The average national yield of 2.4 ton ha−1 is by far very low compared to the 3-6 ton ha−1 achieved using improved sorghum technologies (Central Statistical Agency [CSA], 2015). In the northeast Ethiopian dryland zones, where sorghum is the major food crop, its average productivity ranges from 1.5 ton ha−1 down to zero in severe moisture-deficit seasons, which is by far lower than the national average (Central Statistical Agency [CSA], 2017). Because of the low amount, uneven distribution and erratic nature of the rainfall, on top of few improved technologies fitting the diverse growing conditions and low utilization rates, crop production is seriously affected in these areas (Ademe, Asmiro, Lijalem, & Tsega, 2018). The major constraints of farmers in using improved technologies are weak linkages between farmers and research as well as limited use of extension and research results. One can still witness the persistence of subsistence agriculture within an ever more dynamic and competitive environment. This entails the risk of the existence of wider gaps between the performance of research and farm averages (Amede, Assefa, & Stroud, 2004). With the aim of addressing these problems, several dryland sorghum technologies with special merits were developed and released nationally.
Sekota Dry Land Agricultural Research Centre has been undertaking adaptation trials on these technologies in Wag-lasta marginal areas and released two improved varieties (Misikir and Girana-1) with their production package for wider production domain (Asmiro et al., 2016). However, the trial was on-station at plot level and the evaluation was designed and managed by researchers merely. The recommendation was provided purely on the bases of biological standpoints, devoid of farmers' involvement, economic efficiency, societal acceptability, and technological applicability. These factors are of course, the building blocks for wider technology diffusion in general and demand-driven knowledge tracking in particular. Over 70% of new technologies developed for marginal dryland areas have failed to take root among farmers and remain confined to research stations. Further intensification of extension services has not also shown great promise in improving the situation (Sanghi, 1987). An increasing number of scientists, therefore, have recognized that there is a need to modify research methodologies in order to make it more sensitive to local conditions (Ashby, 1986). In recent years, agricultural researchers started working with low-income farmers in hard environmental conditions. These environments tend to be highly fragile, characterized by combinations of low and unreliable rainfall, poor and easily degradable soils, hilly topographies, lack of economic and social infrastructures (Kojo, 1956). As a result, technologies generated on research stations have not performed well under farm conditions and have not been widely adopted (Matlon, 1984). According to Ogwal-Kasimiro, Wakulira, Kiyini, Mwebaze, and Yiga (2012), to circumvent these problems and to achieve better results, responsive adaptive research trials should be established with actively participating farmers in the technology development process. Moisture stressed areas in Ethiopia are grouped into seven agro-ecologies and covers 66% of the total area of the country entails diverse potential to different sorghum technologies (Geremew et al., 2004). Efforts to overcome these problems have led to the development of methods of collaborating with farmers to understand local-level farm conditions and strategies, and the processes through which small-scale cultivators adopt and adapt new agricultural technologies (Fernandez, 1988). This collaboration is termed as participatory technology development and its aim is to minimize the researchers' bias and maximize the input of the farmer, based on their concerns (Sollows, 2012). It is also argued that large scale on-farm trials should be simple and flexible. They should be farmer managed to approximate farm conditions. Researchers should concentrate on monitoring and measuring the experiments and refrain from imposing controls on the trials (Siriwardena, 1988). With simple trial design, a large number of farmers can participate and inter-farm variations be noted. These activities need to be synchronized with the activities of the farmer and not developed for the convenience of the researcher. The most important aspect of on-farm research is seen as building the relationship with the farmer (Kleene, 1984). This study is, therefore, designed to provide farmers with a menu of sorghum technologies then to select and demonstrate economically feasible, technically applicable and socially acceptable technologies in a participatory approach in the marginal drylands of Wag-lasta, Ethiopia. 
Therefore, findings from this participatory evaluation would provide information to the universal dryland areas about the importance of considering farmers' participation, wider coverages as well as the economic and technical efficiency traits in technology development. Because these traits are the building blocks for farmers in identifying their favourite technologies to enhance demanddriven technology adoption, ultimately increases production and productivity. Description of the study area The study was conducted at three districts (viz., Ziquala, Abergele and Lalibela) in Northeastern marginal drylands of Ethiopia for three consecutive production years (2015/16-2018/19). More specifically, Ziquala district is geographically located at 12°48ʹN and 38°47ʹE latitude and longitude, respectively (Asmiro et al., 2016). The district has an altitude of 1462 masl. Its annual average rainfall and temperature are 255 mm and 22°C, respectively (Dereje, 2004). Abergelle district is also located at 13°20ʹN and 38°58ʹE latitude and longitude, respectively. Its altitude ranges from 1150 to 2500 masl, with the annual mean temperature and rainfall of 23-43°C and 250-750 mm, displaying semi-arid nature of the district (Ademe et al., 2018). Lalibela district is located at 12°55ʹ559ʹ'N latitude and 38°4 2ʹ293ʹ'E longitude at an altitude of 2400 masl having mean annual temperature and rainfall of 26.2°C and 895.2 mm, respectively (Woreda Office of Agricultural Development [WoA, 2015). The districts' rainfall is unimodal, short and erratic that extends not more than 2 months per year, usually from the end of June to the end of August. Hence, the districts' crop production usually fails due to low soil fertility and high moisture stress, almost every year (Ademe et al., 2018). Sampling, experimental design and farmers' participation Two-stage sampling technique was employed to select the participant farmers. In the first stage, three districts were purposively picked to denote sorghum production areas in marginal drylands of northeastern Ethiopia. In the second stage, 50 farmers who had 0.25 ha clustered farmland on average were randomly selected from each district to host the experiment. In combination, 450 (75 female) farmers in the 3 years were involved for the scale wide participatory on-farm evaluation. Host farmers in each district were organized into farmers' research and extension groups (FREGs) in order to ease the participatory evaluation. The group members were selected with the consultation of local agricultural experts and key informants, conversant to the areas to represent different social segments (to have diverse spectrum of age, sex and wealth status). Each group had a chairman and secretary to facilitate FREG tasks in collaboration with researchers and extension workers. A timely action plan and meeting schedule were set out by the group members to evaluate the technologies following the main physiological growth stages. Before launching the experiment, researchers organized operational platform to create awareness and to identify responsible stakeholders in the experiment. Then, memorandum of understanding (MoU) was signed between stakeholders to part duties in the whole trial course. Farmers thus provided training on basic agronomic practices and technology packages for 2 days comprising the theoretical and practical components. Training provision, technical backstopping and data collection were managed by the researchers while farmers undertook the experiment (Table 1). 
The experiment was conducted using two improved sorghum technologies (Girana-1 and Misikir) and, adjacent to them, the local cultivar under the traditional production practice for comparison. The treatments were laid out in three un-replicated simple blocks, considering farmers as replications. The improved technologies were planted in rows at a seed rate of 10 kg ha−1, along with 100 and 50 kg ha−1 of diammonium phosphate (DAP) and urea, respectively. All the required management practices were done as per the recommendation. The study was carried out for three consecutive years in order to minimize the risk of seasonal variation as well as to increase farmers' confidence in the provided technologies. Finally, field days were held involving the concerned stakeholders to evaluate and endorse the performance of the different technologies to the wider community.

Data collection and analysis

Both quantitative and qualitative data were collected from farmers using a checklist as well as focus group discussions. The quantitative data (days to maturity, grain and biomass yield) were collected at the plot level. The qualitative data (socioeconomic parameters: profitability and acceptability) were also collected. On the other hand, secondary data were collected from different published and unpublished (working reports from districts) sources to triangulate and support the quantitative results. The collected data were analysed using descriptive statistics such as mean, frequency and percentage points. Besides, different methods as suggested by Yadav, Kamboj, and Garg (2004) were used to analyse the technology gaps, extension gaps and the technology index among treatments using the following formulas:

Technology gap = Improved yield − Farmers' yield (1)

Extension gap = Potential yield − Improved yield (2)

Technology index (%) = (Technology gap / Potential yield) × 100 (3)

Data from the treatments (Misikir, Girana-1 and Local) were subjected twice to the analysis of variance (ANOVA) followed by Tukey's honestly significant difference (HSD) test (SPSS, 2007). The ANOVA table is constructed to illustrate the effects of treatments and other factors, like experimental errors, on the parameter values under consideration. The post hoc analysis (Tukey-HSD) was carried out to compare the means of every pair of treatments in the study districts (i.e., identifying which technology has a significantly larger mean compared to the other technologies). The first of these analyses depended on the agronomic records as explanatory variables and the second on the indicative scores as explanatory variables. The coefficient of determination (R²) and Tukey's test (HSD) have been applied to the significant variables in both analyses. The data of the indicative scores of sites for the three agronomic records were standardized and the sample variance (S²) was calculated by the following formula:

S² = Σ(xᵢ − x̄)² / (n − 1)

where S² is the sample variance, Σ is the sum, xᵢ is a term in the data set (indicative scores of sampling sites), x̄ is the sample mean, and n is the sample size. The results of the ANOVA (R², F, P) and the sample variance (S²) have been taken to express the impact of the agronomic records, and their order of importance, on the different treatments of the trial area (Alaa & Mahgoub, 2019). Economic data (production costs and benefits) were collected to compare the cost-effectiveness of treatments. The costs of the whole package components, such as improved seed, fertilizer and management practices, were collected for each district in Ethiopian Birr (ETB).
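A minimal sketch of the gap and variance calculations described above is given below in Python. The yield figures for Misikir and the local check are those reported later in the paper, while the 4.0 ton ha−1 potential yield and the plot-level scores passed to the variance function are placeholders chosen for illustration only, not values from the study.

```python
def technology_gap(improved_yield, farmers_yield):
    # Eq. (1): improved technology yield minus the farmers' practice yield
    return improved_yield - farmers_yield

def extension_gap(potential_yield, improved_yield):
    # Eq. (2): on-station potential yield minus the on-farm improved yield
    return potential_yield - improved_yield

def technology_index(potential_yield, improved_yield, farmers_yield):
    # Eq. (3): technology gap expressed as a share of the potential yield
    return technology_gap(improved_yield, farmers_yield) / potential_yield * 100.0

def sample_variance(x):
    # S^2 = sum((x_i - x_bar)^2) / (n - 1)
    n = len(x)
    mean = sum(x) / n
    return sum((xi - mean) ** 2 for xi in x) / (n - 1)

# Misikir and local yields (t/ha) are taken from the paper; the 4.0 t/ha
# potential yield is only a placeholder, not a figure reported in the study.
misikir, local, potential = 2.9, 1.5, 4.0
print(round(technology_gap(misikir, local), 2))                   # 1.4 t/ha
print(round(extension_gap(potential, misikir), 2))                # 1.1 t/ha
print(round(technology_index(potential, misikir, local), 1))      # 35.0 %
print(round(sample_variance([2.7, 3.1, 2.9]), 3))                 # illustrative plot scores
```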
Yield obtained from each treatment was adjusted by 10%, and the selling prices of grain and biomass yield at the farm gate were taken. The average labour cost for the improved practice (row planting and weeding) was expressed in person-days, where one person-day was assumed to be 8 hours of work. Finally, the economic advantage (efficiency) of each technology was calculated as the marginal rate of return (MRR) using the following formula:

MRR (%) = (ΔNB / ΔTVC) × 100

where MRR = marginal rate of return, ΔNB = change in net benefits and ΔTVC = change in total variable input costs (CIMMYT, 1998). To make the partial budget more useful, a sensitivity analysis was also administered. It was managed by computing the worst, most likely, and best-case scenarios of the cost and benefit sides of the treatments. The worst, most likely and best-case figures can be computed using a general error factor rate of 10%, or by adjusting the item that is most likely to fluctuate. This is because farmers deal with market uncertainties, not knowing whether the prices will increase or decrease by tomorrow. The sensitivity analysis hence relaxes farmers' forced decision-making built on imperfect market information. Thus, combining the partial budget and the sensitivity analysis was robust enough to handle farmers' efficiency questions on the technology package (Ademe et al., 2018).

Group discussions with FREG members, field visits and field days were used to evaluate the technologies. Farmers' reaction to each technology was elicited using focus group discussions (FGDs), assigning literate farmers in each group to lead the evaluation since most participant farmers were unable to read and write. Host farmers, therefore, brainstormed to identify the main evaluation criteria to be considered in selecting the best sorghum technology under the local context. Five preference parameters (viz., grain yield, earliness, seed setting performance, stalk yield and marketability, in descending order) were identified and weighted on the basis of their significance. Farmers ranked the accredited preference criteria pair-wise and then considered the rank as the weight. The scores given by farmers to each variety were multiplied by the respective weight. The products were aggregated for each variety for the final selection (1, 2, 3; 1 = the best) (Russell, 1997). Moreover, Spearman's rank correlation was used to see the degree of coincidence between the farmers' preference rank and the actual values of the measured attributes (Ferdous, Datta, Anal, Anwar, & Khan, 2016). The correlation coefficient is defined as:

rs = 1 − (6 Σd²) / (n(n² − 1))

where d = difference in the ranks assigned to the same phenomenon and n = number of phenomena ranked. Finally, extension activities like diagnostic field visits and field days were undertaken to create awareness about the technology in general and the variety in particular, which can benefit the farmers in the long run (Feder, 2002).

Yield and yield component performance

The local sorghum variety under the farmers' customary practice gave a mean grain yield of 1.5 ton ha−1. On the other hand, the improved sorghum technologies, Misikir and Girana-1, provided mean grain yields of 2.9 and 1.6 ton ha−1, respectively (Table 2). The result thus revealed that the Misikir and Girana-1 improved sorghum technologies have overall yield advantages of 93.3% and 6.6%, respectively, over the farmers' variety under the existing practice in all sites.
Sorghum productivity problem in dryland area thus could be overwhelmed by adopting efficient package practices on top of using improved varieties. Likewise, the higher technological index in Misikir technology provided evidence that necessitates a wider scope of further improvements in sorghum production (Yadav et al., 2004). The ANOVA result in Table 3, shows that statistically significant difference between treatments in grain yield, stalk yield and days to maturity across locations (p < 5%). Moreover, Tukey-HSD test in Table 4, indicates that among treatments, Misikir was best performing technology in grain yield, stalk yield and days to maturity (p < 10%). Partial budget analysis Expenditures which were similar to treatments were not taken and analysed (citrus paribus), hence given the prevailing farm gate prices, the benefit-cost ratio was computed for grain and stalk yield on a hectare basis. The total variable cost of farmers' practice was lower than that of improved technology. The use of improved production package for Misikir technology thus provided a higher net benefit of ETB 20 802.8 ha −1 , followed by the farmers' practice with the net benefit of ETB 12 292.4 ha −1 ( Table 5). The marginal rate of return (MRR) of Misikir technology was thus 477.6%. This implies that for every ETB 1.00 invested in improved technology (changing from local practice to Misikir technology), farmers can expect to recover the ETB 1.00 and obtain an additional ETB 4.78. The economic return of Girana-1 technology is not promising thus it is rejected from further analysis due to lower net benefit than Misikir and local practice at similar and higher total variable cost, respectively. The process of rejecting dominated treatments from the further analysis is called dominance analysis (Ademe et al., 2018). The sensitivity analysis is a change in the net benefit and the return on marginal capital as revenue and input prices vary by 10% above and/or below their values. Thus, a 10% change in the revenue of Misikir sorghum technology influenced the net benefit by 24.2%. Whereas, a 10% change in the total input price influenced its net benefit by less than 10%. The return of marginal capital always beats 100%, which indicates an investment in Misikir sorghum technology will be gainful. Generally, the sensitivity analysis result showed that if the price of output becomes constant and the price of inputs increase by 1023.8%, the technology has a positive return. Farmers' preference for different sorghum technologies Due to their homogeneous sociocultural entities, farmers across location identified five preference parameters in common to select their best sorghum technology. The comparison result of the weighted ranking matrix thus revealed that a technology which has lower aggregated product was peaked as a primary choice. As a result, farmers preferred Misikir, local and Girana-1 technology treatments as the best, fair and worst, respectively, in all parameters (Table 6). Note that: ** and *** imply significance levels at 5 and 1% respectively Spearman's rank correlation coefficient was calculated to see the degree of coincidence between farmers' preference rank and the actual value of measured agronomic attributes. Therefore, the degree of coincidence between farmers' preference rank and actual values rank for grain yield, biomass yield and earliness attributes were 100%, 100% and 100% respectively (Table 7). 
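The MRR and Spearman computations reported above can be illustrated with the short Python sketch below. The net benefits are those reported in Table 5 of the paper, but the change in total variable cost is back-calculated from the reported MRR of 477.6% rather than taken from the text, so it should be read only as an approximation; the rank vectors reproduce the case where farmers' preference ranks coincide exactly with the measured ranks.

```python
def marginal_rate_of_return(delta_net_benefit, delta_total_variable_cost):
    # MRR (%) = (change in net benefit / change in total variable cost) x 100
    return delta_net_benefit / delta_total_variable_cost * 100.0

def spearman_rank_correlation(rank_a, rank_b):
    # r_s = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), with d_i the rank differences
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))

# Net benefits (ETB/ha) for Misikir and the local practice from Table 5; the
# change in variable cost is implied by the reported 477.6% MRR (approximation).
delta_nb = 20802.8 - 12292.4
delta_tvc = delta_nb / 4.776
print(round(marginal_rate_of_return(delta_nb, delta_tvc), 1))   # ~477.6 %

# Identical preference and measured-value ranks (1 = best) give r_s = 1.0,
# i.e., the 100% coincidence reported for grain yield, biomass and earliness.
print(spearman_rank_correlation([1, 2, 3], [1, 2, 3]))          # 1.0
```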
At the end of the trial, field days were organized involving model farmers, development agents, farmers from the trial areas, experts and administrative officials. A total of 750 (200 females) participants visited the trial in different sites and applaud Misikir technology for its grain and stalk yield as well as earliness traits than the rest of treatments. Conclusion The study was basically focused on the participatory evaluation of sorghum technologies on a wider scale to create demand-driven technology diffusion in the north-eastern drylands of Ethiopia. Two improved sorghum technologies (Misikir and Girana-1) along with the local sorghum variety under farmers' practice were evaluated at 450 farmers' field. The result thus revealed that the mean grain yield of Misikir sorghum technology significantly out-yielded Girana-1 sorghum technology as well as the local variety under farmers' practice. The Tukey-HSD test also indicated that among treatments, Misikir was best performing technology in grain yield, biomass yield and days to maturity in all locations. The MRR result similarly indicated that among treatments, investing in Girana-1 improved sorghum technology was not promising due to its lowest net benefit at the higher and even equal variable input costs. The farmers' preference ranking matrix also shows that Misikir, the local and Girana-1 technologies were preferred as first, second and third choices, respectively, by the overall preference parameters. The agronomic (ANOVA), economic (partial budget) and the weighted ranking matrix result thus indicated that farmers have preferred and perceived the higher yield potential and profitability of Misikir technology. As a result, Misikir improved sorghum technology is recommended for further dissemination in the marginal drylands of Wag-lasta. This finding tells that recommendations based on researchers' plot level on-station trial, deprived of economic profitability evaluation as well as limited or even zero farmers' participation leads to meagre technological adaptation and/or complete rejection. Therefore, suppliers (i.e. researchers) should deliver a basket of choice of improved varieties developed through active and scale wide farmers' involvement for the needy to pick one, two, or all depending on their context (Table 5). Backup studies should take the farmers' preference parameters and the feedback into account for future variety and/or technology development activities. Ranks: 1 = Best; 2 = fair; 3 = worst. The score multiplied by the weight to provide the overall preference of technologies.
2020-01-16T20:04:04.570Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "956c5092c3a609babe06a5dd1f62f74d36c93931", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/23311932.2019.1671114?needAccess=true", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "6469cde5832cad6be3ebe392012e96c20cad8ca7", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Geography" ] }
248603192
pes2o/s2orc
v3-fos-license
Scintigraphic evaluation of the kidney There are more than one technique used to evaluate the kidney, besides the standard ultrasound, computed tomography (CT), and magnetic resonance imaging (MRI), there is also renal scintigraphy. The renal nuclear medicine procedures are grouped as in vitro (urine counting wells, basic probe detectors for clearance studies) and in vivo procedures (static and/or dynamic examinations done with planar gamma cameras, and single-photon emission computed tomography (SPECT) to determine kidney parameters or for cortical imaging). Renal scintigraphy has been a useful tool, since the early 1950s, in the diagnosis and management of many pathological changes in the kidney, especially in measuring renal function (e.g. obstructive/nonobstructive uropathies, renal inflammatory diseases, tumours, renal hypertension, and renal transplant viability). [1] INTRODUCTION The radionuclide investigation of the kidney includes detection of renal afflictions and measurements of quantitative indices that estimate the renal perfusion and function.Ultrasound and computed tomography are commonly used for the evaluation of renal structural anatomy, and the role of nuclear renal imaging is more for functional analysis, less in anatomical imaging (e.g.cortical imaging).[1,2] Renal scintigraphy, also known as a "renal scan" or "renal radionuclide imaging" or "renography" includes various investigations that use different radioisotopes to evaluate renal blood flow, renal split function, and the renal excretion performance of both.It yields specific and often unachievable information by using other imaging procedures.[2] Clinical indications for renal scintigraphy (adapted from Nuclear Medicine: The Requisites, 4th ed): [3][4][5] 1. Blood flow abnormalities 2. Function quantification (reduced performance of one or both kidneys) a. Differential function b.Glomerular filtration rate (GFR), effective renal plasma flow (EPRF) 3. Cirrhosis of the kidney(s) 4. Differentiation between a mass lesion and a column of Bertin 5.In infants with abnormalities of the urinary tract to study the urinary flow 6. Obstruction: ureteropelvic junction, ureteral 7. Pyelonephritis: both acute and chronic tubuleinterstitial nephritis and parenchymal scarring 8. Renal failure: acute and chronic PURPOSE The purpose of this paper is to provide a better understanding about dynamic renal imaging in several kidney pathologies, with an emphasis on obstructive renal pathology and to clarify the meaning of some quantitative parameters used in renal scintigraphy. 
MATERIAL AND METHODS Nuclear renal scans are various, complex and also a subject to institutional preferences, that is why a standardization of the techniques is hard to achieve.All nuclear renal procedures have some basic common conditions that must be taken into account before, during and after patient examination.First of them involves the intravenous bolus injection of a renotrophic radiopharmaceutical (a medicine marked with a radioisotope) that emits a small amount of radioactivity into the patient.Usually, the radiotracer administered to the patient is well tolerated, with no systemic toxic effects.Some mild-moderate allergies or adverse reactions (such as dizziness, headache, metal taste, flushing) are extremely rare reported.The radiation dose is relatively low, less than the standard chest x-ray.The examination is performed with the patient either lying on the examination table, in the supine position, or standing.[3][4][5][6][7] The pre-scan preparatory measures that are common are hydration, about 30 minutes prior to the actual examination, the patient should drink about 1 litre of fluid, and an empty bladder, the patient should void before examination.[3][4][5][6][7] It is of great importance that in the last 3-6 months prior to the renal scintigraphy, no high doses of iodine substances to be taken, like x-ray and CT contrast agents, or certain medicines (e.g.Amiodarone) because the results could otherwise be distorted.[3][4][5][6][7] In general, chronic therapy medications should not be discontinued prior to the exam.For specific problems, specific blood pressure lowering drugs (ACE inhibitors, diuretics) must be mentioned to the nuclear medicine physician as there is a period of drug abstinence in some cases.[3][4][5][6][7] Different scanning protocols with various radiopharmaceuticals are available for scanning the kidneys.Choosing the right technique and optimal radiopharmaceutical depends on the patients' medical history, clinical setting and indication.[3][4][5]8] Plenty and divers radiopharmaceuticals have been produced in the last 60 years to assess renal function; some are used in laboratory assays by measuring blood samples and determining the renal clearance of the radiopharmaceutical and others for dynamic or static studies with a gamma camera or a SPECT system.The gamma cameras and SPECT systems monitor the passage of radiopharmaceuticals through the kidney and urinary tract by registering and processing of the emitted gamma radiation.[9,10] The radionuclide agents usually used for determining renal function and anatomy can be grouped into three main categories [5,8]: those filtered by glomerular filtration, those excreted by tubular secretion via proximal tubule receptor-mediated endocytosis from the glomerular filtrate, and those retained in the renal tubules for long periods, useful for cortical imaging.[8] The most frequently used radiopharmaceuticals in renal scintigraphy are illustrated in Table 1.There are four important types of renal imaging methods obtained by using a planar gamma-camera or a SPECT system in order to evaluate whether the kidneys are working normally or abnormally:  Renal perfusion and renal function imaging are dynamic studies with images taken in series, over a period of 30 minutes immediately after the bolus radiotracer injection and determines the blood flow distributed to the kidneys, recognizes a potential narrowing of the renal arteries, and helps in determining the kidneys functioning.[2,11]  Diuretic renal scintigraphy is used 
to detect kidney obstruction.The procedure is similar to the renal perfusion, and seriated images are taken before and after the introduction of a diuretic (in the 15 th minute of acquisition) to help urine elimination from the kidneys.[2,11]  ACE-inhibitor renal scintigraphy utility is in detecting renovascular hypertension, not renal artery stenosis, by comparing renographic images before and after taking an ACE-inhibitor.[2,11] It is also a dynamic study and the image acquisition process begins when the radiotracer is injected intravenously in bolus.[3-5, 8, 10]  Cortical renal scintigraphy is a static study and detects the amount of normal functioning kidney tissue.[2,11] After the tracer administration, there is a three hour delay before the imaging acquisition can begin.[3-5, 8, 10] Generally the dynamic renal functional studies are acquired in two parts.The first part evaluates the renal blood flow, which is calculated in the first pass of the radiopharmaceutical bolus through the abdominal aorta and renal arteries.The second part assess the kidney uptake and clearance function over the next 25 to 30 minutes of acquisition.[3-5, 8, 10] GFR and EPRF are important kidney function markers evaluated by dynamic renal scintigraphy.Normal GFR varies in accordance with age, sex, weight (nutritional status), diet, race, and kidney size, which is proportional to body surface area.[12, 13, 14] The estimated GFR is calculated using these factors and the serum creatinine value, but it is not helpful when there are unilateral changes or when kidney function is very abnormal. RESULTS The images obtained with the gamma camera should be inspected after acquisition in order to evaluate if the examination was done in proper conditions.After visual inspection the image data is processed and regions of interest (ROIs) are defined over the kidney and surrounding background so that the renogram curves can be generated.[5,7] The time-activity curve (TAC) represents a graphic illustration of the renal function and is composed of 3 parts: initial rise, upslope and downslope.The first part reflects the radioactivity that arrives via the renal artery to the kidney.The upslope (ascending limb) reflects kidney uptake before the radiopharmaceutical begins to be excreted by the kidney (the descending limb).The peak time accurately indicates the point at which the extraction and accumulation trend is reversed to the evacuation process.[5,7] This curve is important in diuretic renal scintigraphy.Diuresis renography helps to discern between obstructive (calculus) and nonobstructed dilated urinary tract, and in the postsurgical evaluation of the renal system function and urodynamic.Acute obstructive uropathy is a commonly encountered condition.When unilateral obstruction occurs the changes in the measured renal function are a little decrease or imperceptible, but the bilateral form can result in significant kidney function losses.[5,16] The radionuclide renogram TAC provides an accurate graphic illustration of the dynamics of urinary excretion.In obstructive uropathy a calculus can give various degrees of hydronephrosis, depending on the site in which it is lodged.In the case of acute obstruction due to a renal calculus, pressures increase fast in the pyelocaliceal system and in the ureters above the point of obstruction.The affected kidney has a characteristic aspect: a dilated urinary tract with a thin cortex, and it shows on the TAC a slow continuous accumulation of tracer in the collecting system, a slow 
increasing ascending curve with no downslope. In contrast, the opposite kidney, which has normal function and no obstruction, will show good uptake and excretion, and the three-part TAC. [7,17] In some cases the calculus produces a partial obstruction of the urinary excretion pathways. In order to demonstrate this, a diuretic like Furosemide is administered at various times during the course of the renal scan. A decreasing curve after the administration of the diuretic indicates an incomplete obstructive pattern (as in Curve 3), whereas a curve that keeps rising indicates a complete obstruction (as in Curve 2). [7]

QUANTITATIVE INDICES

Renal perfusion is evaluated by visual and quantitative analysis (1- to 3-second images) of the initial bolus as it transits the abdominal aorta and enters the renal arteries (used in renal number anomalies, renal transplant). [10,11] Relative function represents the relative uptake of the radiopharmaceutical for the evaluation of uni-/bilaterally impaired kidneys. [10] The split function is particularly useful because estimated GFR and serum creatinine may not identify unilateral lesions. [3] Renal size. Several chronic renal diseases will result in bilaterally small kidneys, whereas the kidneys may be bilaterally enlarged in early diabetic renal disease, acute interstitial nephritis, HIV nephropathy, and amyloidosis. The resolution of structures of the renal parenchyma with nuclear medicine may not be as clear as with other imaging techniques, such as CT or MRI; that is why nuclear renal images are not usually used to differentiate between cysts and tumours. [10,11,18] The time to peak, or T max, refers to the time from radiopharmaceutical intravenous administration to the peak height of the renogram curve. 99mTc-MAG3, 99mTc-DTPA and OIH renograms normally peak by 5 min and drop to half-peak value by the 15th minute after injection; however, in some cases, physiologic retention of the radiotracer in the renal calyces or pelvis can alter the aspect of the TAC in normal kidneys and lead to prolonged values for the time to peak, the 20-min/maximum count ratio, and T½. The T½ represents the time necessary for the radioactivity in the kidney to drop to 50% of the maximum value (reached at the time to peak, or T max); this index is important in diuretic renography for patients with suspected urinary tract obstruction. [8,10,19] The 20-min/maximum count ratio is an index of the transit time and a measurement of residual cortical activity, the ratio of the kidney counts at 20 min to the maximum (peak) radioactivity; it is useful in monitoring patients with suspected urinary tract obstruction and for detecting renovascular hypertension. [8,10,19]

CONCLUSION

Renal radionuclide studies are versatile procedures and vary depending on institutional preference, the clinical setting of the patient and the medical indication. Renal scintigraphy is a complex subject, and understanding some basic principles, the radiopharmaceuticals available to image the kidney and monitor its function, the quantitative indices that can be generated, and the protocols can be useful for physicians specialized in nephrology and urology in evaluating patients with diseases of the urinary tract and renal physiology.

Figure 1. Time-activity curves (adapted from EANM: Dynamic renal imaging in obstructive renal pathology. A Technologist's Guide). Curve 1 - normal kidney function; Curve 2 - complete obstruction; Curve 3 - incomplete obstructive pattern after the administration of Furosemide at minute 15.

Table 3.
Unilateral renal function changes are difficult to identify with other imaging techniques, but easy to determine with camera-based renal scan split functions. The percent differential function (split function) calculates the contribution of each kidney to the renal function and can be applied to GFR or ERPF data. [3-4, 15]
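For readers who want to see how the quantitative indices discussed above could be computed from a renogram, the following Python sketch estimates T max, T½, and the 20-min/maximum count ratio from a kidney time-activity curve. The curve is synthetic and the procedure is deliberately simplified (no background ROI subtraction, depth correction, or diuretic timing logic), so it is illustrative only and not the processing performed by clinical software.

```python
import numpy as np

def renogram_indices(times_min, counts):
    """Simple quantitative indices from a kidney time-activity curve.

    times_min : sampling times in minutes; counts : kidney ROI counts.
    Returns time to peak (T_max, min), T1/2 (minutes after the peak at which
    counts fall to half of the peak, None if never reached) and the
    20-min/maximum count ratio.
    """
    times = np.asarray(times_min, dtype=float)
    counts = np.asarray(counts, dtype=float)

    peak_idx = int(np.argmax(counts))
    t_max = times[peak_idx]
    peak = counts[peak_idx]

    # first time after the peak at which counts drop to <= 50% of the peak
    t_half = None
    for t, c in zip(times[peak_idx:], counts[peak_idx:]):
        if c <= 0.5 * peak:
            t_half = t - t_max
            break

    # residual cortical activity: counts at 20 min relative to the peak
    c20 = float(np.interp(20.0, times, counts))
    return t_max, t_half, c20 / peak

# Synthetic curve roughly resembling a normally draining kidney (peak ~4 min);
# the analytic shape below is a toy, not measured data.
t = np.arange(0, 31, 1.0)
tac = 100 * (t / 4.0) * np.exp(1 - t / 4.0)
print(renogram_indices(t, tac))   # approximately (4.0, 7.0, 0.09)
```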
2017-10-19T16:34:05.733Z
2015-05-19T00:00:00.000
{ "year": 2015, "sha1": "8615f08066d881b42a33290bfbf528bb08d8d694", "oa_license": "CCBY", "oa_url": "https://doi.org/10.55453/rjmm.2015.118.3.6", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "8615f08066d881b42a33290bfbf528bb08d8d694", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
119086370
pes2o/s2orc
v3-fos-license
Key Features and Adverse Weather of the Named Subtropical Cyclones over the Southwestern South Atlantic Ocean This work documents the main features of six subtropical cyclones occurred between the years 2010 and 2016 over the southwestern South Atlantic Ocean, near the Brazilian coast, which received names (with the exception of one) from the Brazilian Navy Hydrographic Center. The fine-resolution ERA5 reanalysis and rainfall estimates from the Tropical Rainfall Measuring Mission (TRMM) were used to describe the synoptic environment and the adverse weather conditions during the six events. The support of a small-amplitude trough at mid-levels or a cut-off low, weak vertical wind shear, and moisture flux convergence are the main features contributing to the subtropical cyclogenesis at the surface. On the other hand, sea surface temperature (SST) presents a secondary contribution since the cyclones develop over the ocean with a wide range of SST values (from 22.5 ◦C to 28.6 ◦C in the initial phase of cyclones). The six subtropical cyclones are less deep in the atmosphere column than the tropical ones and, unlike the extratropical cyclones, they have little or no westward tilt with an increase in height. The studied subtropical cyclones produced adverse weather conditions such as (a) strong winds (reaching 17 m·s−1 at 10 m high) for a long period occurring east/southeastward of the cyclone center, and (b) high amounts of rainfall along the southeastern coast of Brazil, where the accumulated rainfall varied between 170 to 350 mm, being in most cases higher than the monthly climatology. Over the continent, the Brazilian states of Rio de Janeiro and Espírito Santo were the most affected by the intense rainfall associated with the cyclones. Introduction Cyclones are one of the most studied atmospheric systems as they produce abrupt weather changes with great societal impacts and are a major factor in controlling the global climate [1].Synoptic scale cyclones are normally classified as extratropical, subtropical, and tropical according to their physical characteristics [2,3].In terms of the thermal structure, tropical cyclones have a warm core in all the troposphere, while extratropical cyclones have a cold core.On the other hand, subtropical (or hybrid) cyclones present a warm core at low levels, which is similar to the tropical cyclones, and a cold core at upper levels, as normally observed in the extratropical cyclones [4][5][6]. Although subtropical cyclones have been mentioned in the literature since the 1960s (e.g., Reference [7]), greater attention to these systems only occurred after Reference [4] had developed the Cyclone Phase Space (CPS) methodology to classify cyclone types.The CPS application has contributed to the identification of subtropical cyclones over different oceanic basins, as synthesized by Reference [8], and with this information some studies have investigated their precursors.According to the References [6,[8][9][10][11][12][13], the genesis of the subtropical cyclones is generally associated with a weak cyclonic anomaly at the surface, which is dynamically supported by the presence of a trough or cut-off low at the mid-upper levels of the troposphere. 
The attention of researchers and weather forecasters to the genesis of subtropical cyclones over the southwestern South Atlantic Ocean (SAO) was attracted with the occurrence of the first documented hurricane in March 2004 [14][15][16], called hurricane Catarina, once this system resulted from a tropical transition (an extratropical disturbance evolved to subtropical and after to a tropical cyclone).Until Catarina, little or no attention was given to the tropical development over the SAO due to unfavorable environmental conditions for tropical systems, i.e., the necessity of a sea surface temperature (SST) warmer than 26.5 • C and a weak vertical wind shear (lower than 8 m•s −1 between the 200 and 850 hPa vector winds) [7,15].Although, climatologically, the SAO does not present these conditions, certain combinations of the atmospheric environment can propitiate them as occurred during Catarina.References [14,15] described that Catarina began as an extratropical cyclone moving east-southeastward off the Brazilian coast.When it encountered a dipole-blocking structure at mid-levels, which propitiated weak vertical wind shear, Catarina transitioned from an extratropical to a tropical storm.The blocking pattern, beyond favoring the conditions for the tropical transition of Catarina, also changed the basic flow at mid-upper levels (providing easterly winds) helping the westward movement of Catarina, in the direction of the southern Brazilian coast [14]. After Catarina, the first subtropical cyclone over the SAO that received a name was Anita, occurring in March 2010 [3,11,17,18].References [11][12][13][14][15][16][17] studied the lifecycle of Anita and documented the presence of a dipole-blocking pattern at mid-upper levels, which provided weak vertical wind shear and, consequently, adequate conditions for the cyclone intensification.Moreover, Reference [11] suggested that Anita did not transition to a tropical cyclone near the Brazilian coast because its semi-stationary behavior contributed to both rainfall and the mix of the upper level layer of ocean that helped weaken the sea-air turbulent heat fluxes.Catarina and Anita encouraged the climatological study of the subtropical cyclones over the SAO in the following years (2012)(2013)(2014).Reference [19] found a frequency of 1.2 systems per year, while Reference [6] obtained a frequency of 7.2 systems per year.This difference is explained by the less restrictive criteria applied by Reference [6] to identify the subtropical cyclones, i.e., Reference [6] did not impose the need of closed low at upper levels or the requirement of maximum wind at 925 hPa reaching 17 m•s −1 to characterize a subtropical cyclone.According to Reference [6], the subtropical cyclones are more frequent in the austral summer. 
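The two environmental thresholds quoted earlier in this section, an SST warmer than 26.5 °C and vertical wind shear below 8 m·s−1 between the 200 and 850 hPa vector winds, can be checked with a few lines of code. The sketch below computes the shear magnitude from the wind components and tests both criteria; the numerical values are illustrative and are not taken from any reanalysis grid point.

```python
from math import hypot

def vertical_wind_shear(u200, v200, u850, v850):
    """Magnitude (m/s) of the vector wind difference between 200 and 850 hPa."""
    return hypot(u200 - u850, v200 - v850)

def favors_tropical_transition(sst_celsius, shear_ms,
                               sst_threshold=26.5, shear_threshold=8.0):
    """Crude check of the two classical thresholds quoted in the text."""
    return sst_celsius > sst_threshold and shear_ms < shear_threshold

# Illustrative wind components (m/s) at a single point.
shear = vertical_wind_shear(u200=5.0, v200=-2.0, u850=1.0, v850=1.0)
print(round(shear, 1))                                                # 5.0 m/s
print(favors_tropical_transition(sst_celsius=27.1, shear_ms=shear))   # True
```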
Both the Catarina and Anita names were attributed by an agreement among the meteorological centers of Brazil. In 2011, the Brazilian Navy Hydrographic Center established a list with official names, based on the Tupi Guarani indigenous language, for subtropical and tropical cyclones occurring over the SAO [20]. From 2010 to 2016, five subtropical cyclones were named by the Brazilian Navy Hydrographic Center: Arani (November 2011), Bapo (February 2015), Cari (March 2015), Deni (November 2016), and Eçaí (December 2016). All systems (including Anita) developed close to the southern/southeastern Brazilian coast, greatly influencing the weather conditions with intense winds and precipitation (as shown in Section 3). As, until now, only Anita and Arani have been documented [11,18], the purpose of this study was to describe the main synoptic features and associated adverse weather conditions during the lifecycle of these six subtropical cyclones. The analyses performed here use the new fine-horizontal-resolution ERA5 reanalysis and the Tropical Rainfall Measuring Mission (TRMM) rainfall estimates to characterize the synoptic environment and weather conditions during the lifecycle of the cyclones. The study is organized as follows: Section 2 describes the data and methodology. Section 3 presents the synoptic environment and the adverse weather conditions associated with the cyclones. Finally, the conclusions are in Section 4.

Data

As described in Table 1, we used data from two reanalyses (ERA5 and the Climate Forecast System Reanalysis (CFSR)), and rainfall and brightness temperature estimates from satellites. ERA5, the fifth generation of the European Centre for Medium-Range Weather Forecasts (ECMWF) atmospheric reanalysis [21], was downloaded on a 0.3° horizontal grid every 6 h for atmospheric variables on pressure levels, and every 3 h for surface variables. Climate Forecast System Reanalysis [22] variables were obtained every 6 h with a resolution of 0.5°. Satellite precipitation estimates from TRMM, version 3B42 [23], were employed to compute the precipitation associated with each cyclone. TRMM-3B42 has 0.25° of horizontal grid spacing and a time frequency of 3 h. The cloudiness associated with the cyclones was shown through the brightness temperature obtained from the Gridded Satellite (GridSat-B1) geostationary data provided by the National Oceanic and Atmospheric Administration [24]. The Climate Data Record (CDR)-quality infrared window (IRWIN) methodology merges satellites (GOES, Meteosat, and GMS) by selecting the nadir-most observations for each grid point. These data are available every three hours on 0.07° of horizontal grid spacing for the period from 1980 to present.

Cyclone Tracking

The position of the subtropical cyclones every six hours was identified through the minima of relative vorticity obtained with the horizontal wind components at 925 hPa from the ERA5 reanalysis. The relative vorticity allows identifying the circulation centers associated with the cyclones before their configuration (closed isobars) in the mean sea-level pressure [25].

Cyclone Classification

The six cyclones were characterized as subtropical using the Cyclone Phase Space (CPS) [4] parameters. CPS is an objective methodology, described in detail in References [4,6,8], which considers the vertical thermal structure to classify cyclone types. Three parameters are used in the CPS: the thermal symmetry (B), and the low-level (−VTL) and upper-level (−VTU) thermal winds.
B measures the 900-600 hPa thickness difference between two semicircles (with a 500-km radius) centered on the surface low. For subtropical cyclones, the B threshold is -25 m < B < 25 m. The thermal wind parameters are defined as the change in thickness between two pressure layers, i.e., 900-600 hPa (-VTL) and 600-300 hPa (-VTU). Thus, subtropical cyclones occur when the low-level warm core results in a positive -VTL, while the cold core at upper levels results in a negative -VTU. For the six studied cyclones, the CPS parameters were computed based on the geopotential height of ERA5 and on the positions of the cyclones (latitude, longitude, and date) obtained from the tracking procedure (Section 2.2.1).

Synoptic Analysis Key synoptic features of the six subtropical cyclones during their genesis and maturity are discussed using different atmospheric and oceanic fields. As in Reference [13], the vertical wind shear (200 hPa minus 850 hPa) and the moisture content of the atmosphere were investigated, but using ERA5. Low values of vertical wind shear indicate a potential for the organization of cloudiness in subtropical cyclones [11,13], while the moisture content is also important for cloud formation and, indirectly, for decreasing the surface pressure. Since low values of vertical wind shear can be found in regions of troughs or cut-off lows [11], the geopotential height at 500 hPa was used to identify these systems.

More details related to the thermodynamic and dynamic properties of the cyclones were obtained through vertical cross-sections of the relative vorticity and of the zonal departures of air temperature and geopotential height. As in tropical cyclones, the transfer of energy from the ocean to the atmosphere should be important for subtropical cyclone development, and this was investigated using the total heat fluxes (sensible plus latent) and the difference between SST and the air temperature at 2 m, every 6 h.

Adverse Weather Conditions Caused by Subtropical Cyclones Rainfall associated with the cyclones was obtained by adding the accumulated precipitation every 3 h within a radius of 10° from the cyclone center. In this way, a map was produced showing the total precipitation following the cyclone trajectory. This methodology is similar to that of References [26,27]. The same idea was applied to calculate the time series of atmospheric variables (precipitation, latent (LH) and sensible (SH) heat fluxes, and maximum wind intensity at 925 hPa and 10 m) following the cyclone trajectory, but considering a radius of ~5° from the cyclone center (i.e., a box of 10° × 10° centered on the cyclones). In addition, the maximum wind at 925 hPa was also computed in a fixed box, shown in Figure 1 and named RG1, since it is the region where Reference [6] found a high frequency of subtropical cyclones.
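To make the CPS-based classification described above concrete, the minimal Python sketch below applies the stated thresholds to precomputed parameter values. It is not code from this study: the function name, the simplified tropical and extratropical branches, and the example values are illustrative assumptions, and the computation of B, -VTL, and -VTU from gridded geopotential heights is not shown.

```python
# Minimal sketch (not from the original study): apply CPS thresholds to precomputed parameters.
def classify_phase(B, VTL, VTU):
    """Return a coarse cyclone phase label from Cyclone Phase Space parameters.

    B   : 900-600 hPa thickness asymmetry (m) between the two semicircles
    VTL : low-level thermal wind parameter (-VTL); > 0 indicates a low-level warm core
    VTU : upper-level thermal wind parameter (-VTU); < 0 indicates an upper-level cold core
    """
    symmetric = -25.0 < B < 25.0                      # thermally symmetric (non-frontal) core
    if symmetric and VTL > 0 and VTU < 0:
        return "subtropical"                          # shallow warm core under an upper-level cold core
    if symmetric and VTL > 0 and VTU > 0:
        return "tropical"                             # deep warm core (simplified criterion)
    if VTL < 0:
        return "extratropical (cold core)"            # as used here for the post-maturity transitions
    return "hybrid/undefined"

# Illustrative values only, not taken from the paper:
print(classify_phase(B=10.0, VTL=35.0, VTU=-120.0))   # -> subtropical
```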
Trajectory and CPS Figure 1 shows the trajectories of the six subtropical cyclones. All systems had their genesis northward of 30° S and near the southeastern/southern coast of Brazil. Moreover, these cyclones, except Anita, formed inside RG1 (box in Figure 1), which is the region with the highest frequency of subtropical cyclones over the SAO [6]. The systems preferentially displaced to the south and southwest until the maturity phase. Only Arani displaced slightly northward during its initial phase. Some features of the six subtropical cyclones, such as date of genesis, initial position (latitude and longitude), lifetime (days), and traveled distance (considering two metrics: the sum of the distance traveled every six hours, and the distance computed between the initial and final positions of the cyclone), are presented in Table 2. Cari had the longest lifetime (10 days), followed by Anita and Arani (9 days), while Deni and Eçaí had the shortest lifetime (3 days). Anita traveled the longest distance. All cyclones, except Eçaí, presented a semi-stationary characteristic, which is highlighted by the large difference between the distance traveled every 6 h and the distance from the initial to the final position (Table 2). While previous studies described the subtropical characteristics of Anita [11,17] and Arani [18] using the CPS, for the other four subtropical cyclones this feature was only declared by the Brazilian Navy Hydrographic Center. As there are no studies for these four systems, it is important to characterize the subtropical features of the six cyclones using the same methodology and reanalysis data. In this context, Figures 2 and 3 present the CPS for each cyclone. As previously mentioned in Section 2, a cyclone is classified as subtropical if, in the CPS, -25 m < B < 25 m, -VTL > 0, and -VTU < 0. These three features were observed in Figures 2 and 3 from the genesis to the maturity of the six systems, but not in the final phase of five of the cyclones, since they transitioned to extratropical cyclones (characterized by -VTL < 0). Only Arani was a pure subtropical cyclone during its entire lifecycle (Figure 2c,d).
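To make the two traveled-distance metrics of Table 2 concrete, the short Python sketch below (not part of the original analysis) computes both the along-track distance summed over 6-hourly positions and the straight-line distance between the initial and final positions; the haversine helper and the example track coordinates are illustrative assumptions.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance (km) between two points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlam = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlam / 2) ** 2
    return 2 * radius_km * np.arcsin(np.sqrt(a))

def track_distances(lats, lons):
    """Return (summed 6-hourly along-track distance, initial-to-final distance), both in km."""
    along = sum(haversine_km(lats[i], lons[i], lats[i + 1], lons[i + 1])
                for i in range(len(lats) - 1))
    direct = haversine_km(lats[0], lons[0], lats[-1], lons[-1])
    return along, direct

# Hypothetical 6-hourly track positions (degrees), for illustration only:
lats = [-24.0, -24.5, -25.0, -25.2, -25.1, -24.9]
lons = [-44.0, -43.8, -43.9, -44.1, -44.0, -43.7]
along, direct = track_distances(lats, lons)
print(f"along-track: {along:.0f} km, initial-to-final: {direct:.0f} km")
# A large ratio along/direct is one way to flag a semi-stationary (meandering) cyclone.
```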
Synoptic Analysis In order to assess the main features of the subtropical cyclones, we present the synoptic fields at the genesis and maturity phases of each system.Genesis is defined as the first synoptic time having a closed isobar in the mean sea-level pressure field, which occurs after that obtained using the relative vorticity criterion to identify the cyclones [25,28] shown in Table 2.The choice of the date based on pressure was due to the fact that the atmospheric fields are more organized than when there is only cyclonic circulation registered in the vorticity field.The mature phase is defined as the stage when the system presents the highest cyclonic relative vorticity vertically extended in the atmosphere column, a similar criterion used by Reference [18].These genesis and maturity criteria indicate the same dates for the Anita and Arani lifecycle showed by References [11] and [18], respectively.Table 3 summarizes some features of the six cyclones in the genesis and maturity phases, calculated as the average in a 10 • × 10 • box centered in the cyclone.The genesis of the subtropical cyclones occurs under a small-amplitude trough at 500 hPa, as in Anita, Arani, Deni, and Eçaí, or in the presence of a cut-off low (more details in the next paragraph), as in Bapo and Cari (Figure 4).At the surface, the closed isobars are located under the eastern-southeastern side of the trough axes at 500 hPa, and the cyclones present weak intensity.The central pressure in Arani, Bapo, Deni, and Eçaí is 1008 hPa and, in Anita and Cari, it is 1012 hPa (Figure 4).Eastward-southeastward of the center of the cyclones there is a high pressure, which is connected to the subtropical anticyclone of the South Atlantic Ocean.This pattern helps to intensify the northeasterly wind on the eastern side of the surface cyclones.From genesis to maturity, the cyclones remain near the Brazilian coast (Figures 4 and 5), and both troughs at mid-levels and the central pressure of the surface cyclones become deeper (Figure 5).Consequently, a cut-off low at 500 hPa develops inside the trough in the Anita and Eçaí events (Figure 5a,f), while those associated with Bapo and Cari intensify (Figure 5c,d).Cut-off lows are cold closed lows at upper-levels located on the equatorial side of the upper-level jets [29,30].This last feature distinguishes cut-off lows from occluded extratropical cyclones that extend to the upper levels since the latter is located on the polar side of the upper-level jets.Here, the six subtropical cyclones developed northward of the strong westerlies at upper levels (although we did not show the winds, this feature can be inferred from the geopotential height in Figures 4 and 5), in other words, in a region with weak horizontal temperature gradients.This pattern also characterizes one of the differences between extratropical and subtropical cyclones.Cut-off lows are components of upper-level blocking patterns [14].According to Reference [31], there are four configurations for blockings and, in three of them (dipole blocking or Rex blocking, omega blocking, and cut-off low blocking), the cut-off low is an intrinsic part and is located on the equatorial side of a high-pressure center.At 500 hPa, it is clear that Anita and Bapo in their maturity were involved in a pattern similar to dipole blocking (Figure 5a,c), while Cari and Eçaí were involved in a pattern that resembles cut-off low blocking (Figure 5d,f).On the other hand, Arani and Deni were coupled with a small-amplitude trough at 500 hPa (Figure 
5b,e). Independently of the blocking-pattern type, there is a higher-pressure region at the middle-upper levels around the cut-off low mainly in its southeastern sector, and this influences the circulation.As documented by Reference [14], in the transition region between the cut-off low and the high-pressure center in the dipole blocking, the circulation of these systems reduces the westerly wind intensity, or even changes the wind direction from west to east at mid-levels.An example of this atmospheric pattern is shown in Anita, where easterly winds are present at 500 hPa southward of the cut-off low (see the zoom in Figure 5a).Consequently, this new circulation feature influences the trajectory of the surface cyclones, imposing their displacement to the west/southwest or straight to the south.This occurred in Catarina [14], Anita [11], and is documented here (Figure 1) until the maturity phase of the other subtropical cyclones (only Arani displaced directly eastwards).Moreover, the circulation induced by middle-upper levels contributes to coupling the surface cyclone with the cut-off low promoting a barotropic environment (Figure 5a,c,d,f). The atmospheric configuration at 500 hPa from the genesis (with a trough or a cut-off low) to the maturity (with a blocking pattern, except for Arani and Deni events) of the subtropical cyclones also contributed to the low values of vertical wind shear (Figures 6 and 7).As shown in Figure 6, the cyclogenesis occurred in regions with lower vertical wind shear (200-850 hPa) than the climatological value of ~26 m•s −1 computed by Reference [15].Anita, Arani, and Cari especially developed under vertical wind shear lower than 8 m•s −1 (Figure 6), which is considered an ideal threshold for the genesis of tropical cyclones [7].During the maturity of the subtropical cyclones, the vertical wind shear changes to negative or intensifies the negative values observed in the genesis (Figures 6 and 7).These negative values indicate more intense winds at low levels (850 hPa) than at upper levels (200 hPa), having great destructive potential (as also normally occurs in tropical storms), as shown in Section 3.3.Low values of vertical wind shear provide a potential for the vertical organization of the cloudiness in the cyclones favoring pressure deepening [11,32].According to Reference [32], in the literature, the effect of vertical wind shear on tropical cyclone intensity change is explained through "ventilation".In this mechanism, heat and moisture at middle-upper levels are advected away from low-level circulation, preventing cyclone development.However, Reference [32] provided another explanation for the contribution of the vertical wind shear for cyclone development; in the presence of vertical wind shear (more intense winds at upper levels and less intense at lower levels), the potential vorticity associated with the vortex circulation is vertically tilted.To maintain the mass balance, it is necessary to increase mid-level temperature perturbation near the vortex center.Thus, the hypothesis is that the mid-level warming reduces the convective activity and inhibits the storm development.According to Reference [32], the effect of the vertical wind shear is due to tilting and stabilization.Summarizing, regardless of the physical process presented, the vertical wind shear needs to be weak to help the tropical cyclogenesis, and this can be reflected in the subtropical cyclones.In this context, we show that the troughs and/or cut-off lows (Figures 4 and 5) are important 
to reduce the vertical wind shear (Figures 6 and 7) and that the latter favors the organization of the convective clouds around the cyclone center (Figure 8).Therefore, it is implicit that there is latent heat release by condensation in the atmosphere, which in turn helps intensify upward motions and decrease the pressure at the surface.Convection is an important process for transporting heat and moisture from near-surface layers to upper levels of the troposphere.According to References [33,34], moist convection is the primary driving force for tropical cyclone development.As the subtropical cyclones have some features similar to tropical ones, it is important to know their sources of heat and moisture.Before presenting our results, we summarize some important findings of the literature. According to the potential intensity (EPI) theory of References [35][36][37][38], SST and tropopause temperature matter most in determining the upper limit of tropical cyclone intensity (wind speed and central pressure).In this theory, tropical cyclones are like a Carnot engine, in which entropy is acquired under the eye wall from latent and sensible heating fluxes and exported vertically to the colder tropopause.Hence, the surface-layer thermodynamic conditions have a crucial importance for tropical cyclone development.Latent and sensible heating fluxes are parameterized using bulk aerodynamic algorithms [39], which involve wind speed at 10 m high and a vertical temperature gradient (the difference between SST and air temperature at 2 m for sensible heat) and vertical moisture gradient (difference between specific humidity of saturation at the surface and specific humidity at 2 m for latent heat).The formulation indicates that the higher the wind velocity is, the higher the heat and moisture fluxes will be.These heat fluxes are also sensible to SST; for example, Reference [40] showed that changes of 1 • C in SST may provide heat fluxes with a difference of nearly 40%.On the other hand, Reference [41] discussed the role of latent and sensible heat fluxes in the instability of the boundary layer, in which enhanced fluxes lead to instability and contribute to vigorous convection.Other authors [42,43] emphasized that the tropical cyclones extract latent energy from the ocean through latent heating and release it into the atmosphere through the convective clouds.Thus, a part of the latent heat released acts to increase the kinetic energy of the system.In summary, we would like to emphasize that latent heat flux is a fundamental energy source for tropical cyclones and can be also important for subtropical cyclones. For the subtropical cyclones studied here, the sources of heat and moisture are locally provided by latent (LH) and sensible (SH) heat fluxes or transported from remote areas.Considering the local source of SH, we show by the difference in SST and the 2-m air temperature (Table 3) that the cyclones develop in an environment where the sea is warmer than the air above.During the mature phase (Table 3), in some cyclones, this difference remains positive and, in others, it becomes negative, indicating colder SST than the atmosphere above (that would result from cloud cover and mixing in the upper layer of the sea, e.g., Reference [44]). 
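The bulk aerodynamic formulation mentioned above can be made concrete with a short Python sketch. The constant air density, transfer coefficients, and example values below are illustrative assumptions and are not the coefficients used by the reanalysis products; the sketch is only meant to show why stronger near-surface winds and larger sea-air temperature or humidity contrasts both increase the fluxes.

```python
RHO_AIR = 1.2      # air density (kg m-3), illustrative constant value
CP_AIR = 1004.0    # specific heat of air at constant pressure (J kg-1 K-1)
LV = 2.5e6         # latent heat of vaporization (J kg-1)
CH = CE = 1.2e-3   # bulk transfer coefficients (dimensionless), assumed constant here

def sensible_heat_flux(u10, sst, t2m):
    """SH = rho * cp * C_H * U10 * (SST - T2m); positive means the ocean heats the air (W m-2)."""
    return RHO_AIR * CP_AIR * CH * u10 * (sst - t2m)

def latent_heat_flux(u10, q_sat_sst, q2m):
    """LH = rho * Lv * C_E * U10 * (q_sat(SST) - q2m); positive means the ocean moistens the air (W m-2)."""
    return RHO_AIR * LV * CE * u10 * (q_sat_sst - q2m)

# Illustrative values only: 12 m/s wind, SST 1.5 K warmer than the 2-m air,
# and a 2 g/kg sea-air specific humidity difference.
print(sensible_heat_flux(u10=12.0, sst=298.0, t2m=296.5))      # ~26 W m-2
print(latent_heat_flux(u10=12.0, q_sat_sst=0.019, q2m=0.017))  # ~86 W m-2
```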
In terms of local moisture source, Table 3 indicates that Anita developed in a region with a mean LH of ~180 W•m −2 , while Arani developed in a region with ~138 W•m −2 .Lower LH averages (~100 W•m −2 ) occurred in the genesis of the other four cyclones.Except Anita, all the other cyclones had their genesis in an environment with LH close to the climatology of the RG1, which is ~100 W•m −2 in the summer and ~130 W•m −2 in the autumn [45].It is an indicative that the remote moisture sources may have an important contribution to the subtropical cyclone development as previously discussed by Reference [46].The role of external moisture sources has been emphasized in tropical [47] and extratropical [48,49] cyclogeneses.The moisture travels from its source to the cyclone region like a corridor in the lower atmosphere [48], which is called conveyor belt or atmospheric river.For extratropical cyclones, the atmospheric rivers contribute to deepening these systems by providing more water vapor for latent heat release [49]. As in Reference [49], we identified the remote moisture sources by vertically integrated moisture flux.Figure 6 shows that the convergence of the vertically integrated moisture flux is a common feature in the genesis region of the six cyclones, resulting from northwesterly winds from the continent and northeasterly winds from the eastern sector of the South Atlantic subtropical high.From genesis to maturity, there is an increase in the moisture flux convergence around the cyclones (Figure 7), which is favored by the intense winds as shown in Figure 8.These findings agree with Reference [46], which showed a strong contribution of the same remote moisture sources favoring the development of the subtropical cyclones in RG1.Both moisture sources (local LH and moisture flux convergence) are the fuel for cumulus convection, with consequent latent heat release by condensation.In the satellite images, the moister areas and, consequently, the cloudiness are characterized by the presence of large areas with low values of brightness temperature (white color) over the eastern/southern sides of the cyclones (Figure 8). Although the wind intensity at 925 hPa was analyzed for the whole lifecycle of the six subtropical cyclones, it is only presented the maturity phase herein (Figure 8).During this phase, more intense winds in Anita, Cari, and Eçaí were observed in their southern sectors, while in Arani, Bapo, and Deni, such winds occurred in their eastern/southeastern sectors.The occurrence of stronger winds mainly in the southern and/or eastern sectors of the cyclones center agrees with previous studies that showed near-surface maxima winds far from the center of the subtropical cyclones [6,7,50].Over the SAO, the location of intense winds in relation to the cyclone center is associated with greater horizontal pressure gradients due to the presence of the subtropical high eastward or a migratory anticyclone southward of the cyclones (Figures 4 and 5). 
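As a sketch of the moisture-transport diagnostic used above, the following Python code (illustrative, not the authors' implementation) computes the vertically integrated moisture flux from specific humidity and winds on pressure levels, and then its convergence on a uniform grid; the array shapes, the ordering of the pressure levels, and the uniform-spacing assumption are simplifications.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m s-2)

def integrated_moisture_flux(q, u, v, p_levels):
    """Vertically integrated moisture flux components (kg m-1 s-1).

    q, u, v  : arrays of shape (n_levels, ny, nx): specific humidity (kg/kg) and winds (m/s)
    p_levels : pressure levels in Pa, assumed to increase along the first axis
    """
    Qu = np.trapz(q * u, x=p_levels, axis=0) / G   # zonal component
    Qv = np.trapz(q * v, x=p_levels, axis=0) / G   # meridional component
    return Qu, Qv

def moisture_flux_convergence(Qu, Qv, dx, dy):
    """Convergence of the integrated moisture flux, -div(Q); positive where moisture accumulates.

    dx, dy : grid spacing in metres (assumed uniform here for simplicity).
    """
    dQu_dx = np.gradient(Qu, dx, axis=1)
    dQv_dy = np.gradient(Qv, dy, axis=0)
    return -(dQu_dx + dQv_dy)
```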
From the previous analyses, it becomes evident that the weaker vertical wind shear than the climatology, the dynamic support of the small amplitude trough or a cut-off low at mid-levels, and the moisture flux convergence are the "ingredients" contributing to the subtropical cyclogenesis at the surface.As these systems develop in a wide range of SST, from 22.5 • C (Eçaí) to 28.6 • C (Anita), this indicates a secondary contribution of warm SST to the cyclogenesis (Table 3).The weak dependence of the subtropical cyclogenesis on the warm SST was also documented in the North Atlantic basin [5], the Mediterranean [12], and the South Atlantic Ocean [6].These studies showed subtropical cyclone development over colder water than 26 • C, i.e., which is not favorable to tropical cyclones (26 • C is the threshold normally favoring hurricanes considered by Reference [51] and 26.5 • C by Reference [7]).According to Table 3, the SST during the maturity phase of the cyclones is colder than in the initial phase, which would result in cloud cover and the mixing of the upper layer of the sea due to strong winds and the upwelling process.As the six subtropical cyclones present a semi-stationary feature (Table 2), when they stay for a long time over the same area, the upwelling of colder water to the sea surface can occur [44].Some studies [52,53] indicate that this cooling contributes to a decrease in the transference of SH from the ocean to the atmosphere, leading to a weakening of the tropical cyclones. The vertical structure of the cyclones was analyzed through west-east vertical cross-sections based on the central latitude of these systems.In terms of thermal vertical structure, during the genesis of the subtropical cyclones the air temperature from near-surface until mid-levels was, in general, warmer than the zonal mean (data not shown).However, in some events (Arani, Cari, and Eçai) there was a near-surface thin layer of colder air than the zonal mean.This colder layer is also a feature documented in some tropical cyclones [54][55][56] and, in general, results from the following processes: initially, the convergence of low-level winds favors the transference of energy (latent and sensible heat fluxes) from the ocean to the adjacent atmosphere; on the other hand, since the energy is being removed from the sea surface, it becomes colder than the atmosphere above, and heat begins being transferred from the atmosphere to the sea, resulting in the cooling of the adjacent air layer on the sea.Another process contributing to the thin colder layer is the evaporation of both oceanic spray (due to the break of sea waves) and precipitation.It is important to mention that, as the colder temperatures occur in a very thin layer (below 900 hPa), they do not affect the B parameter shown in CPS, i.e., the thermal symmetry continues indicating a small difference in the geopotential height between the two sides from the cyclone center (Figures 2 and 3). 
Regarding the maturity phase (Figure 9), the near-surface thin layer of colder air disappears, and, in general, warmer air than the zonal mean predominates from the surface to mid/upper levels near the cyclone center, and colder air is found aloft in most events (Figure 9).Therefore, this pattern corresponds to the vertical structure of subtropical cyclones, which is different from that observed in extratropical (warm in the eastern and cold in the western side of the cyclones) and tropical cyclones (the center of the system is warmer than the surroundings).When the subtropical cyclones reach maturity, a tube of intense cyclonic vorticity occupies most of the depth of the troposphere, from the surface to ~300 hPa (Figure 9).However, the more intense cyclonic vorticity is observed at low levels, with the exception of Arani (Figure 9h).Through the lifetime of the subtropical cyclones, the vorticity tube almost does not tilt, which is more characteristic of the tropical cyclones (see, e.g., Reference [18]).On the other hand, the vorticity tube is shallower (extending to ~400 hPa) than normally observed in tropical cyclones (extending more than 100 hPa). Adverse Weather Conditions Caused by Subtropical Cyclones Subtropical cyclones can cause severe damage in the coastal regions due to heavy precipitation, strong winds, and storm surges.Moreover, over the ocean, they can generate high sea waves with negative consequences for navigation and offshore oil platforms.Figures 10 and 11 show that the six subtropical cyclones produced great amounts of precipitation over coastal regions in southern/southeastern Brazil.Few works have documented the impact of subtropical cyclones on precipitation along the Brazilian coast.One of them described an extreme precipitation event over the Paraíba do Sul river basin (located in southeastern Brazil) in January 2000, which was associated with a cyclonic vortex with subtropical characteristics [57]. In order to highlight the extreme nature of the subtropical cyclones, the total precipitation associated with these systems was compared with the monthly precipitation climatology of the Brazilian states, from 1981 to 2010, available at the Meteorology National Institute of Brazil (http://www.inmet.gov.br/portal/index.php?r=clima/normaisclimatologicas). 
Among the six cyclones, the highest precipitation total over the continent was caused by Arani, since it moved close to the shore for a long period (Figure 10b). This system produced about 350 mm of precipitation in the state of Espírito Santo and in the south of the state of Bahia (~18-21° S), which represents more than double the 150 mm monthly climatology for November. The precipitation associated with Anita was large around the genesis area (eastward of the Bahia and Espírito Santo states at ~20° S) and during the southwestward displacement of the cyclone (eastward of Santa Catarina state at ~30° S; Figure 10a). Cari also produced a great amount of precipitation over the continent. Anita and Cari were also responsible for precipitation totals exceeding the monthly climatology. The precipitation climatology in March from the south of Bahia to Rio de Janeiro state is about 150 mm, and Anita caused ~230 mm in southern Bahia, while the same amount of rainfall (~230 mm) was observed in Rio de Janeiro during the Cari event. On the other hand, the precipitation caused by Eçaí (~170 mm) did not exceed the December climatology of 250 mm over Rio de Janeiro, but was still intense since it occurred in only four days. Bapo and Deni produced more precipitation over the ocean than over the continent (Figure 10). All the cyclones studied here impacted Rio de Janeiro state, an important oil producer and tourist center in Brazil. Therefore, extreme events in this region can also cause negative impacts on the local economy.

Figure 11 depicts the mean precipitation and the total near-surface heat fluxes (sensible plus latent heat) every 6 h following the cyclone trajectory. The time evolution for Anita, Bapo, and Eçaí shows the increase/decrease in rainfall occurring in phase with the increase/decrease in the surface heat fluxes. For Arani, Cari, and Deni, this connection did not occur. These time evolutions of both variables indicate that, in some events, the local moisture source is more important for organizing the rainfall, while, in others, the non-local moisture sources are more important, as reported in References [6,46]. For all events, until the maturity phase, the near-surface total heat fluxes are, in general, higher than the climatological values presented by References [45,58] (Figure 11). Considering Anita, Figure 11a shows that, until March 8, the near-surface turbulent heat fluxes were intense (~200 W·m⁻²) and that, afterwards, there was an abrupt decrease, which may have prevented the transition of Anita to a tropical system, as discussed by Reference [11].

In the previous section, we presented the winds at 925 hPa. However, as the damage associated with cyclones occurs near the surface, Figure 12 shows the maximum wind intensity following the cyclone center (Lagrangian analysis) at both 925 hPa and 10 m. The maximum wind intensity at 925 hPa in RG1 (Eulerian analysis) is also shown in Figure 12.
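The Lagrangian and Eulerian series in Figures 11 and 12 can be thought of as box statistics extracted either along the track or over the fixed RG1 region. The Python sketch below is a minimal illustration of the storm-following (Lagrangian) extraction using a half-width of 5° (i.e., a 10° × 10° box, as in the methods); the function name, its signature, and the use of simple latitude/longitude offsets instead of great-circle distances are assumptions made for illustration.

```python
import numpy as np

def storm_following_series(field, lats, lons, track_lat, track_lon, half_width=5.0, stat="max"):
    """Time series of `field` in a (2 * half_width)-degree box following the cyclone center.

    field                : array (n_times, ny, nx), e.g., 925-hPa wind speed or 3-h precipitation
    lats, lons           : 1-D grid coordinate arrays (degrees)
    track_lat, track_lon : cyclone-center position at each time step (degrees, inside the grid)
    stat                 : "max" (e.g., wind), or "sum"/"mean" (e.g., precipitation)
    """
    out = []
    for t in range(field.shape[0]):
        in_lat = np.abs(lats - track_lat[t]) <= half_width
        in_lon = np.abs(lons - track_lon[t]) <= half_width
        box = field[t][np.ix_(in_lat, in_lon)]          # 10 x 10 degree box around the center
        out.append({"max": box.max(), "sum": box.sum(), "mean": box.mean()}[stat])
    return np.asarray(out)

# An Eulerian series over a fixed region such as RG1 would simply reuse the same box
# masks at every time step instead of re-centering them on the track.
```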
Considering 925 hPa (Figure 12), it is clear that ERA5 and CFSR present similar wind intensity values, giving reliability to the wind estimates over the ocean. In both reanalyses, the lifecycle of the cyclones was characterized by strong maximum winds, exceeding 17 m·s⁻¹ (for more than two days), which is the minimum threshold used to consider a cyclone as subtropical in some studies [5,19]. The similarities of the wind intensity in the areas following the cyclone center and in RG1 in Figure 12 indicate the period when the cyclones were acting in RG1. In the last stages of their lifecycle, the wind intensity in RG1 decreased since the cyclones had moved away from that region (Figure 1), while winds remained intense along the cyclone trajectory. Bapo, Deni, and Eçaí sustained maximum winds above 30 m·s⁻¹ for most of their lifecycles (Figure 12). The maximum winds at 10 m following the cyclone trajectories present a time evolution similar to the wind at 925 hPa and, as expected, they are weaker, varying between 10 and 15 m·s⁻¹. However, in some periods of the lifecycle of Anita and Eçaí, the winds reached 20 m·s⁻¹. Although the subtropical cyclones present intense winds near the surface, they do not reach hurricane intensity (wind intensity higher than 33 m·s⁻¹) as defined by the Saffir-Simpson scale (https://www.nhc.noaa.gov/aboutsshws.php).

The strong winds and horizontal shears in the fine-resolution reanalysis result in strong relative vorticity minima at the center of the cyclones (figures not shown). The relative vorticity at 925 hPa in the initial phase of the cyclones (data not shown) ranged from −10 × 10⁻⁵ s⁻¹ (Anita and Cari) to −50 × 10⁻⁵ s⁻¹ (Eçaí). Considering the lifecycle of all the cyclones, the strongest relative vorticity was observed in Eçaí at 18 Z on December 4 (−70 × 10⁻⁵ s⁻¹) in the ERA5 reanalysis (−50 × 10⁻⁵ s⁻¹ in CFSR). This case presented the largest difference between the reanalyses. At this same date, the difference was ~3 hPa in the mean sea-level pressure.

In the initial phase, the subtropical cyclones present mean sea-level pressure ranging from 1005 to 1010 hPa, except Eçaí, which presented 995 hPa (figures not shown). Although in Table 3 we used a criterion based on the relative vorticity to identify the maturity phase of the cyclones, in general, the maturity date coincides with the minimum value of mean sea-level pressure registered during the lifecycle of the cyclones. In the maturity stage, the cyclones are about 5 hPa deeper than at genesis. However, this difference was doubled in Bapo and Cari.

Figure 1. Trajectory of the six named subtropical cyclones based on the minima of relative vorticity at 925 hPa. The circle indicates the first position of each system. The purple box defines the southeastern coast of Brazil, called RG1. The acronyms RS, SC, PR, SP, RJ, and ES indicate the Brazilian states Rio Grande do Sul, Santa Catarina, Paraná, São Paulo, Rio de Janeiro, and Espírito Santo, respectively.

Table 2. Characteristics of the six named subtropical cyclones over the South Atlantic Ocean (SAO). Dates of genesis and cyclolysis, and the coordinates of the initial position, were based on the cyclonic relative vorticity at 925 hPa.

Figure 2. Cyclone Phase Space (CPS) of Anita, Arani, and Bapo. (a,c,e): the left column presents the parameters B versus -VTL; (b,d,f): the right column presents the parameters -VTL versus -VTU. A and Z indicate, respectively, the first and the last time of the cyclone lifecycle. The circle size indicates the mean radius of gale-force wind at 925 hPa (17 m·s⁻¹) and the colors are the sea-level pressure values (see the scale at the right of the figure). Positions at 0000 Z are labeled with the day.

Figure 3. CPS of Cari, Deni, and Eçaí. (a,c,e): the left column presents the parameters B versus -VTL; (b,d,f): the right column presents the parameters -VTL versus -VTU. A and Z indicate, respectively, the first and the last time of the cyclone lifecycle. The circle size indicates the mean radius of gale-force wind at 925 hPa (17 m·s⁻¹) and the colors are the sea-level pressure values (see the scale at the right of the figure). Positions at 0000 Z are labeled with the day.

Figure 4. Geopotential height (m) at 500 hPa (shaded and green lines) and mean sea-level pressure (hPa; blue lines) during the genesis phase of the six subtropical cyclones: (a) Anita, (b) Arani, (c) Bapo, (d) Cari, (e) Deni, and (f) Eçaí. L indicates the surface low-pressure center and the red dashed lines show the trough axis at 500 hPa.

Figure 5. Geopotential height (m) at 500 hPa (shaded and green lines) and sea-level pressure (hPa; blue lines) in the maturity phase of the six subtropical cyclones: (a) Anita, (b) Arani, (c) Bapo, (d) Cari, (e) Deni, and (f) Eçaí. L indicates the surface low-pressure center. In panel (a), a zoomed-in view of the cut-off low area is also shown to highlight the easterly winds at 500 hPa.

Figure 11. Mean precipitation (mm, blue line) and total (latent plus sensible) heat fluxes (W·m⁻², green line) every 6 h registered in a 10° × 10° region following the center of the cyclones (Lagrangian analysis): (a) Anita, (b) Arani, (c) Bapo, (d) Cari, (e) Deni, and (f) Eçaí. For the total heat fluxes, only the values over the ocean were considered.

Figure 12. Maximum wind intensity (m·s⁻¹) at 925 hPa every 6 h in a 10° × 10° region following the cyclone center (Lagrangian analysis) for ERA5 (blue line) and the Climate Forecast System Reanalysis (CFSR; green line), and maximum wind intensity in RG1 (Eulerian analysis) for ERA5 (orange line) and CFSR (yellow line): (a) Anita, (b) Arani, (c) Bapo, (d) Cari, (e) Deni, and (f) Eçaí. The maximum 10-m winds following the cyclone center are also presented for ERA5 (grey line). The horizontal red line indicates the intensity of 17 m·s⁻¹, which is a threshold used by Reference [19] to consider a cyclone as subtropical at 925 hPa.

Table 1. Description of the dataset used in the study.

Table 3. Date of genesis (time of the first closed isobar in the sea-level pressure field) and maturity (time of the most intense cyclonic relative vorticity in the vertical column), and box means (10° × 10° of latitude by longitude from the cyclone center) of the variables (sea surface temperature (SST), near-surface latent (LH) and latent plus sensible heat fluxes (LH+SH), and difference between SST and air temperature at 2 m) for each subtropical cyclone. Values during the maturity phase are given in parentheses.
Calibrating spatiotemporal models of microbial communities to microscopy data: A review Spatiotemporal models that account for heterogeneity within microbial communities rely on single-cell data for calibration and validation. Such data, commonly collected via microscopy and flow cytometry, have been made more accessible by recent advances in microfluidics platforms and data processing pipelines. However, validating models against such data poses significant challenges. Validation practices vary widely between modelling studies; systematic and rigorous methods have not been widely adopted. Similar challenges are faced by the (macrobial) ecology community, in which systematic calibration approaches are often employed to improve quantitative predictions from computational models. Here, we review single-cell observation techniques that are being applied to study microbial communities and the calibration strategies that are being employed for accompanying spatiotemporal models. To facilitate future calibration efforts, we have compiled a list of summary statistics relevant for quantifying spatiotemporal patterns in microbial communities. Finally, we highlight some recently developed techniques that hold promise for improved model calibration, including algorithmic guidance of summary statistic selection and machine learning approaches for efficient model simulation. Introduction Microbial communities are ubiquitous [1]. They are responsible for life-sustaining planetary processes [2,3], and they maintain health in almost all metazoans, including humans [4]. Humanity has a long history of harnessing the power of natural microbial communities in, e.g., food fermentation [5], waste water treatment [6], and health [7]. Advances in sequencing and omics technologies have elucidated the roles of individual microbes within their communities, and how they contribute to the overall community function. This, in turn, has opened opportunities for manipulating and designing microbial communities to perform useful tasks across the bioeconomy [8]. Within microbial communities, species interact through, e.g., physical contact, competition for nutrients, metabolite exchange, toxin production, antibiotic inactivation, and quorum sensing. These interactions are shaped by a multitude of factors such as evolution [9] and abiotic features of the environment [10]. The network of interactions determines species abundances in a microbial community, thereby influencing the community's operation [11][12][13]. Growth of most communities involves attachment, and so cell-cell interactions influence the community's spatial arrangement [14]; the spatial structure may in turn influence the evolution of cooperative or competitive interactions [15,16]. To complicate things further, the community composition can also be impacted by the environment's colonization history [17,18]. The complex dependencies among cellular interactions, spatial dynamics, evolution, and community function make precision manipulation of microbiomes difficult. Mathematical models can be used to address this challenge by untangling the factors governing community behaviour. To engineer microbial communities to suit our needs, we must first acquire a thorough understanding of how these communities operate [19]. 
Mathematical models can be used to guide rational manipulation and design of microbial communities, to predict how communities will behave, and to determine how well they will perform desired functions in, e.g., biotechnology, health and medicine, food and agriculture, and energy production [8]. Fig 1 depicts 3 classes of predictive models commonly used to describe microbial communities: ordinary differential equations (ODEs), partial differential equations (PDEs), and agent/individual-based models (henceforth referred to as ABMs) [20,21].

Fig 1. Modelling frameworks commonly used for capturing the behaviour of microbial communities, with associated spatial scales. ABM, agent-based model; ODE, ordinary differential equation; PDE, partial differential equation. https://doi.org/10.1371/journal.pcbi.1010533.g001

The primary distinction between these model types is their spatial resolution. ODE models are built on the assumption that the dynamics are not dependent on spatial distribution. Consequently, simulation and analysis of these models incurs a relatively low computational cost. PDE models explicitly account for spatial distribution, but describe local averages, rather than individuals. In contrast, ABMs can capture the spatiotemporal behaviour of individuals within populations, and so can account explicitly for heterogeneity among cells. This high degree of resolution comes at substantial computational cost, which scales (potentially nonlinearly) with the number of individuals and interactions in the population. ABMs are often combined with PDE and ODE submodels to describe phenomena such as intracellular biomolecular network dynamics and extracellular diffusion. Recent applications of these modelling frameworks to microbial communities are reviewed in [22]. The choice of modelling framework is influenced by the system under consideration, the modelling objective, the data available, and the computational resources at hand. For each modelling framework, there are numerous open-source simulation packages available. The choice of software depends primarily on whether the built-in features are suitable for the application at hand. Reusability can be a challenge due to the diversity of programming languages and documentation formats employed. Some groups are developing packages with graphical user interfaces to facilitate reuse of their simulation software [23-25]. Predictive models are necessary for applications that require precise design and manipulation of complex microbial communities [8]. These applications include human and animal health, food production, and environmental remediation. ABM and PDE models are suitable for modelling microbial growth in heterogenous environments such as the mammalian gut and soil. To make accurate predictions, these models must be validated against experimental data, such as direct observations of populations of cells growing in spatially distributed environments. Observations at single-cell resolution can simultaneously provide data at the single-cell, population, and community scale. Such single-cell data are especially valuable for validating ABMs that aim to capture emergent population-level features by modelling single-cell behaviour [26,27]. Validation against independent patterns occurring at multiple scales generally improves model accuracy and predictive power [28]. To define the scope of the following discussion, we first establish a working definition of "microbial community."
Although a broad definition could be "microbes living together," there is no consensus on how much variability is required to distinguish a microbial community from a microbial "monoculture": Even these exhibit some degree of genetic and phenotypic variability. For this review, we define the fundamental property of a community as the presence of at least 2 distinct characterized organism types, and thus we exclude monocultures that have developed some uncharacterized genetic heterogeneity. For details on the application of single-cell technologies to investigations of heterogeneity in such monocultures, the reader is referred to [29,30]. A number of distinct categories of communities have been investigated at the single-cell level [31]: • Communities of "isogenic mutants" are cocultures composed of at least 2 strains derived from the same parent that exhibit some genetic differences (due to either engineered or natural genetic alterations). Communities of isogenic mutants commonly serve as testing grounds for design and characterization of ecological interactions. • Designer laboratory communities are composed of distinct species purposefully combined in a laboratory environment. The number of species is typically small compared to natural communities. • Natural communities are sampled from natural or engineered environments (e.g., soil, animal guts, wastewater treatment plants, fermentation cultures). Below, we provide a brief overview of current techniques for collecting single-cell level observations of microbial communities. We then survey how such measurements have been used to calibrate computational models of community dynamics, and we highlight systemic approaches that could be used to improve the rigour of model calibration procedures. Finally, we discuss techniques from (macrobial) ecology, topology, and data science that hold promise for facilitating efficient calibration. Single-cell level observations of microbial communities Microbial communities are most commonly observed by flow cytometry and microscopy ( Fig 2). Flow cytometry can be used to categorize cells by their morphology and physiological characteristics (e.g., fluorescence). Modern flow cytometers can process approximately 10 4 cells per second. The resulting large samples can provide robust statistics characterizing heterogenous populations. The availability of several out-of-the-box software packages for flow cytometry [32,33] makes processing cytometry data straightforward in comparison to microscopy images, which usually call for customized image processing routines [34]. Flow cytometry is often used to determine the relative fraction of subpopulations within a community. Cells are distinguished by, e.g., fluorescent labeling, viability staining, or morphological differences. This approach has been used to measure short-term population dynamics in synthetic consortia [35], rates of plasmid propagation [36,37] and monitoring of eco-evolutionary feedback between cooperators and cheaters [38], among others. Time-lapse microscopy has been used to collect spatiotemporally resolved measurements of microbes in confined environments. Cells are typically observed growing in monolayer using a widefield microscope, but multilayer growth can be resolved by confocal microscopy [39][40][41]. Fluorescence microscopy experiments can generate data on the spatiotemporal positions of cells and their fluorescence-associated phenotypes. 
Time-lapse images can be processed to reveal growth rates, lineages, gene expression levels, and to infer intercellular interactions, which together give rise to spatiotemporal features at the population level. Time-lapse experiments generally involve observation of cells growing under agar pads or within microfluidic devices. Agar pads are simple to use but suffer the limitations of batch cultures such as transient effects of nutrient depletion, desiccation, waste product accumulation, and crowding [42,43]. In contrast, microfluidic devices offer more controlled and sustained environments, in which multiple cell generations can be observed through transient and steady-state growth conditions [44]. Quantitative analysis of time-lapse microscopy images demands the use of cell segmentation and cell tracking algorithms [45,46]. Analysis of time-lapse images reveals individual cell properties such as elongation rate, motion, and lineage, as well as population-level features such as population density and species abundance. Populations can sometimes be discriminated by morphology, but it is more common to use fluorescent markers. Moreover, fluorescence intensity can be used as a readout of an internal genetic state [47]. Time-lapse microscopy has been used to obtain both single-cell and population measurements in both isogenic mutant [48][49][50][51][52][53][54] and laboratory designer communities [55,56]. By correlating individual cell elongation rates with counts of neighbouring cells, researchers have gained insight into cell-cell interactions such as metabolite exchange [57,58] and antibiotic efflux [59]. Such studies can take advantage of microfluidic device designs that constrain the proximity of neighbouring cells. For example, Moffitt and colleagues [60] and Gupta and colleagues [61] designed microfluidic devices permitting nutrient exchange between 2 physically separated populations. Contact-dependent interactions can be inferred by comparing changes in cell state to the presence of directly neighbouring cells. This approach has been employed by several groups studying type VI secretion (toxin delivery) systems (T6SS). LeRoux and colleagues [62] and Smith and colleagues [63] measured the efficiency of target cell lysis as a function of contacts made. Steinbach and colleagues [64] investigated how the accumulation of dead cell debris reduces T6SS killing efficiency. Time-lapse microscopy studies of conjugation (contact-dependent horizontal gene transfer) have demonstrated the influence of contact mechanics on conjugation frequencies [65] and have also revealed enhanced gene transfer by transformation (uptake of DNA from the environment) in predator-prey communities [66,67]. Some investigations of community behaviour have relied on representative snapshots of spatial structure provided by single time point (i.e., end-point) microscopy. This approach is useful when time-lapse approaches may not be feasible, such as in highly structured environments like biofilms and solid matrices [68], or in microdroplets [69]. (The recent time-lapse work of Hartmann and colleagues [70] and Nijjer and colleagues [71] characterizing biofilm growth is a notable exception and may represent a new paradigm for such measurements.) Data on 3D arrangements within communities provide quantitative insights on how spatial distributions impact phenotype [72] and vice versa [73,74]. Co-occurrence networks in nonspatial environments can also be determined in microdroplets [10,69]. 
End-point cell arrangements constrained in 2 dimensions have been used to measure interaction ranges of quorum sensing mechanisms involved in horizontal gene transfer [75]. Calibration of spatial mathematical models of microbial communities against single-cell measurements When mathematical models are employed to explore a range of possible behaviours, parameterizations need not accurately capture specific observations (e.g., [76][77][78]). In contrast, when models are used for predictive purposes (as in most engineering applications), models must be fit to observations. In such cases, descriptions of the formulation, calibration, and validation of the model are needed to specify the predictive strengths and limitations of the model. A first step in communicating a model's formulation is the statement of the model's purpose, which clarifies the scope of the model structure and parameterization. This is highlighted in the ODD (overview, design concepts, and details) protocol [79,80], a formal framework for documenting ABMs. The protocol has been used in documenting several spatiotemporal ABMs of microbial communities [25, [81][82][83]. In macrobial ecology, this protocol is often used as part of a larger modelling methodology called pattern-oriented modelling [84,85], discussed further in Section 4.1. Model calibration is the process of assigning values to model parameters to best reproduce available data. Ideally, calibration is complemented by uncertainty analysis, which gauges the degree of confidence in model predictions and parameter estimates through, for example, identifiability analysis and sensitivity analysis [86,87]. The simplest approach to model calibration is to characterize each component of a system independently. This approach is suitable for simple processes, such as growth or diffusion, for which direct measurements can be made. In contrast, it is often the case that calibration of biological models must be posed as an inverse problem: Properties of system components cannot be measured directly and must instead be inferred from observations of overall system behavior. In such cases, model calibration involves selection of a "goodness of fit" function, typically defined as a sum of squared errors (SSE)-the SSE measure aligns with a maximum likelihood measure under idealized assumptions about system and noise structure [88]. When calibrating linear models, a rich theory provides robust uncertainty analysis, such as 95% confidence intervals for parameter estimates and model predictions. When addressing nonlinear dynamic models, the theory provides less support; models that minimize the SSE can be found only through nonlinear optimization procedures (typically iterative global optimization routines; [89]), and uncertainty analysis is approximate (though there are uncertainty tools designed for nonlinear systems, e.g., profile likelihoods; [90]). Bayesian calibration methods, such as approximate Bayesian computing [91], offer an alternative to global optimization searches. Bayesian methods refine uncertainty distributions for model parameter values by comparing with experimental observations. For nonspatial models (e.g., ODEs), systematic model calibration approaches have become standard in the field of computational biology, as reviewed in [86,87]. Such ODE models are used to describe microbial community dynamics through compartmentalization. 
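To illustrate the SSE-based calibration described above, the following Python sketch fits Monod-type growth parameters of a simple ODE model to a hypothetical suspension growth curve using a standard optimizer. The data values, the assumed initial substrate concentration, and the parameter names are illustrative, not taken from any of the cited studies; the point is only to show the inverse-problem structure (simulate, compare, adjust).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Hypothetical observations: biomass measured at discrete times (arbitrary units).
t_obs = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
x_obs = np.array([0.05, 0.09, 0.17, 0.30, 0.49, 0.68, 0.80])
s0 = 1.0  # assumed initial substrate concentration

def monod(t, y, mu_max, Ks, Y):
    """Monod growth of biomass x on a single substrate s."""
    x, s = y
    growth = mu_max * s / (Ks + s) * x
    return [growth, -growth / Y]

def sse(params):
    """Sum of squared errors between simulated and observed biomass."""
    mu_max, Ks, Y = params
    if mu_max <= 0 or Ks <= 0 or Y <= 0:   # keep the search in a physically meaningful region
        return 1e6
    sol = solve_ivp(monod, (t_obs[0], t_obs[-1]), [x_obs[0], s0],
                    t_eval=t_obs, args=(mu_max, Ks, Y))
    return float(np.sum((sol.y[0] - x_obs) ** 2))

fit = minimize(sse, x0=[1.0, 0.5, 1.0], method="Nelder-Mead")
print("estimated (mu_max, Ks, Y):", fit.x)
```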
For example, Gupta and colleagues [61] used a compartmental ODE model to investigate the behaviour of physically separated microbial populations; they calibrated their model parameters using a standard SSE-minimizing approach.

A survey of calibration approaches for spatiotemporal models In this section, we survey strategies recently employed for calibration of spatiotemporal models of bacterial communities against observations at (or near) the single-cell level. The corresponding data (as described in Section 2) are complex, and calibration of these models is challenging. Calibration strategies used in recent publications can be roughly classified into 3 categories (Fig 3): manual fitting, systematic calibration to nonspatial data, and systematic calibration against spatial summary statistics.

Fig 3. Model calibration techniques for spatiotemporal models of microbial communities. Manual fitting involves direct adjustment of parameter values to achieve qualitative agreement between model predictions and observations. Nonspatial calibration is often systematic (based on a goodness of fit function) but is based on experiments that do not incorporate the spatial features of the system. Spatial calibration, against spatially distributed data, can be systematic (SSE-based) but must rely on summary statistics collected from the data. https://doi.org/10.1371/journal.pcbi.1010533.g003

The simplest calibration approach is manual fitting, by which model simulations are qualitatively compared to observed data, and model parameter values are adjusted to arrive at a satisfactory alignment. This approach, often used to extend previously established model structure, is exemplified by comments such as "We chose model parameters to qualitatively fit the experimental results. . ." [51], and "We adjusted the parameters of our simulations until the behaviour matched the images of real cells. . ." [92]. Manual fitting is a pragmatic means to arrive at qualitative agreements between model and data, which is often perfectly appropriate for modelling objectives. However, it is poorly suited for situations in which precise calibration is required, it is unlikely to provide a robust search of high-dimensional model spaces, and it is unsatisfactory in terms of reproducibility.

A second common calibration tactic is to apply systematic calibration to nonspatial data compared with an aggregate model output. For example, in their study on the inhibitory role of antibiotic efflux activity from neighbouring cells, Wen and colleagues [59] observed interactions between 2 bacterial populations, one of which expressed antibiotic efflux pumps. They used an SSE-based approach to estimate growth and inhibition parameters from data obtained by suspension growth experiments. They then used those parameters in an ABM that supplemented findings from additional single-cell experiments. Another example is provided by the work of Pande and colleagues [93], who investigated the role of spatial segregation in crossfeeding populations. They used in-suspension growth curves obtained over a range of nutrient concentrations (via Monod growth kinetics) to fit growth parameters, which were then applied to a spatial ABM of the consortium. Such strategies rely on an assumption that the behaviours measured in suspension are representative of behaviour in the spatially structured environments under investigation.

Finally, in some instances, model developers have made full use of spatiotemporal data by systematic calibration against spatial summary statistics that capture the spatiotemporal aspects of primary interest, fitted with an SSE-based protocol. For example, Hartmann and colleagues [70] validated a 3D cell tracking algorithm and calibrated an ABM by minimizing the error between measured and simulated summary statistics in a growing biofilm. Another example is provided by Leaman and colleagues [94], who collected summary statistics from spatial distributions of cells and then used a global optimization scheme to fit parameters of an ABM. Such systematic calibration approaches can be resource-intensive, often requiring detailed image processing pipelines (e.g., [70]), or numerous auxiliary experiments to fit physical or chemical parameters. For example, Leaman and colleagues [94] needed to measure both the diffusivity of solute particles in the presence of bacteria and the activation time for gene expression controlled by a quorum sensing molecule before calibrating the rest of the model's parameter values. In the next section, we present a collection of summary statistics that are suitable for validation of spatiotemporal models of bacterial community dynamics. Use of these summary statistics typically requires development of a data processing pipeline for image processing and summary statistic calculation.

A catalogue of spatiotemporal summary statistics for microbial community dynamics As discussed in Section 2, modelling projects are frequently built on spatiotemporal data that are rich and complex, resulting in a tendency to aim for qualitative agreement or calibration against nonspatial observations. Hartmann and colleagues [70] and Leaman and colleagues [94] provide examples that make more complete use of the richness of spatiotemporal data by selecting summary statistics to capture key spatial features in a quantitative manner and applying SSE-based calibration to ensure accurate representation. As we discuss below in Section 4.1, this strategy has been adopted for many modelling projects in the macrobial ecology community, where spatiotemporal datasets of this type have been collected for decades. One of the challenges of this approach is identification of appropriate summary statistics. These should (i) capture relevant features of the system's behaviour; (ii) be represented by model outputs; and (iii) be computationally tractable (in terms of image processing). In this section, we survey summary statistics that have been used to capture spatiotemporal features of microbial dynamics (Table 1), along with some examples from macrobial ecology that hold promise for use in this context.

Monolayer growth is the simplest setup for observing single-cell characteristics of microbial population dynamics. In this setting, single-cell features such as elongation rate and division length threshold can be measured directly. The simplest population to study is an isolated microcolony descended from a single cell. Several groups have proposed summary statistics to capture development of such microcolonies. Volfson and colleagues [96] were one of the first to compare simulations of an ABM to time-lapse images of developing microcolonies within microfluidic devices. They calculated microcolony density, a cell velocity gradient, and an order parameter quantifying the global anisotropy in cell orientation. These summary statistics have been used to calibrate parameters governing physical interactions between rod-shaped bacteria in more recent ABM projects (e.g., [97]).
Doumic and colleagues [95] used similar metrics in their model of microcolony growth that incorporates unequal mass distribution upon cell division. They also considered the microcolony aspect ratio and the relative orientation of the 2 daughter cells just prior to the second division (called "d_2" in Doumic and colleagues, and dyad structure in Table 1). Doumic and colleagues highlight additional summary statistics for microcolony development: orientation of cells at the colony boundary (referred to as "active anchoring" in [99]) and the position of individual cell poles within the colony relative to their age [102].
Table 1. Summary statistics for spatiotemporal characterization of microbial community dynamics. Columns: statistic | purpose | definition or measurement | references.
Microcolony shape:
Microcolony aspect ratio | Quantify eccentricity of the developing colony | Standard image processing feature, defined in 2D (or 3D, projected to 2D) | [70,95]
Dyad structure | Characterize structure of the 2-cell "colony" immediately before the second division | Normalized dot product of the 2 cells' orientations | [95]
Biofilm base circularity | Characterize shape of the biofilm base | Unity minus aspect ratio of the projection onto the horizontal plane | [70]
Internal microcolony structure:
Microcolony density | Quantify packedness of cells within the developing colony | Standard image processing feature, defined locally or globally | [70,95-97]
Order parameter | Quantify anisotropy within the developing colony | Mean of projections of orientation of neighbouring cells; defined per cell, recorded as a colony average or as a distribution | [70,95-99]
Correlation length of scalar order parameter | Characterize "patchiness": spatial scale over which orientation of neighbouring cells is aligned | Correlation of orientation as a function of distance; can be compared as a mean or a distribution | [98]
Micropatch area | Quantify "patchiness"; similar to correlation length of scalar order parameter | Cells are clustered into patches based on contact and relative orientation | [100]
Topological defect density | Characterize "patchiness": density of topological defects (i.e., discontinuities in the order-parameter field) | Algorithm provided in [101] | [98]
Defect velocity | Characterize the evolution of a microcolony's internal structure | The position of topological defects is tracked over time | [98,99]
Age distribution of cell poles within the developing microcolony | Characterize degree of mixing during colony development | Simple measure is distance from centre of colony to oldest cell poles; more complete measures additionally account for younger poles | [102]
Other metrics:
Orientation of cells at the microcolony boundary | Characterize tendency of boundary cells to align with the colony boundary | Colony boundary must be determined by a smoothing operation; cells on the boundary and the corresponding boundary orientation must be identified | [99]
Gradient of cell velocity normal to microcolony boundary | Characterize growth inhibition due to pressure gradients | Measured by particle-image velocimetry | [96,97]
Cell-cell distance | Characterize cell spacing | Centroid-to-centroid distance to nearest neighbour | [70]
Vertical and radial alignment | Characterize 3D structure; identify transition from monolayer to multilayer growth | Angle formed by the z-axis and the cell's major axis | [70]
Neighbour index | Characterize interspecific adjacencies relative to the initial adjacencies | Count physical contacts between pairs of cells of different phenotypes | [104]
(Continued)
Monoculture microcolony development has also been characterized using summary statistics from liquid crystal theory. These measures quantify the degree of physical alignment between neighbouring cells. The order parameter used by Volfson and colleagues [96] (mentioned above) is one such example. van Holthe tot Echten and colleagues [98] took the mean of this measure over the entire colony to compare the evolution of real microcolonies to ABM simulations. Dell'Arciprete and colleagues [99] use this same statistic, which they call the global order parameter, to summarize overall microcolony orientational disorder. Orientational order can also be quantified with discrete measurements. You and colleagues [100] investigate the inner structure of microcolonies by segmenting them into patches of similarly oriented cells and comparing the distribution of patch areas between ABM simulations and experiments. van Holthe tot Echten and colleagues [98] also use the correlation length of the scalar order parameter (a measure of patch size) and the topological defect density as additional measures of microcolony structure. (Topological defects are discontinuities in the orientation field that arise at boundaries between patches of similarly oriented cells. These are locations that lack a representative trend in orientation.) Studies of monolayer growth provide valuable insights into microbial activity, but they represent an idealized version of microcolony formation. In contrast, Hartmann and colleagues [70] present a comprehensive study of biofilm formation in 3 dimensions. They present novel imaging and image-processing tools that allow single-cell level tracking of a V. cholerae biofilm from a single progenitor to about 10,000 cells. To quantify growth of this population, they make use of a collection of spatial summary statistics: vertical and radial alignment, local order, cell-to-cell distance, density, and aspect ratio of the overall population and biofilm base. In multispecies communities, population counts of each species are a simple, key summary statistic that encapsulates population dynamics (Fig 4A). Microbial interactions that have been summarized using population fractions include secretion of nutrients and toxins [103], cell lysis by T6SS [63], and competition for space in microfluidic traps [49]. While population fractions are often measured globally, localized measures are also used. For example, Bottery and colleagues [104] measured population fractions as a function of microcolony radius, while Dal Co and colleagues [58] measured population fractions within a given radius for each cell to quantify the length scale of interactions mediated by secretion of diffusible molecules.
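The per-cell neighbourhood measurements just described (population fractions within a given radius of each cell) are straightforward to compute once cells have been segmented and classified. The sketch below is a minimal version of that calculation; the radius, the toy data, and the input format are illustrative assumptions rather than the exact procedure used in the cited studies.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_population_fraction(positions, species, focal_species, radius):
    """Per-cell fraction of `focal_species` among neighbours within `radius`.

    positions: (N, 2) array of cell centroids; species: length-N array of labels.
    Returns an array of length N (the focal cell itself is included in its neighbourhood).
    """
    positions = np.asarray(positions, dtype=float)
    species = np.asarray(species)
    tree = cKDTree(positions)
    neighbours = tree.query_ball_point(positions, r=radius)  # list of index lists
    fractions = np.empty(len(positions))
    for i, idx in enumerate(neighbours):
        fractions[i] = np.mean(species[idx] == focal_species)
    return fractions

# Example usage with toy data (two strains, 'A' and 'B', in a hypothetical field of view):
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 50.0, size=(200, 2))
strain = rng.choice(["A", "B"], size=200)
frac_A = local_population_fraction(pos, strain, "A", radius=5.0)
print(frac_A.mean())  # close to 0.5 for a well-mixed random arrangement
```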
In microbial communities consisting of a large number of species, an index summarizing biodiversity, such as the Shannon diversity index [105], may be more informative than specific population fractions. Most summary statistics that describe spatial patterns in multispecies communities quantify the degree of interspecies mixing or (conversely) of monospecies patchiness. These measures are commonly used in the field of landscape ecology [121]. Metrics from landscape ecology traditionally rely on counting adjacencies between image pixels, each of which is assigned a value corresponding to its dominant occupant. These metrics can be applied in the same manner for low-magnification microscopy images of microbial colonies. Shannon entropy is a canonical metric for species mixing [105]; it quantifies the overall disorder between any number of populations by counting like-and non-alike pixel adjacencies. Kong and colleagues [35] used this measure to assess the extent of red-green pixel colocalization in 2-strain microbial communities from microscopy images taken at 7× magnification. Li and Reynolds [106] developed a contagion index [122] that quantifies the deviation from the maximum entropy state using the same type of pixel adjacency counts. This contagion index is used widely in landscape ecology because it captures both aggregation of single populations and intermixing of different populations. Landscape ecology metrics could be extended to higher-magnification single-cell data by generating a physical contact network and accounting for nonrectangular adjacency structure ( Fig 4B). Alternatively, single-cell images can be smoothed until continuous single-species patches are formed [104,119]. Other metrics defining patch shape, aggregation, and species/ strain diversity (discussed below) could also translate to the single-cell level. Mony and colleagues [123] discuss applications of other higher-level principles from landscape ecology to analysis of microbial community assembly and structure. While the contagion index has not yet been applied to microbial studies, related measures have been used. For example, Bottery and colleagues [104] counted physical contacts in pairs of cells of differing strain/species (Fig 4B). They normalized these counts to initial neighbour counts, arriving at a metric they called the neighbour index. An alternative intermixing measure that does not require counting all physical contacts between cells is a probability matrix for adjacent species identities, computed by identifying the species/strain of a cell's nearest neighbour. Glass and Riedel-Kruse [73] used this type of measurement to quantify effects of surface nanobodies and antigens on cell-cell adhesion. Summary statistics that describe proportions of species within some defined neighbourhood ( Fig 4C) are also used to describe intermixing of microbial populations. The segregation index [110][111][112][113] measures the degree to which cells within a given neighbourhood radius are related to one another (by genotype or phenotype). In this case, the radius is defined as the distance over which interactions mediated by small molecules are expected to equally influence all cells within the neighbourhood [113]. The segregation index has been applied to simulated data in numerous microbial ABM studies but has yet to see use in the context of single-cell microscopy data. 
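Several of the adjacency-based mixing measures above (Shannon entropy of pixel adjacencies, the contagion index, the neighbour index) reduce to counting like and unlike pairs among neighbouring pixels or touching cells. A minimal sketch of such a count on a labelled image is shown below; the 4-neighbour pixel adjacency and the specific entropy normalization are illustrative choices rather than the exact definitions used in the studies cited above.

```python
import numpy as np

def adjacency_mixing_entropy(label_image):
    """Shannon entropy (bits) of species-pair adjacencies in a 2D label image.

    label_image: 2D integer array, each pixel labelled by its dominant occupant.
    Horizontal and vertical (4-neighbour) pixel pairs are counted as unordered pairs;
    a well-mixed pattern spreads probability over more pair types and scores higher.
    """
    img = np.asarray(label_image)
    horiz = np.stack([img[:, :-1].ravel(), img[:, 1:].ravel()], axis=1)
    vert = np.stack([img[:-1, :].ravel(), img[1:, :].ravel()], axis=1)
    pairs = np.vstack([horiz, vert])
    pairs.sort(axis=1)                      # treat (a, b) and (b, a) as the same pair type
    _, counts = np.unique(pairs, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Example usage: a random salt-and-pepper mixture versus two segregated halves.
rng = np.random.default_rng(0)
mixed = rng.integers(0, 2, size=(40, 40))
halves = np.repeat([[0], [1]], 20, axis=0).repeat(40, axis=1)
print(adjacency_mixing_entropy(mixed), adjacency_mixing_entropy(halves))  # mixed scores higher
```

Radius-based measures such as the segregation index replace the contact counts in this sketch with neighbourhood membership, as discussed next.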
Generalizing this measure, the proportion of conspecific neighbours is defined as the probability that 2 randomly selected individuals separated by some defined distance will belong to the same population [107]. Computing this metric over a large sample of individual pairs provides the proportion of conspecific neighbours as a function of distance. An alternative way to define a neighbourhood is by a characteristic length scale. McNally and colleagues [109] used a static structure factor to identify transitions from well-mixed to segregated states in antagonistic 2-strain communities. This metric was computed using Fourier transforms of binarized pixel intensities to assess spatial (patch size) frequencies of each strain within a characteristic length scale. Other metrics for defining intermixing on larger scales use the number of single-species patches as a measure of interspecific mixing. For example, the intermixing index is determined by the average number of single-species patch transitions along a line or arc. This metric has been used as a measure of species colocalization in low-magnification images of biofilms and microbial colonies [117,118]. Blanchard and Lu [103] and Bottery and colleagues [104] used the number of single-strain sectors of a circular colony to characterize spatial patterns in 2-strain communities growing on a surface with open boundary conditions (Fig 4D). Some spatial patterns may be observable from the physical shape of single-species sectors or colony boundaries (Fig 4D). Kan and colleagues [115] and Rudge and colleagues [116] measured the fractal dimension of species patch boundaries, which quantifies jaggedness. Blanchard and Lu [103] noted that the roughness of a growing colony's edge increases when there are antagonistic interactions between different strains. Amor and colleagues [119] and Bottery and colleagues [104] used sector widths as an indirect measurement of spatial mixing, because larger widths imply less mixing. The perimeter-to-area ratio of single-species sectors could also be appropriate as a summary statistic for shape [121], although it has not been used yet in microbial studies. The summary statistics described above are applicable to end-point measurements. Of course, these can be measured through times series, but alternative measures rely explicitly on time series, e.g., through windowed averages and autocorrelation [124]. Periodicity can also be used as a temporal metric, quantified by, e.g., a periodic order parameter, as demonstrated by Kim and colleagues [53], who summarized spatiotemporal synchronization of gene expression in a 2-strain community. Time derivatives of summary statistics can also be assessed. For example, Dell'Arciprete and colleagues [99] and van Holthe tot Echten and colleagues [98], discussed above, both use the velocity of topological defects to characterize microcolony dynamics. Outlooks There is no doubt that spatiotemporal models of microbial communities will continue to grow in complexity (and corresponding computational requirements) as researchers continue making advances in synthetic ecology, in microbiome engineering, and in characterizing natural systems. In this section, we survey some outlooks for standardizing and streamlining model development and validation. 
Pattern-oriented modelling as a guideline for standardizing microbiological models It can be challenging to describe ABMs efficiently, but complete descriptions are crucial; incomplete reporting leads to difficulties with subsequent implementation and replication, as demonstrated by Donkin and colleagues [125] and discussed in [124,126]. Furthermore, systematic model documentation can improve model quality by enforcing critical thinking about the model's objective, formulation, implementation, and validation. In surveying modelling practices for microbial communities, we found that modelling and documentation practices vary considerably, especially regarding model calibration. A systematic framework for model development and testing, referred to as pattern-oriented modelling (POM) [84,85], sees frequent use in macrobial ecology and has been used occasionally in microbial settings as well [127][128][129][130]. POM addresses "the multi-criteria design, selection and calibration of models of complex systems" [85]. The framework formalizes all stages of the modelling pipeline, from model formulation, to testing, to calibration and validation. The "patterns" in POM are any quantifiable features of model simulations; we referred to these as summary statistics in Section 3. These measures are most useful when they span ecological scales: individual, population, community, ecosystem. As highlighted by the POM framework, summary statistics facilitate validation by reducing system dimensionality [131]. Moreover, they can guide model formulation by focusing attention on the aspects of simulations that will be quantitatively captured. POM's model validation strategy is standard [124,131]: begin with qualitative comparison of model predictions with experimental data, then sample the parameter space to determine the sensitivity of summary statistics to parameter values (typically done in a one-at-a-time fashion, given computational costs) (e.g., [108]). Acceptable parameter fits are then determined based on systematic minimization of SSE quality-of-fit measures using a weighted average of the summary statistics, as in, e.g., [132][133][134]. Documentation of all model formulations and parameter sets tested can provide insights into model behaviour and can potentially reveal underlying mechanisms of emergent community properties. Feature identification through topological data analysis The selection of appropriate "patterns" is a subjective task, as acknowledged by the architects of POM [85]. Moreover, it is not always clear how best to quantify these patterns as summary statistics once they have been identified. Some features are easy to represent numerically (e.g., average population density), but many relevant patterns are qualitative, or manifest as complex spatiotemporal configurations. In some cases, existing theory can offer tools to quantify these features, such as order parameters from liquid crystals, or Fourier coefficients to identify feature scales (both described in Section 3). In the absence of such tools, many researchers rely on visual inspection, which introduces subjectivity into the calibration pipeline. A generic approach to pattern identification is provided by the recently developed tools of topological data analysis (TDA). TDA provides tools to quantify topological (i.e., qualitative) features within datasets. It can be applied either to discrete datasets, like those from ABMs, or to continuous data, like those from PDE models. 
TDA encompasses a wide variety of tools and techniques. Here, we focus on the most popular: persistence homology. (For a general overview of the field, see [135]; a broad discussion of applications to biology is presented in [136].) Persistence homology can be thought of as a nonlinear analogue of a more familiar technique: principle component analysis (PCA) [137]. PCA is used to identify the variational structure within datasets: If there are correlations within the data, the points will tend to cluster around certain linear subspaces (lines, planes, etc.) and the data will exhibit less variation in the directions perpendicular to these subspaces. One application of PCA is dimensionality reduction. Data can be projected onto these linear subspaces, thereby reducing the dimensionality of the dataset with minimal information loss. Whereas PCA identifies linear structure within a dataset, persistence homology identifies arbitrary nonlinear structure. A basic persistence homology workflow for discrete data can be described as follows (Fig 5). First, data are represented as a collection of points in some (typically) high-dimensional space that characterizes features of interest. If, for example, TDA were to be used to characterize the results of an agent-based simulation, each point might represent an individual agent, with the coordinates corresponding to features of that agent: e.g., species, position, length, orientation. To proceed, a length scale L is chosen, a ball of radius L is constructed around each point, and the topological features of the shape thus produced are determined, e.g., connected components and loops. This analysis is repeated over a wide range of length scales (Fig 5A-5C). At small scales, it will produce only a cloud of disconnected points. As the length scale, L, increases, neighbouring balls intersect, forming larger and larger structures until, finally, they merge into one fully connected component. As L varies, the length scales at which various topological features occur (i.e., over which they persist) is recorded. Each topological feature is thus associated to a pair of numbers: the smallest and largest scales at which the feature exists. These can be represented by a persistence "barcode" in which bars are plotted against length scale. Each bar corresponds to topological feature; the bars represent the length scales over which the features occur. (In Fig 5D, the teal bars end at scales at which connected components merge.) As an alternative visualization, the pairs of length scales associated to each topological feature can be plotted in a persistence diagram (Fig 5E; any features near the diagonal occur over only a short range of length scales and may be dismissed as spurious). A similar analysis can be applied to continuous data (e.g., from a PDE model) by characterizing the topology of the sublevel sets of a continuous function, such as population density [135]. Much like PCA, persistence homology provides a natural way to reduce the dimensionality of the dataset. Features that persist over a narrow range of length scales are likely spurious and can be discarded. The features that persist over a wide range of length scales more likely correspond to meaningful structure in the dataset: Connected components might correspond to discrete clusters, loops to periodicity. Persistence diagrams can be compared to one another using a variety of metrics, allowing them to be used directly for model calibration. 
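As a concrete illustration of the workflow just described, the sketch below computes the simplest slice of a persistence barcode, the connected-component (H0) intervals, using a union-find over pairwise distances: every point is born at scale 0, and a component dies at the scale at which it merges into another. Dedicated packages such as ripser or GUDHI are what one would use in practice, and they also return higher-dimensional features such as loops; this sketch covers only H0 and is meant for small point clouds.

```python
import numpy as np
from itertools import combinations

def h0_persistence(points):
    """Death scales of connected components (H0 persistence) for a small point cloud.

    Every point is its own component at scale 0; edges are added in order of increasing
    length, and each merge of two components records that length as a death scale.
    Scales are point-to-point distances (balls of radius d/2 first touch at distance d).
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = sorted(
        (float(np.linalg.norm(pts[i] - pts[j])), i, j) for i, j in combinations(range(n), 2)
    )
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)
            if len(deaths) == n - 1:        # everything has merged into one component
                break
    return deaths

# Example usage: two well-separated clusters leave one long-lived component whose death
# scale reflects the gap between the clusters; short bars near zero are within-cluster noise.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.3, size=(30, 2)), rng.normal(5.0, 0.3, size=(30, 2))])
print(h0_persistence(cloud)[-3:])
```

The death scales returned here correspond to the end points of the barcode bars described above; persistence diagrams collect the same intervals as (birth, death) pairs.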
They can also serve as a starting point for development of custom summary statistics. This type of analysis was applied by Topaz and colleagues [138] to study ABMs demonstrating swarming behaviour. In that case, the agents each have a position and a velocity, so clusters of points in position-velocity space correspond to swarms-closely grouped agents exhibiting collective motion. Previously, researchers had developed case-specific order parameters to quantify such behaviour [139][140][141]. By applying TDA, Topaz and colleagues [138] were able to detect features that the previous ad hoc metrics failed to quantify. Similar approaches could be applied to characterize the dynamics of microbial communities. Machine learning algorithms to accelerate model calibration Conducting global parameter sweeps over high-dimensional parameter spaces is often infeasible due to computational limitations. For spatiotemporal models, this problem occurs due to long simulation runtimes and is exacerbated further for stochastic models, where large simulation ensembles may be required. One approach to address this challenge is to create a simplified input-output representation of the mathematical model of interest. Such an abstraction is known as a surrogate model (also commonly referred to as a metamodel, or emulator). A surrogate model is constructed by fitting a statistical model or machine learning model to training data generated from the mathematical model of interest. Surrogate models can run several orders of magnitude faster than ABM or PDE models that they are fit to, which has motivated their application in calibration of spatiotemporal models of microbial communities [142][143][144]. A surrogate model enables faster exploration of the parameter space for both simulation and uncertainty analysis, albeit at the cost of relying on emulator approximation of model behavior. Although this approach is relatively new to microbial community modelling, surrogate models have been used extensively in other fields, including engineering design [145], climate simulation [146], health economics and public health [147], and ecology [148]. Surrogate models can provide significantly more efficient simulation engines when compared to the original model formulations. However, when comparing computational costs, the time required to obtain the training data for the surrogate model must be considered. The time required for training depends on several factors, such as parameter space complexity, sampling methods for parameter values, and computational cost of the original model. For example, surrogate models of an ABM for biofilm growth required approximately 1,000 hours of serial computing time (Table 2) [143,144]. In contrast, a surrogate model of a PDE model describing spatiotemporal dynamics of pattern-forming bacteria required approximately 100,000 hours of serial computing time [142]. These training times can be reduced through parallelization. For example, the actual time required to generate the training data for the works listed in Table 2 would range approximately from 1 to 7 days if 64 simulations were constantly run in parallel. The use of surrogates for calibrating microbial community models requires familiarity with data sampling methods, supervised machine learning algorithms, and "big data" processing tools. Such projects may demand collaboration between data scientists and modellers who want to access these tools. 
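In practice, the surrogate approach described above amounts to fitting a fast regression model that maps simulator parameters to the summary statistics the simulator would produce, and then querying the regression model instead of the simulator during calibration or sensitivity analysis. The sketch below uses a Gaussian-process regressor from scikit-learn; the two-parameter space, the training design, and the stand-in "simulator" are illustrative placeholders, not the setups of the studies cited above.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulator(params):
    """Stand-in for a slow ABM/PDE run that returns a single summary statistic."""
    growth, adhesion = params
    return np.sin(growth) * np.exp(-adhesion)

# 1) Design: sample the parameter space and run the expensive model once per sample.
rng = np.random.default_rng(42)
X_train = rng.uniform([0.0, 0.0], [3.0, 2.0], size=(40, 2))   # (growth, adhesion) samples
y_train = np.array([expensive_simulator(p) for p in X_train])

# 2) Fit the surrogate (emulator).
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 1.0]),
                                     alpha=1e-6, normalize_y=True).fit(X_train, y_train)

# 3) Query the surrogate densely (cheap) instead of the simulator (expensive), e.g., to
#    locate parameter combinations whose predicted statistic matches an observed value.
X_query = rng.uniform([0.0, 0.0], [3.0, 2.0], size=(10_000, 2))
pred, std = surrogate.predict(X_query, return_std=True)
target = 0.4                                                   # hypothetical observed statistic
best = X_query[np.argmin((pred - target) ** 2)]
print("candidate parameters:", best)
```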
Software packages, such as SUMO [149], SMT [150], and spartan [151], are available to facilitate the process of generating surrogate models. A brief discussion of surrogate model selection for ABM applications is provided in [152]. Surrogate models are not the only use for machine learning in this area. Lee and colleagues [153] reversed the standard modelling pipeline (of training a model on experimental data) by training a neural network on ABM data, and then using that network to infer microbial interactions from microscopy data. They demonstrated that the magnitude and direction of interspecific interactions could be quantified from steady-state spatial distributions of 2 interacting bacterial populations. Their work provides a new perspective on the use of mathematical models in this area. Conclusions Potential applications of microbial communities in biotechnology, health, agriculture, and energy have motivated efforts to design and manipulate both synthetic and natural communities in a predictable fashion. Predictive tools such as ABMs and PDE models will be an essential part of the microbiome engineering toolbox. Systematically calibrating microbial community models to single-cell resolution data is challenging due to the high dimensionality of the data, the intensive image processing requirements, and the specific data processing algorithms required to generate summary statistics. The study of Hartmann and colleagues [70] exemplifies how single-cell data collection and application of systematic calibration techniques can be used to predict community-level properties from single-cell behaviours. We have compiled a collection of summary statistics relevant to microbial communities to facilitate quantitative comparison between experimental data and simulation outputs. Systematic calibration with the aid of these summary statistics can increase confidence in model predictions and overall model utility. Moreover, such calibration can improve the reusability of submodels and specific parameter values by allowing developers to confidently use or build upon these calibrated models. This modular approach is already used in synthetic circuit design workflows (e.g., [154]). Work in macrobial ecology demonstrates how adopting standard model documentation procedures (e.g., the ODD protocol and POM framework) can result in more reproducible and therefore more useful models. Adoption of a similar standard in microbial ecology could yield similar gains in reproducibility. TDA and machine learning algorithms hold potential for facilitating systematic selection of summary statistics and efficient exploration of high-dimensional parameter spaces. As experimental methods and computational techniques continue to improve, it is expected that models will play a prominent role in rationally manipulating microbial communities in complex environments such as bioreactors, guts, soils, and wastewater treatment plants.
2022-10-14T06:17:16.451Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "0093bb64b02cb90d0cac537c33f7bb8d1e452525", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "275e2c9ac3623cab1d0b9c79a404c4064f559900", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
22356381
pes2o/s2orc
v3-fos-license
Light-induced anisotropic skyrmion and stripe phases in a Rashba ferromagnet An external off-resonant pumping is proposed as a tool to control the Dzyaloshinskii-Moriya interaction (DMI) in ferromagnetic layers with strong spin-orbit coupling. Combining theoretical analysis with numerical simulations for an s-d-like model we demonstrate that linearly polarized off-resonant light may help stabilize novel non-collinear magnetic phases by inducing a strong anisotropy of the DMI. We also investigate how with the application of electromagnetic pumping one can control the stability, size, and shape of individual Skyrmions to make them suitable for potential applications. Low-dimensional magnetic structures provide an exciting playground for condensed matter physics and technology applications. Some of them, e. g., helical magnets [1][2][3][4], are known to support topologically nontrivial magnetic textures [5][6][7]. Such noncollinear states emerge as a result of the competition between Heisenberg exchange, antisymmetric Dzyaloshinskii-Moriya exchange, and magnetocrystalline anisotropy, yielding magnetic ground states that are far more intricate than those in homogeneous ferromagnets [8][9][10][11][12]. Recent progress in the fabrication of magnetic materials motivated an interest in particle-like domains, such as magnetic Skyrmions [13][14][15][16][17][18][19][20][21][22][23][24][25], that are typical for nonhomogeneous ferromagnets. Skyrmions and other topologically protected magnetic textures have been proposed as building blocks for logical operations and information storage in the rapidly advancing fields of magnon spintronics [26] and Skyrmionics [27,28]. Spintronics, as a branch of applied science, is traced back to the pioneering work on giant magnetoresistance by Grünberg [29,30] and Fert [31]. The subsequent discovery of spin-transfer torque [32,33] provoked the idea to exploit the spin degree of freedom rather than the charge for processing and transferring information. This idea is currently being extended to magnetic materials supporting localized magnetic excitations such as Skyrmions. Ways to create and control Skyrmions and other noncollinear magnetic textures are essential for the practical implementation of this emerging technology. In this Letter, we investigate microscopically how the noncollinear magnetic domains in thin ferromagnet layers with strong spin-orbit coupling may be controlled by linearly polarized light. The effects predicted may be observed in thin films such as Co/Pt heterostructures subject to short light pulses [34]. The exchange interaction alone may lead only to a collinear orientation of magnetic moments in a cubic crystal. Spatially inhomogeneous magnets are usually associated with a lack of lattice inversion symmetry.
In his seminal work on noncentrosymmetric magnets [35], Dzyaloshinskii identified the one-dimensional magnetic spiral states stabilized by the Dzyaloshinskii-Moriya interaction (DMI) [36] that favors the noncollinear orientation of neighboring spins. A nontrivial ground state arises in helical magnets as a consequence of the competition between the Heisenberg exchange and the DMI. In two-dimensional structures, this competition leads to a helical spin-spiral ground state configuration that becomes unstable in the presence of a magnetic field with the tendency towards the formation of Skyrmions [37]. Quite generally, Skyrmions correspond to the solitonlike solutions of the field equations of Dzyaloshinskii's theory that destroy the homogeneity of magnetic order [38]. The existence of such localized states and the mechanism of their nucleation as mesoscopic objects are rather common for continuum systems described by the free energy functional with Lifshitz invariants [39]. The strength of DMI can be rigorously approached by the microscopic theory, as an indirect exchange interaction between two neighboring spins facilitated by itinerant (conduction) electrons [40][41][42], as well as adopted from the first-principles simulations [43][44][45]. In a thin film, the DMI can be induced by the Rashba spin-orbit coupling. An external electromagnetic field may strongly modify the properties of an electronic system providing an important tool for manipulating materials in a controllable fashion [46][47][48]. It has been recently shown that the effect of off-resonant electromagnetic radiation (with the frequency exceeding the bandwidth of the system) may be described by effective time-independent models with strongly renormalized parameters [49][50][51][52][53][54][55][56][57][58]. In this Letter, we derive the effective s-d exchange model of a Rashba ferromagnet in the regime of strong coupling to external radiation. We find that the DMI strength can be effectively controlled by the application of off-resonant pumping that opens up exciting opportunities for controlling the stability, size, and shape of individual metastable Skyrmions. Also, we show that the application of linearly polarized radiation induces anisotropy of the DMI that not only provides a finer control over the individual DMI strengths in two orthogonal directions but also leads to the appearance of novel anisotropic phases. From the theory point of view, the effect of timeperiodic fields may be described, with some reservations, by the so-called Floquet theory [59,60]. Periodicity of the driving field enables one to map the original timedependent problem to the eigenvalue problem of Floquet states. Off-resonant pumping takes place if the frequency of the driving field is so high that electrons are not able to follow field oscillations. In this case, real absorption or emission of light quanta cannot happen due to restrictions imposed by energy conservation for radiation frequency exceeding the electron bandwidth. Still, such off-resonant radiation affects the system via virtual processes leading to a significant renormalization of the parameters of the initial Hamiltonian of an electron subsystem. Below, we restrict our attention to the effects of linearly polarized light, since it has a greater impact on the noncollinear magnetic textures. For the sake of microscopic treatment, we rely upon the Floquet-Magnus expansion and its generalizations that have been developed in Refs. [61][62][63][64]. 
For microscopic analysis, we consider a weak twodimensional ferromagnet that yields an s-d-like Rashba model for conduction electrons: where m is the unit (|m| = 1) local magnetization vector due to, e. g., localized d electrons, ∆ is the s-d-like exchange energy, p stands for the momentum operator for conduction electrons with an effective mass m, α is the Rashba spin-orbit interaction strength,ẑ is the unit vector in the direction perpendicular to the two-dimensional electron gas, and σ = (σ x , σ y , σ z ) denotes the vector of Pauli matrices. Models of the type of Eq. (1) were originally proposed to explain the physics of ferromagnetic metals beyond the Heisenberg exchange picture [65][66][67][68]. This approach relies upon a formal distinction between a localized (classical) magnetic subsystem (e. g., d or f electrons, that are described by an m field which is governed by a classical Heisenberg model) an itinerant subsystem [e. g., s electrons described by Eq. (1)] that are coupled to each other by means of exchange interaction. An external ac electromagnetic field of frequency ω [69] is introduced in the model of Eq. (1) by means of the Peierls substitution p → p + eA 0 cos ωt, where A 0 = E 0 /ω and E 0 is the electric field component of the field. In what follows, we restrict our analysis to the case of linearly polarized light by choosing E 0 = E 0ŷ , whereŷ is the in-plane unit vector (in the y direction). The Hamiltonian of Eq. (1) is conveniently rotated as The transformed Hamiltonian yields the matrix Floquet model of the form with the coefficients h n defined by where the parameter γ = 2eαE 0 / ω 2 describes the effective light-matter coupling and J n (γ) stands for the nth order Bessel function of the first kind. The high-frequency expansion in the form of the Brillouin-Wigner perturbation theory recently developed for this class of problems [70] maps Eq. (3) onto an effective time-independent Hamiltonian: that is valid away from resonance frequencies. The effective model fails only in a tiny vicinity δγ of the zeros of the Bessel function, δγ ≈ 10 −5 ∆ 2 /( ω) 2 , which is well beyond our numerical resolution [71]. The model is equivalent to that of Eq. (1) with anisotropic renormalization of coupling constants: Rashba spin-orbit interaction strength and s-d exchange coupling. In what follows, we assume a weak ferromagnet and treat the exchange interaction term V perturbatively. Based on the symmetry analysis, Dzyaloshinskii discovered that the effective Ginzburg-Landau free energy functional may allow for terms linear in magnetization gradients provided the absence of lattice inversion symmetry [36]. Later, Moriya argued, on the basis of Anderson's theory of superexchange, that the microscopic mechanism of spin-orbit coupling is responsible for such an interaction [72,73]. The latter can also be thought as a coupling between an excited state of a magnetic ion and the ground state of the neighboring ion. Such a coupling can be derived microscopically from the correction to the bare action S 0 [m] (that collects all terms corresponding to magnetic subsystem) computed to the second order with respect to the perturbation V [71]. To construct the perturbation theory, we take advantage of the bare Matsubara Green's function for the Hamiltonian of Eq. (5a): where g(θ) = sin 2 θ + J 2 0 (γ) cos 2 θ 1/2 and the spectrum ε ± k = 2 k 2 /2m ± α k g(θ k ) acquires the dependence on the direction of the wave vector k = p/ = k(cos θ k , sin θ k ) due to the linear polarization of light. 
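The angular factor g(θ) quoted above makes the light-induced anisotropy explicit: with the polarization chosen along ŷ, the Rashba splitting for momenta along the x direction (θ = 0) is rescaled by |J_0(γ)|, while the splitting along y (θ = π/2) is unchanged. A short numerical check of g(θ) is sketched below; the sample values of γ are illustrative, with γ ≈ 2.405 chosen near the first zero of J_0 mentioned later in the text.

```python
import numpy as np
from scipy.special import j0  # zeroth-order Bessel function of the first kind

def g(theta, gamma):
    """Angular factor of the driven Rashba splitting: sqrt(sin^2(theta) + J0(gamma)^2 cos^2(theta))."""
    return np.sqrt(np.sin(theta) ** 2 + j0(gamma) ** 2 * np.cos(theta) ** 2)

theta = np.linspace(0.0, np.pi / 2, 5)          # from k along x (0) to k along y (pi/2)
for gamma in (0.0, 1.0, 2.405):                 # 2.405 is close to the first zero of J0
    print(f"gamma = {gamma:5.3f}:", np.round(g(theta, gamma), 3))
# gamma = 0 reproduces the isotropic case g = 1 for all directions; increasing gamma
# suppresses the splitting only for momenta along x, leaving the y direction untouched.
```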
The second-order contribution to the effective action in the imaginary time representation is given by where β stands for the inverse temperature while the indexes i and j denote the Cartesian vector components. The polarization operator is expressed as where f (ε) is the Fermi-Dirac distribution function and The straightforward expansion of Π ij (ω = 0, k) around k = 0 up to the terms linear in k yields a fully antisymmetric contribution to the effective action [71]: where we introduce the DMI couplings and Lifshitz invariants L In the absence of an electromagnetic field, i. e., for γ = 0, we obtain an isotropic DMI with D x = D y = ∆ 2 /(2πα ). In the presence of linearly polarized light, the DMI coupling becomes essentially anisotropic as given by Eq. (10). We stress that the employed highfrequency expansion is legitimate only as far as there are no resonant transitions and the parameter γ is away from zeros of the Bessel function J 0 (γ) [71]. To illustrate our results, we consider a classical twodimensional Heisenberg exchange model on a square lattice, that is given by the total energy where S r is the spin on a lattice site r, H ext is an external magnetic field (in energy units) perpendicular to the twodimensional plane,x andŷ stand for the unit vectors in x and y direction, correspondingly, and the lattice constant is set to unity. With the help of the numerical approach described in Ref. [71], we analyze the influence of the anisotropic DMI on the Skyrmion profile. The profile is obtained by relaxing a trial Skyrmion ansatz using the dynamical Landau-Lifshitz-Gilbert equation until the stationary state is reached. To avoid nonuniversal effects of boundary conditions, the numerical simulation is performed in a box of a large size that exceeds characteristic size of a Skyrmion by a large factor (only the central part of the box is shown in Fig. 1). We find that the anisotropic renormalization of the DMI strength of Eq. (10) results in the anisotropic squeezing of a Skyrmion. The Skyrmion becomes elongated along the light polarization direction and develops an elliptic profile as shown in Fig. 1. In the model with γ 1 and positive D x and D y , the in-plane spin projections are directed towards the Skyrmion center. Such a configuration may be referred to as the inverted hedgehog Skyrmion that is distinguished from the hedgehog Skyrmion in which the in-plane spin projections point outwards. The Skyrmion type is, therefore, defined by the overall sign of the DMI coupling. It is also worth stressing that the model (11) supports only Neél-type Skyrmions which are the Skyrmions with a radial orientation of spins. Such Neél-type Skyrmions were observed recently in GaV 4 S 8 [74]. Those should be contrasted with Bloch-type Skyrmions that are characterized by spin orientations perpendicular to the radial direction. The Bloch-type Skyrmions are thought to be characteristic for materials like FeGe [75] and MnSi [76]. It can be shown using the methodology of Ref. [71] that individual Skyrmions are metastable only in certain areas of the parameter space as illustrated in Fig. 2(a). The metastability regions of individual Skyrmions in the model of Eq. (11) must be distinguished from the phases that characterize the absolute minimum of the energy functional. By extending our numerical analysis to search for a ground state [71], we obtain the phase diagram depicted in Fig. 2(b). 
The diagram consists of three phases: (i) the homogeneous ferromagnetic order phase denoted by points, (ii) the Skyrmion crystal phase (a crystal of elliptic Skyrmions) denoted by circles, and (iii) the stripe crystal phase (periodic stripes in the direction of light polarization) denoted by vertical lines. Anisotropy induced by pumping distorts the symmetry of the Skyrmion crystal from equilateral triangular at γ = 0 to isosceles triangular at nonzero γ [71]. The stripe crystal phase is analogous to the helical phase discussed, e. g., in Ref. [77], although, in contrast to the conventional helical phases, the stripe phase arising for the model (11) is of the Néel type, with spins rotating in the radial direction, as opposed to a helix. The orientation of the stripe phase depends on the direction of the induced anisotropy of the DMI. This provides control over the orientation of the stripe phase by changing the polarization of the applied radiation, a property which may be employed in future light-controlled magnetic logic gates. Interestingly, the range of metastability of individual Skyrmions does not generally coincide with the phase boundaries. However, we find that metastable Skyrmions generally do not exist in the stripe crystal phase that is dominant at low values of the magnetic field. In this region, individual Skyrmions quickly become unstable with respect to stretching in the y direction to form a stripe. For intense light with the parameter γ exceeding the first zero of J_0(γ), i. e., for γ ≳ 2.4, the phase diagram is dominated by the stripe phase at small magnetic fields. The Skyrmion crystal phase is limited to a moderate light intensity as shown in Fig. 2(b). Numerical studies of Skyrmion dynamics in the absence of the field (γ = 0) were performed in Refs. [77][78][79][80][81]. In the absence of light, i. e., for γ = 0, the phase diagram of Fig. 2(b) reproduces these known results. Indeed, the obtained values of the critical fields for the transition between the stripe and Skyrmion-crystal phases (H_c1 = 0.0072J) and between the Skyrmion-crystal and ferromagnetic phases (H_c2 = 0.026J) are very close to those given in Ref. [78]. To simplify the comparison with Ref. [78], we have used the same parameter values of the DMI strength D_x = D_y = 0.18J at γ = 0. In conclusion, the field of magnetic Skyrmions has attracted considerable attention due to the potential applications of Skyrmions in information processing. The major advantage of such noncollinear spin configurations as compared to domain walls is the possibility to make the Skyrmion size as small as a few nanometers without losing its stability. In this Letter, we employ the s-d-like exchange model for a weak two-dimensional ferromagnet with strong spin-orbit coupling to show that off-resonant linearly polarized light can be used to tune the strength of the DMI and induce a large DMI anisotropy in the two orthogonal directions. This effect leads to the appearance of novel anisotropic phases (an elliptic Skyrmion crystal phase and a stripe phase) and can provide a new tool to control the stability, size, and shape of individual Skyrmions, as well as control over the stripe phases by changing the light polarization direction. The predicted effects may be observed in thin films such as Co/Pt using femtosecond laser pulses [34]. The light pulses must be sufficiently long to drive the structure into a nonequilibrium state that can be considered quasistationary.
The typical experimental facility with the pump fluence 2-3 mJ/cm 2 should be sufficient to test the theoretical results. We also expect that qualitatively the same physics persists at room temperature, helping to create a controlled set of Skyrmions that can be used to make the concept of Skyrmion racetrack memory viable [82]. We thank Alexey Kimel for helpful discussions. The support from the Russian Science Foundation under Project No. 17-12-01359, from the Dutch Science Foundation NWO/FOM 13PR3118, and the EU Network FP7-PEOPLE-2013-IRSES Grant No. 612624 "Inter-NoM" is gratefully acknowledged. II. DERIVATION OF THE DMI STRENGTH In this section we present the derivation of antisymmetric exchange interaction which is linear in magnetization gradients and can be identified as the DMI. To analyze the polarization operator (8) we compute the quantity where ε ± q = 2 q 2 /(2m) ± α qg(θ q ), g(θ) = [sin 2 θ + J 2 0 (γ) cos 2 θ] 1/2 , f (ε) is the Fermi-Dirac distribution,ê x =x, e y =ŷ, and n q = q/|q|. Furthermore, one can show that where we kept the notations of the main text. Taking the integral in Eq. (s6) we conclude that the quantities N (1) i (k) in the linear order with respect to the momentum k are given by and Thus, the second order correction δS[m] to the bare action S 0 [m] reads which coincides with Eq. (9) of the main text with renormalized DMI strength D x , D y and Lifshitz invariants Λ III. DETAILS OF THE NUMERICAL SIMULATIONS In our numerical calculations we use the lattice model given by the Eq. (11). To find the stationary states minimizing the total energy (11) we evolve the overdamped Landau-Lifshitz-Gilbert (LLG) equation until the stationary state is reached. Here, the effective field H ef f is defined by the functional derivative of the energy (11) over the local magnetic moment, The stationary state corresponds to the minimum of the total energy (11). To ensure that a particular state is a global minimum rather than the local minimum we compare the energies of essentially different solutions obtained for each known nontrivial phase (skyrmion crystal, stripe phase or a ferromagnetic phase). To solve numerically the LLG equation (s11) we use an explicit finite element method implemented in C and parallelized with OpenMP. Below we discuss how each of the two figures presented in Fig. 2 of the main text was obtained. The Fig. 2a representing stability of a skyrmion is obtained as follows. We start from a seed approximate solution in the form of a skyrmion and evolve the LLG equation (s11) for sufficiently long time to obtain the numerically exact stationary solution for an isolated skyrmion. On the second stage, the stationary skyrmion solution is distorted by adding a random noise and by varying the parameters γ and H ext . The solution has been accepted as a stable if, upon addition of the noise and variation of the parameters it settles down and does not decay during sufficiently long evolution time of about ∼ 3000 /J. To calculate the phase diagram in Fig. 2b we proceed as follows. Configurations minimizing (11) in a periodic rectangular domain of N x × N y cites are found as stationary solutions of the Eq. (s11) obtained by evolving (s11) from an initial seed solution for all known nontrivial phases (skyrmion crystal or stripe phase). In case of the modulated phases (skyrmion and stripe phases) the periodicity of the system for an infinite system is not known beforehand and should be found by minimizing the lattice parameters. 
To perform this task numerically we analyze a rectangular region of the skyrmion lattice made of N x × N y spins, which contains two skyrmions. The energy configuration minimizing the energy density E/N x N y is then found by the coordinate descent method in the configuration space (N x ,N y ). The minimal energy configurations obtained for different values of the parameter γ and for a fixed magnetic field H ext = 0.015J are shown in Fig. s1 where a part of the skyrmion lattice with 210 × 210 spins is shown. As seen from the Figure, the increase of γ deforms the skyrmion configuration from the equilateral triangular lattice (at γ = 0.0) to a isosceles triangular lattice (at γ = 0.5 and γ = 1.0). Similar procedure is implemented for the stripe crystal. In this case the problem is simplified since the optimization is required only along one direction (due to the tendency of stripes to align along the "easy" direction defined by the polarization of the applied radiation). The energies of the stationary solutions obtained are, then, compared to the minimal energy identified as a ground state.
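The relaxation procedure described above can be prototyped compactly: build the effective field from the lattice energy and evolve the damping-only (overdamped) LLG update until the configuration stops changing. The sketch below uses one conventional square-lattice discretization of nearest-neighbour exchange, interfacial (Néel-type) DMI with independent couplings D_x and D_y, and a Zeeman term; since Eq. (11) is not reproduced explicitly in the text above, the DMI discretization, signs, time step, and lattice size here are illustrative assumptions rather than the authors' exact implementation, and only the values D = 0.18J and H_ext = 0.015J are taken from the text.

```python
import numpy as np

def effective_field(S, J, Dx, Dy, H):
    """Effective field -dE/dS for exchange, one Neel-type DMI discretization, and a Zeeman term.

    S: (Nx, Ny, 3) array of unit spins with periodic boundary conditions (via np.roll).
    The DMI convention used here is E_DMI = sum over x-bonds of Dx (Sz_i Sx_j - Sx_i Sz_j)
    plus the analogous y-bond term with Dy; signs may differ from the paper's Eq. (11).
    """
    xp, xm = np.roll(S, -1, axis=0), np.roll(S, 1, axis=0)
    yp, ym = np.roll(S, -1, axis=1), np.roll(S, 1, axis=1)
    field = J * (xp + xm + yp + ym)                      # Heisenberg exchange
    field[..., 0] += Dx * (xp[..., 2] - xm[..., 2])      # DMI, x-bonds
    field[..., 2] -= Dx * (xp[..., 0] - xm[..., 0])
    field[..., 1] += Dy * (yp[..., 2] - ym[..., 2])      # DMI, y-bonds
    field[..., 2] -= Dy * (yp[..., 1] - ym[..., 1])
    field[..., 2] += H                                   # external field along z
    return field

def relax(S, J=1.0, Dx=0.18, Dy=0.18, H=0.015, dt=0.05, steps=5000):
    """Overdamped LLG relaxation: dS/dt = -S x (S x H_eff), renormalizing |S| = 1 each step."""
    for _ in range(steps):
        Heff = effective_field(S, J, Dx, Dy, H)
        S = S - dt * np.cross(S, np.cross(S, Heff))      # damping term only, no precession
        S /= np.linalg.norm(S, axis=-1, keepdims=True)
    return S

# Example usage: relax a random 64 x 64 configuration; with these parameters a noncollinear
# (Skyrmion/stripe) texture typically emerges from the noise, though more steps and a careful
# comparison of candidate states are needed to identify the actual ground state.
rng = np.random.default_rng(3)
S0 = rng.normal(size=(64, 64, 3))
S0 /= np.linalg.norm(S0, axis=-1, keepdims=True)
S_final = relax(S0)
```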
2017-10-07T16:49:38.000Z
2017-05-05T00:00:00.000
{ "year": 2017, "sha1": "48825cb0d3c2d171fb649a799558d893104d4220", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1705.02261", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "48825cb0d3c2d171fb649a799558d893104d4220", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
117012836
pes2o/s2orc
v3-fos-license
Search for $ZH$ Production at D0 in $p\bar{p}\to\ell^+\ell^-b\bar{b}$ Events at $\sqrt{s}=1.96$ TeV We present a search for a low-mass standard model Higgs boson produced in association with a $Z$ boson decaying to charged leptons at a center-of-mass energy of $\sqrt{s}=$1.96 TeV with the D0 detector at the Fermilab Tevatron collider. The search is performed in a large data set of events containing two opposite-sign leptons (electron, muon, tau) and one or two b-tagged jets. Recent improvements to the sensitivity, from increased lepton acceptance to optimized signal-to-background discrimination, will be discussed. Introduction The mass of a standard model Higgs boson is constrained by direct searches performed at LEP and measurements of the top quark and W boson masses [1]. Combining these results, the Higgs mass m H must be less than 186 GeV at 95% confidence level. If m H < 135 GeV, then Higgs bosons are expected to decay primarily to bb. At hadron colliders, the inclusive bb cross section is six orders of magnitude larger than the cross section for Higgs production, so it is not feasible to find evidence for low-mass Higgs bosons produced alone. Instead, we search for associated production of vector (W , Z) and Higgs bosons. Requiring leptonic decay of the W or Z dramatically reduces the multijet background and increases our sensitivity to the Higgs signal. The analysis discussed here is concerned exclusively with ZH production in the ℓ + ℓ − bb final state; the D0 collaboration has also completed similar analyses using ℓνbb [2] and ννbb [3] final states. The irreducible backgrounds in this search are Z production with heavy-flavor jets, top quark pair production, and diboson final states. Instrumental backgrounds include jets faking charged leptons and light jets faking heavy-flavor jets. A more detailed description of this analysis may be found in [4]. The Tevatron and the D0 detector The Tevatron is a proton-antiproton collider located at Fermilab near Chicago, IL. Collisions have a center-of-mass energy of √ s = 1.96 TeV. Fermilab's Accelerator Division continues to optimize the integrated luminosity produced by the Tevatron, and currently the accelerator is performing better than ever before. The total integrated luminosity delivered from RunII is over 7 fb −1 ; results discussed here use up to 4.1 fb −1 of data. D0 is a multi-purpose particle detector, one of two detectors located at collision points around the Tevatron. We have taken data with 90% average efficiency since the start of RunII. In this search, we employ every major component of D0 in order to identify muons, electrons, and heavy-flavor jets [5]. Leptons We strive for maximum Higgs signal acceptance, so our event selection is very loose. For muons, we require central track matches, p T > 10 GeV, and |η| < 2. Electron requirements are p T > 15 GeV and |η| < 2.5. All identified leptons are also required to pass various tracking and calorimeter isolation criteria, and the invariant mass of the dilepton pair must match the Z boson resonance: 70 < m ℓℓ < 130 GeV. To be sure we accept as many Higgs events as possible, we select some electrons and muons that are not initially identified as such. In the inter-cryostat region (ICR) where there is little calorimeter coverage, we look for electrons that have been reconstructed as taus. In the various gaps in muon coverage, we look for isolated tracks. These additions improve our signal acceptance by 15%. 
Jets We select events with at least 2 jets, leading jet p T > 20 GeV and second jet p T > 15 GeV. Before tagging, S/B = 0.0003. Using D0's neural net b-tagging algorithm [6], we require either two loose tags (inclusive) or one tight tag (exclusive), which improves S/B by factors of 20 or 10, respectively. The optimization of our final multivariate discriminant depends significantly on b-tag criteria, as do the expected signal and background yields, so we can improve our Higgs sensitivity by analyzing these orthogonal b-tag samples separately. With the two highest-p T tagged jets or the one tagged jet and highest-p T untagged jet, we compute the invariant mass of the dijet system, which is the kinematical variable most sensitive to low-mass Higgs production. Kinematic fit With an ideal detector, we would have very little missing E T in ZH → ℓ + ℓ − bb events. Thus, events with a large p T imbalance must result from either background processes or mismeasurement. Given our knowledge of the jet and lepton energy resolutions, we can make our measurements more precise and discriminate against backgrounds such as multijet production and tt → ℓ + ℓ − ννbb. To do this, we perform a constrained multi-dimensional fit on the p T , η, and azimuthal angle φ of the two leptons and two candidate jets. We constrain the dilepton invariant mass to 91.2 ± 2.5 GeV and the vector sum of p T to 0.0 ± 7.0 GeV. Subsequently, we remove events with high kinematic fit χ 2 values to reduce instrumental background. As a result of the fit, the dijet mass resonance in Higgs events is more prominent, which translates directly to a 6-11% increase in Higgs sensitivity. Results We use boosted decision trees (BDT) [7] to combine the discrimination power of several kinematical variables: the dijet mass and p T , the dilepton p T and colinearity, and many others. No evidence for ZH production is seen, so we compute upper cross section limits based on the shape of the BDT output, using a modified frequentist approach [8,9]. The leading sources of systematic uncertainty are the Z+heavy-flavor cross section (20%), the jet energy scale (10%), and b-tagging efficiencies (10%). Assuming a Higgs mass of 115 GeV, we exclude ZH production above 9.1 times the standard model expectation at 95% confidence level. Limits assuming other Higgs masses are shown in Fig. 1. Upon comparison to previous results [10], our limits have improved roughly 12% beyond what is expected with more data. This is due to our use of more optimal selection criteria and more sophisticated analysis techniques. We are currently investigating further improvements to the analysis, including the use of matrix-element discriminants, improved b-tagging, and further optimization of our multivariate discriminant.
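The constrained fit described above can be prototyped as a penalized least-squares problem: the measured lepton and jet kinematics are allowed to float within assumed resolutions, and the dilepton-mass and transverse-momentum-balance constraints enter as additional Gaussian terms with the quoted widths (2.5 GeV and 7.0 GeV). The sketch below shows one common way to set up such a fit, not the D0 implementation; the measured values and the per-object resolutions are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def four_vector(pt, eta, phi):
    """Massless four-vector (px, py, pz, E) from pT, pseudorapidity, and azimuth."""
    px, py, pz = pt * np.cos(phi), pt * np.sin(phi), pt * np.sinh(eta)
    return np.array([px, py, pz, np.sqrt(px**2 + py**2 + pz**2)])

def chi2(x, meas, sigma):
    """Measurement term plus Gaussian penalties for the Z-mass and pT-balance constraints.

    x and meas hold (pT, eta, phi) for lepton 1, lepton 2, jet 1, jet 2 (12 numbers).
    """
    chi = np.sum(((x - meas) / sigma) ** 2)
    p = [four_vector(*x[3 * i:3 * i + 3]) for i in range(4)]
    pll = p[0] + p[1]
    mll = np.sqrt(max(pll[3] ** 2 - np.dot(pll[:3], pll[:3]), 0.0))
    sum_pt = np.hypot(sum(v[0] for v in p), sum(v[1] for v in p))
    chi += ((mll - 91.2) / 2.5) ** 2        # dilepton invariant-mass constraint
    chi += (sum_pt / 7.0) ** 2              # transverse-momentum balance constraint
    return chi

# Example with made-up measured kinematics (pT in GeV) and placeholder resolutions.
meas = np.array([45.0, 0.3, 0.1, 42.0, -0.5, 3.0, 60.0, 1.1, 1.5, 35.0, -1.4, 4.4])
sigma = np.array([2.0, 0.01, 0.01, 2.0, 0.01, 0.01, 9.0, 0.05, 0.05, 7.0, 0.05, 0.05])
fit = minimize(chi2, x0=meas, args=(meas, sigma), method="Nelder-Mead",
               options={"maxiter": 20000, "fatol": 1e-4, "xatol": 1e-4})
print("chi2 at minimum:", round(float(fit.fun), 2))
```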
2010-06-07T18:39:49.000Z
2010-06-04T00:00:00.000
{ "year": 2010, "sha1": "4a4d8fe59846aae6b9199d6b657af3c550893b4e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "4a4d8fe59846aae6b9199d6b657af3c550893b4e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
222348746
pes2o/s2orc
v3-fos-license
Surface Electromyography in Clinical Practice. A Perspective From a Developing Country Surface electromyography (sEMG) has long been used in research, health care, and other fields such as ergonomics and brain-machine interfaces. In health care, sEMG has been employed to diagnose as well as to treat musculoskeletal disorders, pelvic floor dysfunction, and post-stroke motor deficits, among others. Despite the extensive literature on sEMG, the clinical community has not widely adopted it. We believe that in developing countries, such as Chile, this phenomenon may be explained by several interacting barriers. First, the socioeconomics of the country creates an environment where only high cost-effective treatments are routinely applied. Second, the majority of the sEMG literature on clinical applications has not extensively translated into decisive outcomes, which interferes with its applicability in low-income contexts. Third, clinical training on rehabilitation provides inadequate instruction on sEMG. And fourth, accessibility to equipment (i.e., affordability, availability, portability) may constitute another barrier, especially among developing countries. Here, we analyze socio-economic indicators of health care in Chile and comment on current literature about the use of sEMG in rehabilitation. Then we analyze the curricula of several physical therapy schools in Chile and report some estimations of the training on sEMG. Finally, we analyze the accessibility of some available sEMG devices and show that several match predefined criteria.
We conclude that in developing countries, the insufficient use of sEMG in health might be explained by a shortage of evidence showing a crucial role in specific outcomes and the lack of training in rehabilitation-related careers, which interact with local socioeconomic factors that limit the application of these techniques. INTRODUCTION Since the ′ 40s, surface electromyography (sEMG) has been used in a variety of settings, including motor-control research, education, health care, rehabilitation, ergonomics, and human-computer interfaces, among others (1). Unlike needle EMG, which has long been used in the assessment of neuromuscular disorders, sEMG is rarely employed in clinical and rehabilitation practice. The literature on sEMG is extensive, and despite the total publications counting in the thousands, the number of papers devoted to clinical applications is considerably fewer. Adding terms such as "physiotherapy, " "stroke, " "gait, " or "back pain" leads to 60, 376, 497, 124, and 521 records, respectively. This suggests an important gap between the total literature on sEMG and the part of it dedicated to clinical applications. This gap prevents the inclusion of sEMG applications into clinical practice guidelines (2)(3)(4)(5)(6)(7). In developing countries, such as Chile, high inequities in per-capita income determine the quality and opportunity of health care. Combined with a centralized distribution of high-complexity health centers and specialists, these factors create a scenario where only high-impact and cost-effective interventions are applied. Therefore, potentially useful but non-critical tools-such as sEMG-are usually left outside the clinical armamentarium. The underrepresentation of sEMG in clinical guidelines determines that in rehabilitation careers, these topics are either not routinely taught, or maybe included in theoretical courses (i.e., movement control, muscle physiology) but not in clinical internships. These might be substantial reasons that explain why rehabilitation professionals do not routinely use sEMG in their practice (8). Another barrier to the widespread use of sEMG in developing countries may simply be the accessibility (i.e., cost, availability, portability) of the current EMG devices. Thus, we explore and compare the characteristics of several devices to shed some light on this topic. SOCIOECONOMIC ASPECTS In developing countries, such as Chile, there are substantial barriers that impact health care access. The high inequity in income distribution determines the opportunity and quality of health care. For example, the Gini index, which measures income-distribution inequality (0: perfect equity, to 100: perfect inequity), shows that Chile scores 44.4 (9), placing it at the percentile 84 worldwide. Public spending on health is also low. The Domestic General Government Health Expenditure index (10) shows that in 2017 Chile spent 4.5% of its GDP in health, placing it in the percentile 74 worldwide. Other countries in the region such as Argentina and Brazil spent 6.6 and 4%, respectively. Compared to Sweden (9.2%), Finland (7.1%), and Norway (8.9%), our expenditure places us far from developed countries. Additionally, high-complexity health care centers and specialists are located mostly in the capital (11). According to the health department of the Chilean government (www.minsal. cl), there are currently 23 high-complexity (type 1), and 37 medium-high (type 2) hospitals in the country. 
According to our 2017 census, the current population is 17.5 million people. Considering that roughly 80% of this population is cared for by the public system, this implies that each of these centers has to take care of about 230,000 people. These complex and intermingled factors create a scenario where health policies favor mostly high-impact and cost-effective interventions. For example, for 80 high-prevalence pathologies, the chilean government warrants access to diagnostic tools and treatments for which there is sufficient evidence of costeffectiveness (12). As a particular case, for the treatment of acute ischemic stroke our public health system provides state of the art treatment (2) in many centers. Although expensive, there is enough evidence supporting the investment of our limited resources in such cost-effective interventions. TRANSLATION OF sEMG LITERATURE INTO CLINICAL GUIDELINES During the last three decades, evidence-based medicine (EBM) has encouraged the testing of many procedures and interventions that were routinely prescribed but not clinically proven (13). Based on the best available evidence, many clinical societies have produced guidelines that establish levels of recommendations for different interventions. Therefore those tests, protocols, or treatments that reliably lead to good outcomes are highly recommended over those whose applications do not provide a benefit or even harm (2,14). In the case of sEMG, a growing number of publications have explored its use in gait analysis (15,16), muscle fatigue (17,18), low-back pain (19,20), muscle activity onset latency (21,22), ankle instability (21), and techniques of analysis (23), just to name a few. Although sEMG is essential for the understanding of neuromuscular physiology and dysfunction, the scarcity of literature demonstrating that it is instrumental for reaching favorable clinical outcomes has prevented its general inclusion in EBM guidelines and might be one of the main barriers for its widespread use in clinical practice. As we discussed earlier, developing countries, such as Chile, favor high cost-effective approaches, which poses considerable obstacles in applying potentially effective but unproven tools. sEMG as a Tool for Therapy: Pearls and Pitfalls As we previously discussed, sEMG has been an essential tool to understand the neuromuscular system; nevertheless, it has also been long employed as a tool for therapy in the form of biofeedback (24) to treat a number of conditions. In dysphagia, it has been used as an adjunctive treatment to standard therapy, where it increases the displacement of hyoid and the laryngeal elevation, increases myoelectrical activity, and improves swallowing (25,26). In post-stroke motor deficits, it has been employed in the rehabilitation of upper and lower extremities. In the upper extremity, when compared to standard therapy, it improves motor scores, but not independence scores (FIM) (27). In the lower extremity, it improves the range of motion and clinical scores of impairment, although it is not clearly superior to standard therapy alone (28). In cervical and shoulder pain, the telerehabilitation treatment with EMG-BF has been shown to be at least as effective as conventional therapy in reducing pain scores (29). It also has been explored in the context 1 | Summary results of a simple analysis of the presence of electromyography-related keywords ("Electromiografía," "Instrumentación," "bioinstrumentación," "EMG," and "sEMG") in the curricula of eight PT careers. 
Number of PT schools where any of the keywords are mentioned in the curriculum 5 Number of PT students from the sampled schools exposed to any EMG-related content/Number of PT students in the analyzed schools of sleep bruxism, where it showed that using EMG-BF during the day produced a decrease in the amplitude of myoelectrical activity of masticatory muscles during sleep, although the impact of this finding in clinical outcomes is not clear (30). In a pelvic floor musculature training and education program, the group receiving EMG-BF training was found to have a better quality of life. Nevertheless, the control group did not receive any therapy, which prevents obtaining stronger conclusions (31). In spinal cord injury, the use of EMG-BF as part of the rehabilitation protocol, leads to higher levels of muscle activation when requesting an elbow flexion. Also, patients reported higher levels of motivation during therapy and considered it as a useful and valuable tool (32). One of the main difficulties with the application of sEMG in this context is that the pooled evidence does not offer reliable supporting results, which has been reflected in clinical guidelines and, therefore, in clinical practice. For example, a 2007 meta-analysis of 13 studies on EMG-BF for post-stroke rehabilitation (33) showed that the analyzed evidence was not sufficient to conclude that it provided an extra benefit over standard therapy in the recovery of stroke patients. A 2014 meta-analysis examined the evidence of several physical-therapy interventions in the recovery of stroke (34). Among those interventions, it assessed EMG-BF in the context of upper and lower limb function and gait. Although there is a tendency for a positive effect, the pooled analysis revealed that it does not add to the standard therapy. These data have crystallized into clinical guidelines such as the Canadian "Evidence-Based Review of Stroke" (4) or the American Heart Association guidelines on stroke rehabilitation (2), which do not offer strong recommendations for the use of EMG-BF for stroke rehabilitation. Reasons for the failure of these meta-analysis are explained by high study heterogeneity (33), small sample sizes (26)(27)(28), lack of electrode placement description, which may not correspond to current standards (35,36); and finally, an inability to reach a certain level on therapy intensity, which is decisive for obtaining significant outcomes (37)(38)(39)(40). Despite being successfully used in several fields, there has been limited pooling of data or systematic reviews of EMG-BF for interventions. The lack of demonstrated effect size resulting from this is a barrier for the implementation of this tool in clinical use. sEMG TRAINING IN CHILEAN PHYSICAL THERAPY SCHOOLS Surface electromyography offers several benefits to rehabilitation professionals, nevertheless, its lack of widespread use may also be explained by insufficient training. To approach this question, we contacted 17 physical therapy (PT) schools, and eight sent us the curricula from their career. These account for 21.1% of the PT students of the country (4,292 of 20,306). As a means to approach the level of influence these schools exert in the local educational landscape, we report the national ranking of the schools' Universities ( Table 1) (41). By using a python script, we searched for the keywords "electromiografía, " "instrumentación, " "EMG, " and "sEMG." 
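As an illustration of the scan just described, a minimal Python sketch is given below. The file layout (one folder per school, one plain-text program per course) and the function names are our own illustrative assumptions, not the script actually used; only the keyword list is taken from the description above.

```python
import re
from pathlib import Path

# Keywords searched for in the course programs (case-insensitive).
KEYWORDS = ["electromiografía", "instrumentación", "emg", "semg"]
PATTERN = re.compile("|".join(r"\b" + re.escape(k) + r"\b" for k in KEYWORDS),
                     re.IGNORECASE)

def candidate_courses(curriculum_dir: Path) -> list[str]:
    """Return the course programs in one curriculum whose text mentions any keyword."""
    return [p.name for p in sorted(curriculum_dir.glob("*.txt"))
            if PATTERN.search(p.read_text(encoding="utf-8"))]

if __name__ == "__main__":
    # Hypothetical layout: one folder per school, one plain-text file per course program.
    for school_dir in sorted(Path("curricula").iterdir()):
        if school_dir.is_dir():
            hits = candidate_courses(school_dir)
            print(f"{school_dir.name}: {len(hits)} candidate course(s): {hits}")
```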
Candidate courses were manually checked by the authors and were only included if the keywords appeared in the contents but not under different headings, such as "bibliography" or "suggested readings." None of the collected course programs mentioned the number of hours or credits devoted to each of the contents, thus, approaching the time spent on electromyography-related content was not possible. We first counted the number of curricula in which at least one course mentioned any of the keywords. This simple approach revealed that five out of eight curricula met these criteria ( Table 1). These curricula account for 3,780 students in the country. The total number of courses where EMG contents are present might be a loose proxy for the amount of training on this technique. Accordingly, for each curriculum with at least one EMG course, we counted the number of courses in which any of the keywords above were present. This approach resulted in a mode of one course per career mentioning any of the keywords ( Table 1). Finally, during the internship, the PT student is placed in a real clinical environment that shapes the repertoire of techniques that he or she will use as a professional therapist. Hence, the presence of sEMG content on these clinical internships might be crucial for the use of this technique as a future PT. We found that none of the internships and clinical oriented courses mentioned any EMG-related content in their description (Table 1). Thus, sEMG training is not provided in all the PT schools, and is taught only in courses that take place during the first 3 years, but not during clinical internships ( Table 1). EMG DEVICES Considering all the barriers to successfully applying sEMG to the clinical practice, a shortage of accessible EMG devices may add another barrier to its use. Here, we consider accessibility as the combination of portability, affordability, and ease of use of a particular EMG device. We define these criteria as follows: • Portability: the device has a small size (pocket size, or handheld device size), can be easily carried to different locations, has internal batteries, and does not require a computer for operation. • Affordable: considering that one of the main end-users of these systems may be the physical therapist, we defined this term based on the average monthly income before taxes (AMI) of a Chilean PT. We chose this parameter because, in their clinical practice, many PTs have to purchase their own equipment. The official statistics indicate that the 1st year after school, the AMI is U$740, and during the 5th year after graduation, it rises to U$1,330 (42). Therefore USD 1,000 seemed like a reasonable threshold. • Ease of use: all the necessary elements (hardware, software) are provided, and is compatible with smartphones or tablets (obtained from brochures or website descriptions). The results are described in Table 2. Some of the devices found are expensive and more suited for research (BTS, Noraxon, Delsys, and Bioelettronica). On the other hand, the most inexpensive one, the Myowave, requires buying additional hardware (i.e., an Arduino board) and programming skills. Thus it is not suited for immediate clinical use. We were pleased to find that at least four devices met all the predefined criteria, which provides the technical means to use sEMG directly in the office. This finding suggests that EMG device accessibility would not necessarily mean a barrier for the use of sEMG. 
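The screening of devices against the predefined criteria is simple enough to state as code; the sketch below encodes the three criteria (portability, a price at or below the ~US$1,000 affordability threshold, and ease of use) and applies them to a small catalogue. The catalogue entries are placeholders and do not reproduce the specifications or prices of the commercial systems mentioned above.

```python
from dataclasses import dataclass

AFFORDABILITY_THRESHOLD_USD = 1000  # derived above from the average monthly income of a Chilean PT

@dataclass
class EmgDevice:
    name: str
    price_usd: float
    portable: bool           # pocket/handheld size, battery powered, no PC required
    complete_kit: bool       # all necessary hardware and software provided
    mobile_compatible: bool  # works with smartphones or tablets

    def accessible(self) -> bool:
        """True if the device meets all three predefined criteria."""
        affordable = self.price_usd <= AFFORDABILITY_THRESHOLD_USD
        easy_to_use = self.complete_kit and self.mobile_compatible
        return self.portable and affordable and easy_to_use

# Placeholder catalogue; names, prices and features are illustrative only.
catalogue = [
    EmgDevice("research-grade multichannel system", 15000, False, True, False),
    EmgDevice("DIY sensor board", 40, True, False, False),
    EmgDevice("wireless clinical unit", 600, True, True, True),
]

for device in catalogue:
    verdict = "meets" if device.accessible() else "does not meet"
    print(f"{device.name}: {verdict} all accessibility criteria")
```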
Finally, another issue could be related to the cost of electrodes, which may impose another barrier. Nevertheless, for most of the applications, either disposable (∼U$0.15/piece) or reusable (∼U$0.40/piece) electrodes do not constitute a substantial obstacle for using sEMG (reference prices obtained from amazon.com). CONCLUSIONS Based on available information and personal insights, we have discussed some of the barriers to the use of sEMG that might be relevant in a developing country such as Chile. Several socioeconomic, and political aspects of our country determine that health policies favor those interventions that are highly costeffective. The failure of the literature to translate into decisive outcomes has kept sEMG restricted mostly to research. To open a path for the inclusion of sEMG into clinical guidelines that recommend it as a necessity and not only as a complementary tool, it will be necessary to produce well designed and outcomeguided studies that also attain clinical standards. This will lead to results suitable for pooling into a meta-analysis, which may influence contexts that favor cost-effectiveness. Who should generate this research? We think that this type of literature can arise more easily from transdisciplinary teams of clinical professionals (physicians, physical therapists, speech therapists, etc.) and developers (engineers, designers, etc.), in which the patients are at the very center of their activity. Clinicians may perfectly understand patients' problems, but without help from engineers, they will not be able to solve them. On the other hand, engineers and designers that are not connected to a clinical setting may create solutions that are either too complex or too difficult to implement and do not necessarily solve practical problems. In either case, the patients' particular needs are left unmet. We think that these types of interactions are probably the best remedy to transform problems and needs into meaningful solutions and to advance the research on sEMG. In a different vein, our analysis of the curricula from Chilean PT schools, confirmed our suspicion of a lack of training in this area. In the schools with EMG training, this is taught mainly in one or two courses during the entire career, and the training takes place at the beginning of the career but not in the clinical internships, which may explain why PTs do not regularly use sEMG in their practice. Regarding the accessibility of sEMG devices, we found that at least four devices met predefined criteria of being portable, affordable, and easy to operate. This suggests that accessibility or price does not constitute in itself a barrier for the use of sEMG in the clinical practice and that the main limitations arise from political and economic characteristics of the country, from a paucity of compelling clinical indications, and from an insufficient amount of training. Finally, some of the barriers for the use of sEMG in the clinical domain may interact with each other in a circular manner, for example, the paucity of clinical evidence and the view of sEMG only as biofeedback may create insufficient pressure for both the development of health policies and for the training of rehabilitation professionals in sEMG. This insufficient training leads to an insufficient mass of professionals using sEMG, which leads to fewer grant applications and less research in the area, which leads us to the starting point. 
Also, there could be insufficient teaching dedicated to instrumentation or on technologies for rehabilitation. These types of courses could broaden the view on interventions and techniques that could enrich the clinical practice of rehabilitation professionals. Therefore, we think there is a need for more high-quality clinical evidence that presents an unavoidable pressure to employ this technique. There is a need for advancing the training not only in PT schools, but also in speech therapy and occupational therapy as well. A critical mass of professionals trained on these techniques backed up by sufficient clinical evidence may create the perfect scenario for the massive use of surface electromyography.
ALDH1 expression predicts progression of premalignant lesions to cancer in Type I endometrial carcinomas In type 1 endometrial cancer, unopposed estrogen stimulation is thought to lead to endometrial hyperplasia which precedes malignant progression. Recent data from our group and others suggest that ALDH activity mediates stemness in endometrial cancer, but while aldehyde dehydrogenase 1 (ALDH1) has been suggested as a putative cancer stem cell marker in several cancer types, its clinical and prognostic value in endometrial cancer remains debated. The aim of this study was to investigate the clinical value of ALDH1 expression in endometrial hyperplasia and to determine its ability to predict progression to endometrial cancer. Interrogation of the TCGA database revealed upregulation of several isoforms in endometrial cancer, of which the ALDH1 isoforms collectively constituted the largest group. To translate its expression, a tissue microarray was previously constructed which contained a wide sampling of benign and malignant endometrial samples. The array contained a metachronous cohort of samples from individuals who either developed or did not develop endometrial cancer. Immunohistochemical staining was used to determine the intensity and frequency of ALDH1 expression. While benign proliferative and secretory endometrium showed very low levels of ALDH1, slightly higher expression was observed within the stratum basalis. In disease progression, cytoplasmic ALDH1 expression showed a step-wise increase between endometrial hyperplasia, atypical hyperplasia, and endometrial cancer. ALDH1 was also shown to be an early predictor of EC development, suggesting that it can serve as an independent prognostic indicator of patients with endometrial hyperplasia with or without atypia who would progress to cancer (p = 0.012). www.nature.com/scientificreports/ subpopulation of cells, referred to as cancer stem cells (CSC), that have the capability of transforming from an epithelial to a mesenchymal phenotype 4 . Aldehyde dehydrogenase 1 (ALDH1) has been identified as a putative CSC marker in several cancer types 5 and is expressed in endometrial cancer 6,7 . While the body of evidence for ALDH1 expression having clinicopathological and prognostic value in a number of cancers grows, including colorectal cancer 8,9 , bladder and prostate cancer 10,11 , and breast cancer 12,13 , less is known about its ability to predict the progression of pre-cancerous lesions to cancer. Therefore, the aim of this study is to investigate the clinical utility of ALDH1 expression in pre-cancerous endometrial lesions. We test the hypothesis that ALDH1 expression can predict progression of pre-cancerous endometrial lesions to cancer. Materials and methods TCGA analysis. The expression of all 17 ALDH isoforms was analyzed through the CBioPortal for Cancer Genomics (http:// cbiop ortal. org) 14 . 526 cases of uterus corpus endometrial carcinoma (PanCancer Atlas) were evaluated for EMP2 expression (mRNA expression z-scores relative to diploid samples). Data is summarized as an OncoPrint for multiple genes across a set of tumor samples (columns) 14 as well as survival data dividing groups into altered versus unaltered mRNA. Cell culture. HEC1A and HEC1B cells (ATCC, Manassas, VA, USA) were cultured in McCoys or DMEM media supplemented with 10% fetal bovine serum, 1% l-glutamine, 1% sodium pyruvate and 1% penicillinstreptomycin at 37 °C in a humidified 5% CO 2 . 
Experiments were performed on cell lines within 3 months after resuscitation of frozen aliquots and were authenticated based on viability, recovery, growth, and morphology. Cell lines were tested monthly for mycoplasma (Lonza, Walkersville, MD, USA). Flow sort and cell proliferation. Semi-confluent cells were harvested using a 0.5% EDTA solution in HBSS. ALDH high and ALDH low subsets were isolated using an ARIA III flow cytometric machine (BD Biosciences) using the ALDEFLUOR assay kit (StemCell Technologies, Vancouver, BC) according the manufacturer's guidelines. Briefly, 1 × 10 6 cells were incubated in ALDEFLUOR assay buffer containing ALDH substrate or under identical conditions with 50 mmol/L of diethylaminobenzaldehyde, an ALDH inhibitor, as a negative control. In order to compare the rate of proliferation between ALDH high versus ALDH low cells, measurements were made according to manufacturer's instructions. Briefly, plates were removed from the incubator and allowed to equilibrate at room temperature for 20 min, and equal volume of CellTiter-Glo Luminescent Cell Viability Assay reagent was added directly to the wells. Plates were incubated at room temperature for 30 min on a shaker and luminescence was measured on an Envision reader (PerkinElmer; 570 nm). Tissue micro-array construction. Tissue microarrays (TMA) were constructed as previously described to represent endometrial cancer progression 15 . Archival formalin-fixed, paraffin embedded endometrial tissue samples were obtained with UCLA Institutional Review Board (IRB) approval from the David Geffen School of Medicine at UCLA, and the studies conducted in the laboratory were performed under approved guidelines. The IRB waived the need for informed consent as this was a retrospective analysis using de-identified samples. The study cohort consisted of 226 randomly selected patients who underwent endometrial sampling through a variety of biopsy, curettage or resection procedures from 1982 to 2002. Each histologic sample was represented by at least three 1 mm cores that were taken from donor paraffin embedded tissue blocks. Of the 226 patients, 207 individuals had multiple samples representing those who did or did not develop endometrial cancer over the 20 year time period ("metachronous"). The array contained 1879 cores representing the following histologies: (1) benign endometrium (n = 231); (2) simple hyperplasia and complex hyperplasia (n = 141); (3) simple and complex atypical hyperplasia (n = 54); and (4) primary endometrial adenocarcinoma (n = 109). Atrophic, weakly proliferative, proliferative, secretory, disordered proliferative, progestational effects due to hormone therapy, and polypoid endometrium were all grouped as "benign" endometrium. Endometrial hyperplasia was classified based on glandular complexity and nuclear atypia with both simple and complex grouped as "hyperplasia" or "atypical hyperplasia". All endometrial tumors were staged according to the TNM staging system endorsed by the American Joint Committee on Cancer (AJCC) and the International Union Against Cancer (UICC) and limited to endometrioid types 16 . Representative hematoxylin and eosin images are provided in Fig. S1. Low grade adenocarcinomas were identified based on architectural evidence of stromal invasion, usually in the form of stromal disappearance, desmoplasia, necrosis, or a combination of these findings between adjacent glands. 
Type I endometrial cancer variants such as ciliated, secretory, papillary (villoglandular), adenoacanthoma, and adenosquamous were included. For this particular analysis, 453 samples from 158 patients contained adequate ALDH1 immunostaining information for evaluation. Of these, 33 patients had metachronous samples with disease progression to cancer. Table 1 summarizes clinical variables and patient groups as well as ALDH1 expression in each subgroup. All the cases were reviewed using the World Health Organization histological criteria for the diagnosis of endometrial carcinoma and hyperplasia 2 . We evaluated the hematoxylin and eosin stained slides for gland-tostroma ratio (glandular crowding), architectural abnormalities (gland confluency, cribriforming, and papillary architecture). We also evaluated the sections for cytological atypia which includes nucleus-to-cytoplasm ratio, presence and prominence of nucleoli, nuclear chromatin quality, and mitotic activity by light microscopy. If there was an increase in gland-to stroma ratio or architectural atypia with or without cytological atypia, those cases were considered as no response to therapy 2 . Distinction of well differentiated endometrioid carcinoma Immunohistochemistry. Five-micrometer thick TMA sections were de-paraffinized in three washes of xylene and then rehydrated in serial dilutions of ethanol. Antigen retrieval was performed by placing the slides in a container of 0.1 mol/L citrate, pH 6.0, at 95 °C for 20 min. ALDH1 expression was detected using clone 44/ ALDH (1:100; cat #611194, BD Biosciences, San Jose, CA, USA). Staining was visualized using DAKO EnVision + System, HRP (Agilent, Santa Clara, CA). Counterstaining was performed with hematoxylin. Slides were then placed in distilled water, dehydrated and mounted. An isotype control (MAB002, R&D Systems) was used for the negative control slides. Menopause (years) Median ( www.nature.com/scientificreports/ Results were analyzed by a pathologist (Y.E.) who performed scoring of the samples by rating the intensity from 0 to 3 (0 = below the level of detection, 1 = weak, 2 = moderate, 3 = strong) and percentage of cells staining at each intensity. A histologic score (H-score) was calculated for each sample by multiplying the percentage of positive cells by the intensity score. For cytoplasmic and nuclear expression, positive ALDH1 expression was defined as an H-score being larger than 0. For the purpose of reproducibility, two pathologists including one gynecological pathologist (N.A.M) and a general surgical pathologist (Y.E), both of who was blinded to the clinical data, have reviewed each slide and scored the morphology and location of ALDH1 + cells on each slide. Statistics. Statistical analyses were performed using R (http:// www.R-proje ct. org) including the 'survival' and 'survminer' packages. Pooling criteria were as previously described 15 . To examine differences in ALDH1 expression between samples, the Kruskal-Wallis test, Mann-Whitney U test and Spearman correlation were employed. The Kruskal-Wallis test was used to examine differences in ALDH1 expression in relation to the development of cancer. A mean pooled H-score for ALDH1 was calculated across the three cores used in each histologic sample. Since intensity of staining did not seem to affect the outcomes of the results, we used percentage of positivity alone for our calculations. The dependence between categorical variables was tested using the Fisher's exact test. 
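To make the scoring concrete, the sketch below computes a per-core H-score from the percentage of cells at each staining intensity, pools the (typically three) cores of a sample by averaging, and also returns percent positivity alone, which is the quantity ultimately used in the analysis. Treating the H-score as the sum of intensity times percentage over the 0-3 intensity levels is our reading of the description above and should be taken as an assumption, as are the example numbers.

```python
from statistics import mean

def h_score(percent_by_intensity: dict[int, float]) -> float:
    """H-score for one core: sum of (intensity x percentage of cells at that
    intensity) over the 0-3 intensity levels, giving a value between 0 and 300."""
    return sum(intensity * pct for intensity, pct in percent_by_intensity.items())

def percent_positive(percent_by_intensity: dict[int, float]) -> float:
    """Percentage of cells staining at any intensity above the level of detection."""
    return sum(pct for intensity, pct in percent_by_intensity.items() if intensity > 0)

def pooled_scores(cores: list[dict[int, float]]) -> dict[str, float]:
    """Pool the cores of one histologic sample (typically three) by averaging."""
    return {
        "mean_h_score": mean(h_score(core) for core in cores),
        "mean_percent_positive": mean(percent_positive(core) for core in cores),
    }

# Hypothetical sample: three cores, percentage of cells at intensities 0-3.
sample_cores = [
    {0: 80.0, 1: 15.0, 2: 5.0, 3: 0.0},
    {0: 70.0, 1: 20.0, 2: 8.0, 3: 2.0},
    {0: 90.0, 1: 10.0, 2: 0.0, 3: 0.0},
]
print(pooled_scores(sample_cores))  # positive expression = pooled H-score above 0
```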
Estimation curves for probability of cancer-free survival were generated using the Kaplan-Meier method, and comparisons made using the log-rank test. Hazard ratios and prognostic significance of ALDH1 expression were estimated using the Cox proportional hazards model. For barplots, data is presented as the mean expression ± standard error of the mean (SEM). For all results, p < 0.05 was considered significant. ALDH expression in endometrial cancer. High ALDH activity has been associated with self-renewal in a variety of normal and tumor tissues including the prostate, breast, lung, colon, cervix, and ovary 17 , but little is known about its expression in the endometrium. As a starting point, all 18 isoforms were queried using the The Cancer Genome Atlas (TCGA) PanCancer Atlas in 527 patients with uterine Corpus Endometrial Carcinoma 18 . Within all endometrial cancer subtypes, ALDH was present in 57% of patients, with ALDH1 representing the cumulative dominant isoform (Fig. 1A). ALDH isoforms were present in all subtypes of endometrial cancer including serous, papillary, and endometrioid (Fig. 1B). Analysis revealed upregulation of ALDH mRNA in 62.4% of serous/papillary tumors, 57.1% of mixed, and 53.7% of endometrioid endometrial cancers. To understand the significance of this, we evaluated all ALDH isoforms as well as only ALDH1 for potential correlation with survival. To perform this analysis, we utilized The Cancer Genome Atlas (TCGA) and found a significant correlation between overall survival and high expression of the ALDH gene signatures (Fig. 1C). Within the TCGA endometrial cancer cohort, ALDH1 also correlated with survival (Fig. 1D). The ALDH high subpopulation of endometrial cancer cells have CSC properties. To next verify that ALDHhigh versus ALDHlow cells show biological differences, we determined whether ALDH activity could enrich for cells with a higher proliferative capacity in vitro. HEC1A and HEC1B endometrial cancer cell lines were sorted by FACS and designated as ALDH high or ALDH low cell population. To compare the biological behaviour of these two sorted subpopulations, we evaluated their growth curves in vitro. Compared to ALDH low cells, ALDH high cells grew faster over a 5 day incubation period (Fig. 2). Clinical characteristics. While ALDH activity has been implicated with cancer stem properties, little is known about its role in tumor progression. To address this gap, endometrial tissue from 158 patients was analyzed for ALDH1 expression, with clinical characteristics listed in Table 1. The median BMI was 27 (range 16-57), with 40% of the patients meeting the CDC criteria for obesity (BMI > 30). At some time, 20% of patients developed diabetes mellitus. Caucasians represented the major ethnic group examined (81%) while Hispanic/ Latinas and African Americans comprised 8% and 5% of the array cohort respectively. 35% of women smoked. In this cohort of women, the median age for cessation of menstruation occurred at 50 year old (range 31-59). The first biopsy occurred at 46 years old (median age; range 24-83). ALDH1 expression in normal endometrium. Initially, we evaluated the cytoplasmic expression of ALDH1 in benign endometrium (Fig. 3). Expression within the stratum basalis was scored independently from the functional layer, with representative images presented in Fig. 3A. 
In the stratum basalis, weak to moderate cytoplasmic expression occurred in 26.8 ± 3.2% of endometrial stromal cells while benign proliferative and secretory glands showed negligible levels (6.2 ± 1.2 and 9.1 ± 2.4%, respectively) of ALDH1 expression ( Fig. 3B; Kruskal-Wallis, p value = 1.1e −14). Several papers have suggested that outside of the cytoplasm, ALDH1 expression can be present in the nucleus 19,20 , and thus, this site was independently scored. Very low levels of nuclear staining were observed, with generally less than 1% of cores showing any expression (Fig. S2A). ALDH1 expression in glandular epithelium predicts malignant progression. ALDH1 expression was next evaluated for each spot on the TMA and analyzed relative to each histology during malignant progression. Figure 4 illustrates representative images of immunohistochemical staining across histologic groups. Within the epithelia, benign endometrium and hyperplasia showed similar, very low levels of ALDH1 expression. However, progression from hyperplasia to endometrial cancer revealed a step-wise augmentation in cytoplasmic ALDH1 expression ( Fig. 5A; Spearman correlation, rho = 0.25, p = 2.7e−21). The mean cytoplasmic positivity rose more than two-fold in atypia (from 7.0% ± 0.9 to 16.2% ± 1.8%) and further increased in endometrial www.nature.com/scientificreports/ carcinoma (20.7% ± 1.8%). Nuclear staining for ALDH1 remained very low in all histologies and showed only weak correlation with progression of hyperplasia to malignancy ( Fig. S2B; rho = 0.06, p = 0.03). Interestingly, ALDH1 positivity at the time of first biopsy did not correlate with many variables commonly associated with endometrial cancer development including age or BMI, and its expression did not associate with an early onset of menopause (Table 1). In the patient cohort examined, 83% of patients were on hormone replacement therapy, with 45% on estrogen and progesterone at some point during their follow-up. When examining each hormone independently, no correlation between ALDH1 positivity and the use of estrogen and/or progesterone were observed. However, ALDH1 expression did correlate with follow-up time or the number of months from the date of the first informative surgical pathology report to the date of the last surgical intervention. ALDH1 positive tumors correlated with a shorter follow-up time with patients requiring surgical intervention 23 months earlier than those with negative tumors ( Table 1). The results thus far suggested that cytoplasmic ALDH1 positively correlated with malignant progression. Within each histological group, however, there was heterogeneity in ALDH1 expression with some individuals showing higher ALDH1 levels than others. We therefore examined whether ALDH1 provided information regarding future tumor development. First, we dichotomized ALDH1 expression from patients in each histologic group into those who developed or did not develop cancer. In normal tissue, ALDH1 largely resides in epithelia found in the stratum basalis. During premalignant progression, an increase occurred in cytoplasmic ALDH1 expression in patients who went on to develop cancer compared to those who did not (Fig. 6A-C). In all cases, amplified ALDH1 trended in patients where disease progressed. Histologically benign tissue from patients who ultimately went on to develop cancer showed statistically significant increased cytoplasmic ALDH1 positivity compared to those who did not ( Fig. 6A; 5.6 ± 1.3% compared to 23 ± 8.8%, p = 0.002). 
Similarly, comparing patients with endometrial hyperplasia but without atypia, higher ALDH1 positively was observed in women who went on to develop cancer compared to those who did not ( Fig. 6B; 13.5 ± 4.5% compared to 4.6 ± 1.3%, p = 0.02) Higher mean cytoplasmic ALDH1 expression in patients who developed endometrial cancer when initially presenting with atypical hyperplasia was also observed, but this trend was not statistically significant ( Fig. 6C; www.nature.com/scientificreports/ p = 0.71). Nonetheless, the results collectively suggest that positive ALDH1 in the epithelium may enhance the malignant potential of hyperplastic endometrium. We further analyzed this data using the Cox proportional hazards model with cytoplasmic ALDH1 as a continuous variable predictor (Table 2). In patients with hyperplasia or atypical hyperplasia, higher ALDH1 epithelial expression was associated with increased risk for progression to carcinoma (p = 1.4e−3). A Kaplan-Meier plot (Fig. 6D) for patients with endometrial hyperplasia with or without atypia showed a shorter time interval for progression to cancer in those with ALDH1 positive cells than those that were completely negative (median time 57 months vs 144 months, hazard ratio (HR) = 2.66, 95% CI 1.27-5.58; p = 7.1e−03). ALDH1 stromal staining. An emerging issue in tumor progression centers on the interactions between cancer cells and the microenvironment, and several studies have suggested a role for stroma in inhibiting tumor growth 21,22 . As prominent stromal staining occurred in several cores, its relative expression was assessed during stages of malignant progression. ALDH1 levels were highest in benign stroma and hyperplasia without atypia, then steadily decreased during progression to malignancy, with the lowest levels observed in endometrial cancer (Fig. 7A, rho = − 0.28, p = 5.9e−25). When we next examined patients with endometrial hyperplasia, stromal expression of ALDH1 did not predict patients who developed cancer. However, for patients with atypical hyperplasia, lower levels were significantly associated with increased risk of progression to cancer (p = 0.002). Mean stromal positivity was 58.2 ± 5.9% in those who did not develop cancer and 30.1 ± 5.8% in those who did. Applying the continuous Cox proportional hazard model, higher percentage of stromal cells positive for ALDH1 conferred protection, with a hazard ratio of 0.98 (95% CI 0.974-0.996) and p = 6.9e−3 ( Table 2). Stromal ALDH1 expression was next assessed using Kaplan-Meier estimates for disease progression. Patients with higher stromal expression showed a longer median cancer free interval compared to patients with lower levels (130 vs. 70 months respectively; p = 0.012) (Fig. 7B). When the results were merged, any glandular cytoplasmic positivity along with lower levels of stromal staining (cut point again defined by 50% expression level for stroma) suggested significantly poorer outcomes (Fig. 7C). Bivariate analysis showed a difference in median cancer free interval of 130 months vs 6 months (p = 5.67e−06). Collectively, our results reveal an independent effect of ALDH1 within the stroma and epithelium. 
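The survival comparisons reported above (Kaplan-Meier estimates of the cancer-free interval compared with the log-rank test, and Cox proportional hazards models with ALDH1 expression as a continuous predictor) were run in R with the 'survival' and 'survminer' packages. A minimal Python translation using the lifelines package is sketched below; the data frame and its column names are hypothetical stand-ins for the patient-level variables described in the text, so the printed numbers are illustrative only.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical patient-level table: follow-up in months, progression indicator,
# and cytoplasmic ALDH1 positivity (%) at first biopsy.
df = pd.DataFrame({
    "months":        [12, 57, 80, 130, 144, 30, 96, 60, 110, 72],
    "progressed":    [1,  1,  0,  0,   1,   1,  0,  1,  0,   1],
    "aldh1_percent": [25, 18, 6,  0,   0,   40, 20, 15, 3,   8],
})
df["aldh1_positive"] = (df["aldh1_percent"] > 0).astype(int)

# Kaplan-Meier estimate of the cancer-free interval by ALDH1 status,
# compared with the log-rank test.
pos, neg = df[df.aldh1_positive == 1], df[df.aldh1_positive == 0]
km = KaplanMeierFitter()
km.fit(pos["months"], event_observed=pos["progressed"], label="ALDH1 positive")
print("median cancer-free interval, ALDH1 positive:", km.median_survival_time_)
print("log-rank p-value:",
      logrank_test(pos["months"], neg["months"],
                   event_observed_A=pos["progressed"],
                   event_observed_B=neg["progressed"]).p_value)

# Cox proportional hazards model with ALDH1 positivity as a continuous predictor.
cph = CoxPHFitter()
cph.fit(df[["months", "progressed", "aldh1_percent"]],
        duration_col="months", event_col="progressed")
cph.print_summary()
```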
www.nature.com/scientificreports/ Discussion Given the recent interest in cancer stem cells as therapeutic targets for decreasing potential of cancer metastasis and relapse, it is important to investigate the expression of cancer stem cell markers in pre-cancerous tissues and to understand the significance of the expression of these markers within a clinical context of predicting outcomes. High cytoplasmic ALDH1 expression predicts poor prognosis and/or increased tumor aggressiveness in various other cancer types [23][24][25] , and TCGA analysis suggests that ALDH isoforms correlate with poor survival and enhanced proliferation in this study on patients with endometrial cancer. However, few studies to date have used ALDH1 to not just predict cancer aggressiveness but to even predict the development of cancer from potentially pre-neoplastic tissue. These studies, generally limited to the development of oral squamous cell carcinoma 26,27 , have shown ALDH1 expression associated with a three-fold increased risk for the development of cancer 25 . Our study is the first to our knowledge to demonstrate the utility of ALDH1 to predict endometrial cancer development. In addition, our study is the first to demonstrate that in normal endometrial glands cytoplasmic ALDH1 expression largely resides within the basalis. This distinction may have important pathophysiologic implication as others have shown that high ALDH activity has been associated with stem and progenitor cells in various tissues 28,29 . ALDH1, in particular, is highly expressed in hematopoietic progenitors, in intestinal crypt cells, as well as in normal mammary stem cells 17,30,31 . In addition, high expression has been observed in hematopoietic stem cells (HSCs) where its effects influence retinoid metabolism 32 . Functionally, given that the ALDH superfamily of enzymes have been shown to catalyze the formation of retinoic acid and to be critical for the detoxification of endogenous and exogenous aldehydes 17 , we hypothesize that ALDH1 + cells within the basalis may identify the endogenous stem cell population of the endometrial glands. Notably, epithelial cell staining of ALDH1 did not correlate with stromal expression. Instead, ALDH1 positive stromal staining was highest in benign endometrium and lowest in endometrial cancer. While its expression in the stroma has been well documented in multiple cancer types, the clinical implications of its expression have not been well established. Within the tumor microenvironment, multiple cell types including fibroblasts, endothelial cells, and immune cells shape epithelial cell maintenance and regeneration 33 . Significantly lower expression of ALDH1 was observed in the proliferative phase (6.15 ± 1.17, n = 264 cores) and in the secretory phase (9.08 ± 2.43, n = 104). www.nature.com/scientificreports/ the breast, pancreas, colon and prostate [33][34][35] . Our results suggest, similar to recent studies in the breast 21,36 , that ALDH1 positive stroma offers a potential protective effect in the endometrium. Given these findings and its putative roles in cell differentiation, ALDH1 positivity within endometrial epithelia seems to be a biologically important marker of cancer stem cell activity. We have demonstrated that ALDH1 expression can predict cancer development, in addition to predicting patient survival as was demonstrated by others 37,38 . 
Though it remains unknown whether ALDH1 plays a causative role in the transition of hyperplasia/ atypia to malignancy, it is clear that the reported actions of ALDH1 may contribute to several of the behavioral properties of CSCs and potentially predict responsiveness to therapy. For example, ALDH enzymes scavenge reactive oxygen species, protecting these cells from apoptosis, perhaps contributing to aggressive CSC behavior and resistance from targeted and chemical therapies 9, 39 . www.nature.com/scientificreports/ In conclusion, we found in our study that while ALDH1 expression in endometrial epithelia predicts progression from hyperplasia and atypia to cancer, within the stroma it offers a protective effect. Additional studies will be needed to determine if there is any cross-talk between ALDH enzymes within these two compartments or if each is regulated independently.
Fluid dynamics with saturated minijet initial conditions in ultrarelativistic heavy-ion collisions Using next-to-leading order perturbative QCD and a conjecture of saturation to suppress the production of low-energy partons, we calculate the initial energy densities and formation times for the dissipative fluid dynamical evolution of the quark-gluon plasma produced in ultrarelativistic heavy-ion collisions. We identify the framework uncertainties and demonstrate the predictive power of the approach by a good global agreement with the measured centrality dependence of charged particle multiplicities, transverse momentum spectra and elliptic flow simultaneously for the Pb+Pb collisions at the LHC and Au+Au at RHIC. In particular, the shear viscosity in the different phases of QCD matter is constrained in this new framework simultaneously by all these data. The main goal of ultrarelativistic heavy-ion collisions at the Large Hadron Collider (LHC) and the Relativistic Heavy-Ion Collider (RHIC) is to determine the thermodynamic and kinetic properties of strongly interacting matter. The measured hadronic transverse momentum (p T ) spectra at the LHC and RHIC provide convincing evidence for a formation of a strongly collective system and a nearly thermalized quark-gluon plasma (QGP) [1]. In particular, the observed systematics of the Fourier harmonics v n = cos(nφ) of the azimuth-angle distributions, are remarkably consistent with a low-viscosity QCD matter whose expansion and cooling are describable with dissipative relativistic fluid dynamics [2][3][4][5][6][7][8][9][10][11][12]. The essential inputs to the fluid dynamics are the initial energy density and flow of the matter created in the collision. However, the final state observables like multiplicities, p T spectra and v n , are also strongly affected through the fluid dynamical expansion by the viscosity and the equation of state (EoS). Thus the entire spacetime evolution, including partons in the colliding nuclei, the primary production and thermalization of QCD matter and the subsequent fluid dynamical evolution, becomes highly convoluted. Description of all these dynamics in a coherent way, leading to quantitative predictions and a meaningful determination of the QCD matter properties from the measurements, provides an ultimate challenge in the field. As discussed in this paper, the determination of e.g. the temperature dependence of the shear viscosity-to-entropy ratio η/s(T ) calls for a simultaneous theory analysis of all possible bulk (low-p T ) observables at the LHC and RHIC. Parton saturation is a viable mechanism to control the otherwise unsuppressed production of soft small-p T quanta in hadronic and nuclear collisions [13][14][15][16]. In essence saturation means that there exists a semihard scale controlling the particle production in the collision. In the perturbative QCD (pQCD) + saturation framework we consider here, the primary particle production in A+A collisions is computed in collinear factorization by approaching the saturation at semi-hard scales from the perturbatively controllable high-p T side [17,18]. Perturbative QCD provides an excellent description of hard processes in hadronic and nuclear collisions at interaction scales Q 1 GeV [19]. Moreover, this framework allows for a quantification of the particle production uncertainties, and their propagation through the fluid dynamical evolution in nuclear collisions [18]. 
In addition to the internal consistency of the pQCD-based approach, it should be noted that perturbative primary gluon production in heavy-ion collisions is complementary to the Color-Glass Condensate models [20] which build on soft gluon fields. If these different high-energy QCD approaches produce similarly successful heavy-ion phenomenology, the overall uncertainty in determining the QCD matter properties can be dramatically reduced. The present work has roots in the so-called EKRT saturation model [17], which successfully predicted the multiplicities and p T spectra in central A+A collisions at RHIC and LHC [21][22][23][24], and also the centrality dependence at RHIC [25] (cf. Fig. 23(a) in [26]). Here we use the next-to-leading-order (NLO)-improved pQCD + saturation framework of [18] to calculate the initial QGP energy density profiles and formation times, and combine these with viscous fluid dynamics. We analyse the centrality dependence of charged particle multiplicities, p T spectra and elliptic flow (v 2 ) at the LHC and RHIC in terms of the few physical key-parameters of the framework. We show that a good simultaneous description of all these observables can indeed be obtained without retuning the framework from one collision system (cmsenergy, nuclei, centrality) to another. This results in the robust predictive power of the approach, originating from the pQCD calculation of the QGP initial conditions. Most importantly, this predictive power enables us to study and restrict the ratio η/s(T ) in the different QCD-matter phases more consistently in a simultaneous multiobservable analysis of the LHC and RHIC data. Let us then discuss the details of our framework [18]. The rigorously calculable part is the minijet E T production in an A+A collision, in a rapidity interval ∆y and above a p T scale p 0 , where s = (x, y) is the transverse location, b the impact parameter, and T A (s) the standard nuclear thickness function with the Woods-Saxon nuclear density profile. The first E T -moment of the minijet E T distribution, σ E T p0,∆y,β , [18,27] is in NLO where dσ 2→n are the collinearly factorized minijet production cross sections and [DPS] n denote the phase-space differentials for the 2 → 2 and 2 → 3 cases [18,28]. We apply the CTEQ6M parton distribution functions (PDFs) [29] with the EPS09s impact-parameter dependent nuclear PDFs [30]. The measurement functionsS 2 andS 3 define the hard scattering in terms of the minijet transverse momenta p T,i and the cut-off scale p 0 , as well as the total minijet E T produced in ∆y: where E T,n = n i=1 Θ(y i ∈ ∆y)p T,i and Θ is the step function. These functions, analogous to the jet definitions [31], are constructed so that σ E T p0,∆y,β is a welldefined, infrared-and collinear-safe, quantity to compute. The hardness-parameter β defines the minimum E T in the interval ∆y. As discussed in [18], any β ∈ [0, 1] is acceptable for the rigorous NLO computation. Following the new angle in formulating the minijet saturation [18], the E T production is expected to cease when the 3 → 2 and higher-order partonic processes start to dominate over the conventional 2 → 2 processes. For a central collision of identical nuclei of radii R A this leads to a transversally averaged saturation criterion with an unknown, α s -independent, proportionality constant K sat ∼ 1. Generalizing to non-zero impact parameters and localizing in the transverse coordinate plane gives where the l.h.s. is the p 0 -dependent NLO pQCD calculation defined in Eq. (1). 
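A hedged numerical sketch of how the saturation criterion can be solved locally: the left-hand side, the NLO minijet E_T production per unit transverse area at cut-off scale p_0 = p_sat, is represented here by a placeholder power law in T_A T_A (the real input is the NLO integral of Eq. (1)), while the right-hand side is K_sat p_sat^3/pi for Delta y = 1, as quoted just below. The normalisation and exponents of the placeholder, and the value of K_sat, are purely illustrative.

```python
import numpy as np
from scipy.optimize import brentq

K_SAT = 0.6       # example value; K_sat ~ 1 is fixed by the central LHC multiplicity
P_MIN_SAT = 1.0   # GeV; minimum scale down to which the pQCD result is trusted

def det_d2s(p0, ta_ta):
    """Placeholder for the NLO minijet E_T production per unit transverse area
    at cut-off scale p0 (the l.h.s. of the saturation criterion). The real input
    is the NLO integral of Eq. (1); a falling power law in p0 with a roughly
    linear dependence on T_A T_A only mimics its qualitative shape."""
    return 40.0 * ta_ta * p0 ** (-2.7)

def solve_p_sat(ta_ta):
    """Solve dE_T/d^2s(p0 = p_sat) = K_sat * p_sat^3 / pi for p_sat (Delta y = 1)."""
    criterion = lambda p: det_d2s(p, ta_ta) - K_SAT * p ** 3 / np.pi
    return brentq(criterion, 0.2, 20.0)   # single sign change in this bracket

if __name__ == "__main__":
    for ta_ta in [1.0, 5.0, 20.0, 60.0]:   # fm^-4, peripheral to central
        p_sat = solve_p_sat(ta_ta)
        region = "pQCD" if p_sat >= P_MIN_SAT else "edge (interpolated)"
        print(f"T_A T_A = {ta_ta:5.1f} fm^-4  ->  p_sat = {p_sat:4.2f} GeV  ({region})")
```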
For given K sat and β, we solve the above equation for , and obtain the total dE T /d 2 s in a mid-rapidity unit ∆y = 1 at saturation from the r.h.s. as K sat p 3 sat /π. Once the solution p sat is known, the local energy density is obtained [21,23] as where the local formation time is τ s = 1/p sat . Fig. 1 shows examples of p sat ( √ s N N , A, s, b; K sat , β) as a function of T A T A , calculated for fixed values of K sat , β and with b = 0 and three other fixed impact parameters corresponding to the centrality classes 0-5%, 20-30% and 40-50% in √ s N N = 2.76 TeV Pb+Pb collisions at the LHC and 200 GeV Au+Au at RHIC. To a very good approximation, the b and s dependence of p sat comes only through T A T A . This is due to the weak s dependence of the nPDFs near the centres of the nuclei [30]. The approximate power-law scaling behaviour seen at large T A T A can then be understood as expained in [32]. We identify two main uncertainties in mapping the pQCD + saturation calculation to an initial state for fluid dynamics: (i) The energy density given by Eq. (5) is at a time τ s = 1/p sat , i.e. different at each transverse point s, while for fluid dynamics we need the initial condition at a fixed time τ 0 . (ii) We cannot trust the pQCD calculation down to p sat → 0, but we need to set a minimum scale p min sat ≫ Λ QCD . Wherever p sat ≥ p min sat we can use the pQCD calculation, but the other regions, i.e. low density edges, need to be treated separately. We fix a minimum saturation scale as p min sat = 1 GeV. Correspondingly, the maximum formation time in our framework is τ 0 = 1/p min sat . Then, we evaluate the energy densities from τ s (s) to τ 0 using either the Bjorken free streaming ε(τ 0 ) = ε(τ s )(τ s /τ 0 ) (FS) or the Bjorken hydrodynamic scaling solution ε(τ 0 ) = ε(τ s )(τ s /τ 0 ) (4/3) (BJ). We take these two limits to represent the uncertainty in the early pre-thermalization evolution: In the free streaming case the transverse energy is preserved, while the other limit corresponds to the case where a maximum amount of the transverse energy is reduced by the longitudinal pressure. To obtain the energy density ε(s, τ 0 ) in the transverse region where p sat < p min sat , we use an interpolation ε = C(T A T A ) n , where the power n = 1 2 [(k + 1) + (k − 1) tanh({σ N N T A T A − g}/δ)] with the total inelastic nucleon-nucleon cross-section σ N N , and g = δ = 0.5 fm −2 . This smoothly connects the FS/BJevolved pQCD energy density ε(p min sat ) = C(T A T A ) k to the binary profile ε ∝ T A T A at the dilute edge. For the fluid-dynamical evolution, we use the state-ofthe art 2+1 D setup previously employed in Ref. [11,12,33], assuming longitudinal boost invariance, a zero netbaryon density and thermalization at τ 0 . The equations of motion are given by the conservation laws for energy and momentum, ∂ µ T µν = 0. The evolution equation of the shear-stress tensor π µν = T µν is given by transient relativistic fluid dynamics [34][35][36], where the co-moving time derivative u µ ∂ µ is denoted by the dot, η is the shear viscosity coefficient, σ µν = ∂ µ u ν is the shear tensor, θ = ∂ µ u µ is the expansion rate, and the angular brackets denote the symmetrized and traceless projection, orthogonal to the fluid four-velocity u µ . The coefficients of the non-linear terms are taken to be c 1 = 4τ π /3, c 2 = 10τ π /7 and c 3 = 9/(70p), where p is the thermodynamic pressure and τ π = 5η/(ε + p). For details of the numerical algorithm, see Refs. [11,37]. 
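Collecting the steps above into a short sketch: the local formation time tau_s = 1/p_sat, the free-streaming (FS) and Bjorken (BJ) scaling laws used to evolve the energy density to the common initialisation time tau_0 = 1/p_sat^min, and the tanh-interpolated power n(T_A T_A) used below p_sat^min. The normalisation of eps(tau_s) assumes the standard relation eps = (dE_T/d^2s)/(tau_s Delta y) with Delta y = 1, in line with the construction cited to [21,23]; K_sat, sigma_NN and k are example values, not fitted ones.

```python
import numpy as np

HBARC = 0.1973  # GeV*fm; converts 1/p_sat (GeV^-1) into a formation time in fm/c

def initial_energy_density(p_sat, k_sat=0.6, p_min_sat=1.0, mode="FS"):
    """Energy density (GeV/fm^3) at tau_0 = 1/p_min_sat for one transverse point,
    given the local saturation scale p_sat (GeV)."""
    tau_s = HBARC / p_sat                  # local formation time, tau_s = 1/p_sat
    tau_0 = HBARC / p_min_sat              # common initialisation time of the fluid
    det_d2s = k_sat * p_sat**3 / (np.pi * HBARC**2)   # dE_T/d^2s at saturation, GeV/fm^2
    eps_tau_s = det_d2s / tau_s            # assumed eps = (dE_T/d^2s)/(tau_s * Delta y)
    power = 1.0 if mode == "FS" else 4.0 / 3.0   # free streaming vs. Bjorken scaling
    return eps_tau_s * (tau_s / tau_0) ** power

def edge_power(ta_ta, k, sigma_nn=6.4, g=0.5, delta=0.5):
    """Interpolating power n(T_A T_A) used where p_sat < p_sat^min:
    n = 1/2 [(k+1) + (k-1) tanh((sigma_NN * T_A T_A - g) / delta)],
    connecting eps ~ (T_A T_A)^k at the matching point to eps ~ T_A T_A at the edge.
    sigma_nn is the inelastic NN cross section in fm^2 (example value)."""
    return 0.5 * ((k + 1.0) + (k - 1.0) * np.tanh((sigma_nn * ta_ta - g) / delta))

if __name__ == "__main__":
    for p_sat in [1.0, 2.0, 4.0]:   # GeV
        print(f"p_sat = {p_sat:.1f} GeV:  eps_FS = {initial_energy_density(p_sat, mode='FS'):8.1f},"
              f"  eps_BJ = {initial_energy_density(p_sat, mode='BJ'):8.1f}  GeV/fm^3")
    print("edge power n at T_A T_A = 0.05 fm^-4:", round(edge_power(0.05, k=2.0), 2))
```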
The hadron spectra are calculated with the Cooper-Frye freeze-out procedure [38] by using Israel's and Stewart's 14-moment ansatz for the dissipative correction to the local equilibrium distribution function, ing different hadron species and p µ i the 4-momentum of the corresponding hadron. The freeze-out temperature is here always T dec = 100 MeV. After calculating the thermal spectra, we include the contribution from all 2-and 3-particle decays of unstable resonances in the EoS. We use the lattice QCD and hadron resonance gas (HRG) based EoS s95p-PCE-v1 [39] with a chemical freeze-out temperature T chem = 175 MeV. Although the rather high T chem leads to an overabundance of protons, it however reproduces the low-p T region of the p T -spectra much better than e.g. T chem = 150 MeV. For a rough but realistic (non-constant [40]) shear viscosity description, we assume the ratio η/s to decrease linearly as a function of temperature in the hadronic phase, be in a minimum at the matching-temperature 180 MeV of the HRG/QGP phases in the used EoS, and either to increase or stay constant vs. T in the QGP phase [11,12]. Fig 2 shows the η/s(T ) which in our framework best reproduce the v 2 coefficients simultaneously at RHIC and LHC. At this point, we have a fixed framework with four correlated unknowns, {K sat , β, BJ/FS, η/s(T )}, to be de-termined using the LHC and RHIC data on the centrality dependence of the charged particle multiplicities, p T spectra and v 2 . We proceed by scanning the parameters K sat = O(1), β ∈ [0, 1] and η/s(T ). In particular, we vary the minimum value and slopes of η/s(T ), keeping its general shape as in Fig. 2. Both the BJ and FS prethermal evolutions are considered. In practice, for each fixed {β, BJ/FS, η/s(T )}, the remaining parameter K sat is always tuned such that the multiplicity in the 0 − 5 % most central collisions at the LHC is reproduced. In Fig. 3a we show the computed centrality dependence of the charged hadron multiplicity in Pb+Pb collisions at √ s N N = 2.76 TeV compared with the AL-ICE data [41]. As demonstrated here, several sets {K sat , β, BJ/FS, η/s(T )} give a good agreement with the measurement. However, the data clearly favours β ∼ 1 and slightly the FS scenario over the BJ. For comparison, we also show the results obtained with the usual (non-saturation) eBC and eWN Glauber model initial states [42]. In Fig. 3b we show the multiplicities for Au+Au collisions at √ s N N = 200 GeV, using the same parameter sets {K sat , β, BJ/FS, η/s(T )} as in panel 3a, and compare with the PHENIX [43] and STAR [26] data. We note that although the RHIC data would seem to favour a slightly smaller β and the BJ case, the overall simultaneous agreement at RHIC and LHC is rather good. As long as the centrality dependence of the multiplicity is described, all the scenarios studied here give a very good description of the charged hadron p T -spectra. More relevant parameters in this case are T chem and T dec which here are kept unchanged from RHIC to LHC. The obtained p T spectra are shown in Fig. 3c for the LHC and in Fig. 3d for RHIC. The data are from Refs. [44] and Ref. [45,46], correspondingly. In Figs. 3e and 3f we show the elliptic flow coefficients v 2 (p T ) at the LHC and RHIC, respectively. The data are from ALICE [47] and STAR [48]. The v 2 (p T ) coefficients depend strongly on the η/s parametrization, and e.g. an ideal fluid description (not shown) does not give a correct v 2 (p T ). 
By scanning the η/s(T) as explained above, while keeping K_sat of order 1, we observed that a good simultaneous agreement with the measurements is obtained with the cases shown in Fig. 2. We emphasize that at RHIC, where the flow gradients are larger, one should require the agreement in particular in the small-p_T region, where the dissipative corrections to the particle distributions do not grow unphysically large. Note especially that since η/s(T) is considered as a material property, it must not be changed between different collision systems. To conclude, we computed the energy density profiles and formation times of the produced QGP at the LHC and RHIC in a new NLO-improved pQCD + local saturation framework of considerable predictive power. The subsequent evolution of these initial conditions was described with dissipative fluid dynamics. Identifying the framework uncertainties, a good global agreement with the measured centrality dependence of the low-p_T bulk observables was obtained simultaneously at the LHC and RHIC. In particular, we were able to constrain the η/s(T) parametrization simultaneously by all these data. In the future, we will extend this analysis to include event-by-event fluctuations.
2013-10-11T12:30:43.000Z
2013-10-11T00:00:00.000
{ "year": 2014, "sha1": "b35ccdfc643a0cb4886c95b7613b3c93c2703989", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.physletb.2014.02.018", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "b35ccdfc643a0cb4886c95b7613b3c93c2703989", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
258293894
pes2o/s2orc
v3-fos-license
Identifying strategies for implementing a clinical guideline for cancer-related fatigue: a qualitative study Background Clinical practice guidelines assist health professionals’ (HPs) decisions. Costly to develop, many guidelines are not implemented in clinical settings. This paper describes an evaluation of contextual factors to inform clinical guideline implementation strategies for the common and distressing problem of cancer-related fatigue (CRF) at an Australian cancer hospital. Methods A qualitative inquiry involving interviews and focus groups with consumers and multidisciplinary HPs explored key Canadian CRF guideline recommendations. Four HP focus groups examined the feasibility of a specific recommendation, while a consumer focus group examined experiences and preferences for managing CRF. Audio recordings were analysed using a rapid method of content analysis designed to accelerate implementation research. Strategies for implementation were guided by the Consolidated Framework for Implementation Research. Results Five consumers and 31 multidisciplinary HPs participated in eight interviews and five focus groups. Key HP barriers to fatigue management were insufficient knowledge and time; and lack of accessible screening and management tools or referral pathways. Consumer barriers included priority for cancer control during short health consultations, limited stamina for extended or extra visits addressing fatigue, and HP attitudes towards fatigue. Enablers of optimal fatigue management were alignment with existing healthcare practices, increased HP knowledge of CRF guidelines and tools, and improved referral pathways. Consumers valued their HPs addressing fatigue as part of treatment, with a personal fatigue prevention or management plan including self-monitoring. Consumers preferred fatigue management outside clinic appointments and use of telehealth consultations. Conclusions Strategies that reduce barriers and leverage enablers to guideline use should be trialled. Approaches should include (1) accessible knowledge and practice resources for busy HPs, (2) time efficient processes for patients and their HPs and (3) alignment of processes with existing practice. Funding for cancer care must enable best practice supportive care. Supplementary Information The online version contains supplementary material available at 10.1186/s12913-023-09377-9. Introduction / Aim Clinical practice guidelines are designed to assist health care professionals (HPs) to provide the most appropriate care for their patients [1]. Using best available evidence, guideline development is slow and expensive [2]. Yet, many guidelines are not implemented [3], with potential benefits for patients lost. Reasons why effective interventions recommended in guidelines are not used include lack of leadership on treatment policy, lack of awareness of or trust in the guideline and complexity of guideline recommendations [4]. The feasibility of guideline use in real clinical settings has rarely been explored in any condition. Guidelines often include several recommendations that were developed by stakeholders (including content experts, health professionals, consumers) based on evidence, but their practicability is not commonly tested [5]. Strategic approaches are needed to overcome barriers and facilitate enablers to guideline use [6,7]. 
According to the Consolidated Framework for Implementation Research (CFIR), characteristics of the guideline (intervention), health practitioners, organisations (internal setting), external context (e.g. consumer needs and policy) and implementation strategies should all be considered [8]. Fatigue is recognised as a common, distressing symptom during and after cancer treatment [9,10]. Approximately 50% of patients during treatment, and 30% of survivors after treatment, experience prolonged and debilitating fatigue [11]. Persistent fatigue at moderate to severe levels is associated with disability, psychological disorders [12,13] and poor performance status [11,14]. International guidelines for cancer-related fatigue (CRF) recommend systematic screening for fatigue, assessment of contributing factors, and interventions including symptom management, exercise, fatigue education and sleep enhancement [15]. Many Australian cancer care professionals do not routinely use CRF guidelines [9,16], and the feasibility of their use is not known. A study of clinician and consumer perceptions about applying a Canadian CRF guideline [13] highlighted insufficient detail to perform guideline recommendations, and a lack of clinical tools including assessments and consumer education [9]. A Canadian study reported that lack of HP knowledge, resources and system barriers, together with inconsistent provider-patient communication of fatigue, accounted for their CRF practice gap [17]. These and other barriers need to be addressed to enable CRF guideline use and optimal care for cancer survivors [15]. Currently, the best approaches to implementing CRF guidelines in complex "real-world" cancer practices are unknown [15], with scant data related to patient outcomes. This paper describes the development of strategies for implementing a clinical guideline for CRF at an Australian specialist cancer hospital, using the lens of the CFIR [8]. The research question was 'What strategies could increase the feasibility of CRF guideline use?' Study aims were (1) to assess barriers and enablers to implementing a fatigue guideline at a Comprehensive Cancer Centre, and (2) to identify strategies to support guideline implementation at the Centre. Methods This qualitative study applied a content analysis approach to key informant interviews, focus groups and field notes to explore the context for implementing the Canadian Association for Psychosocial Oncology (CAPO) fatigue guideline [13]. A previous systematic guideline search and appraisal using the AGREE-II instrument identified the CAPO guideline as suitable for CRF assessment and management worldwide [18]. Our study was conducted during 2018-19 and procedures were approved by the Peter MacCallum Cancer Centre Human Research Ethics Committee (LNR/18/PMCC/205). A project steering committee including the authors, a consumer, a senior oncologist, the director of allied health, the physiotherapy department head and an implementation researcher provided advice and oversight to the study. Participants and recruitment Multidisciplinary HPs in relevant senior organisational roles with knowledge of current health care delivery, systems and processes at an Australian cancer centre were invited purposively by email to participate in key informant interviews. Focus group participants were invited via emails to discipline heads, presentations at team meetings and advertising in internal communications, i.e., convenience sampling.
Interested participants were sent an online poll to indicate their availability. Many HPs at the cancer centre were former colleagues of the lead researcher (EP), with knowledge of her CRF research and oncology practice background. Consumers with experience of CRF were invited to participate in a focus group or interview via information leaflets in outpatient waiting areas, and an e-newsletter to a consumer register. Specific culturally and linguistically diverse groups were also approached by a cancer centre consumer liaison officer. Inclusion criteria were as follows: (1) Health professionals: registered medical, nursing or allied HPs at Peter MacCallum Cancer Centre (Peter Mac) with skills and experience in delivery of cancer care; (2) Health administrators: professionals in administration, management or policy roles within Peter Mac; (3) Consumers: aged 18+, with any cancer diagnosis, treatment stage or comorbidities, who identified as having experienced CRF. Exclusion criterion: inability to complete study tasks, including consent, due to cognitive barriers. Interview and focus group participants signed informed consent before data was collected. Participants completed demographic details on a study registration form. This included phone or email, year of birth, sex, education, current occupation, cancer diagnosis, treatments, experience with CRF, professional discipline and practice experience, as relevant. The HPs could attend more than one session and were allocated to sessions according to their availability and topic relevance. See Additional Material 1 for optimal group membership. Additionally, individuals and teams from occupational therapy, clinical psychology, speech pathology, nutrition, day chemotherapy, radiotherapy nursing, haematology and palliative care were consulted informally about their current practice and ways to implement the fatigue guideline. Project team members recorded field notes from these informal meetings or observation. Existing documents related to internal systems and processes, or to screening, assessment and interventions for CRF, were included as field notes. In this qualitative study, our pragmatic target sample size was approximately 50 diverse participants including consumers, due to project funding and time constraints. With purposeful sampling and iterative processes, this number was considered sufficient to identify critical issues [19]. An ideal focus group size is six to eight participants [20]. To allow for dropouts, eight to 10 participants from relevant disciplines and services were recruited for each focus group. Formal individual interviews and informal discussions at team meetings provided additional perspectives and increased data richness [21]. Interview and focus group procedures Four HP focus groups explored existing practice, barriers and facilitators for different key CRF guideline recommendations: fatigue screening, patient education, fatigue assessment and physical activity. A consumer focus group discussed their experience of, and preferences for, CRF management. Interviews and focus groups were led by one female post-doctoral researcher (EP), previously employed as a senior occupational therapist at Peter Mac for over 15 years, with extensive interview experience. This prior relationship facilitated participant trust and engagement with the project. The lead researcher was aware of potential biases due to familiarity with the work setting. She checked assumptions and interpretation routinely with the research team and other stakeholders.
A female allied health-qualified research assistant co-facilitated the focus groups and recorded field notes. One research assistant was a doctoral candidate and experienced physiotherapist at Peter Mac and the second, a recently graduated nutritionist. Interviews and focus groups were up to 60 and 75 min in duration, respectively, and were audio-recorded. A core semi-structured interview schedule was developed, guided by the CFIR domains: characteristics of the intervention (guideline), the inner (organisation) and outer (consumers, society, health policy) settings, individual health professionals' characteristics and implementation strategies [8]. The schedule was adjusted to match each interviewee's practice area and focus group topics. Background information including the CAPO fatigue guideline algorithm and recommendations was provided and used during the sessions. See Additional file 1 for the interview guides, including development information. Data analysis Descriptive statistics were used to summarise participant demographics. To accelerate the progress of the project, a rapid content analysis approach was used for interview and focus group data. Rapid analysis is a method of qualitative investigation that is particularly suited to designing implementation strategies when there is a time constraint [22]. Three concepts characterise rapid analysis: (1) a system perspective, (2) triangulation of data, and (3) iterative data collection and analysis [22]. The rapid analyses used methods described by McNall and Foster-Fishman [23]. To reduce potential researcher bias and increase the study's validity, two researchers made detailed notes from recordings of each focus group and interview. Concurrently, the notes were coded as direct quotes, paraphrasing or researcher hypothesis. Three researchers cross-checked notes and resolved discrepancies through discussion [23]. Findings were discussed in depth within the team and with some individual stakeholders. To balance the research team's interpretation, other steering committee members provided practical and theoretical appraisal of results. Summary statements for each question or topic were created, i.e., deductive content analysis. These were triangulated with field notes to increase validity and identify barriers, facilitators and potential strategies for CRF guideline implementation. Data saturation was tested via continual feedback and discussion with key participants and teams throughout the project to identify critical factors that would support or prevent guideline use. Reporting adheres to the COREQ checklist [24]. Results Five consumers and 31 multidisciplinary HPs participated in eight individual key informant interviews and five focus groups. One HP and one consumer participated via telephone; all others participated in person at the cancer centre. Two students observed one HP interview. Seven HPs and one consumer expressed interest in attending a focus group but were unavailable for a scheduled session. The HP participants were predominantly female (84%), had a median age of 43 years (range 24-68) and a median of 10.5 years of cancer services experience (range 1.5-32). Most were nurses (55%). Two male and three female consumers participated, with a median age of 59 years (range 43-65) and a median of 5 years (range 2-9) since diagnosis. Composition of the interviews and focus groups is detailed in Additional file 1. Nurses included chemotherapy, specialist, palliative care and research nurses.
Doctors included a pain specialist, one medical and two radiation oncologists. Four HPs participated in more than one session. Barriers and enablers to implementing the CAPO fatigue guideline in ambulatory care The key barriers to address, and the enablers to leverage, in using the CAPO fatigue guideline are depicted in Fig. 1. A critical barrier to consistent and comprehensive fatigue management for all HPs was limited time. Most HPs other than occupational therapists felt they lacked adequate fatigue knowledge and resources to support practice. Enablers were alignment of fatigue management with current care processes such as symptom screening, accessible education and simplified practice tools. A further practice barrier was fatigue management not being considered part of the care pathway: 'It's not usual care, it's Unusual care, while it should be usual care' (Nurse, interview). For consumers, reduced stamina and cognitive abilities, coupled with a perception that HPs did not recognise or prioritise their fatigue, were barriers to self-advocacy. Addressing fatigue prevention and management by integrating it with treatment, tailoring it to individuals' needs, involving caregivers and using telehealth were consumer-enabling approaches. Table 2 summarises the barriers and enablers to implementing the CAPO fatigue guideline reported by study participants, aligned to four CFIR domains [8]. The fifth CFIR domain, Implementation strategies, includes potential actions to reduce barriers and harness enablers to implementation. Staff perceptions of fatigue management in practice Application of the CAPO recommendations for cancer fatigue screening, assessment and management was inconsistent across Peter Mac, with a lack of clarity around whose role it was. Occupational therapists provided fatigue management in their routine practice, and received most referrals from palliative care staff. Fatigue was recognised by HPs as a problem, but during medical encounters there was barely sufficient time to address key disease concerns, while nurses were screening and managing multiple physical and psychosocial issues. Despite recognising fatigue as a key issue, doctors and nurses were often hesitant to screen or ask about fatigue, due to lack of time to follow up or because they were uncertain what to do next, e.g., where to refer. This was explained in the fatigue education focus group: 'The thing is, if you ask the question and then you get the answer - you've gotta do something about it. So sometimes … I won't ask the question because I don't know if I can do anything about it' (Nurse, focus group). Fatigue management education available for HPs comprised a little-known 3-page fatigue management guide, 'Follow up of survivors with cancer-related fatigue', on the Peter Mac external website, and bespoke online training for occupational therapists. Policy and procedure documents were limited to a precinct document that included management of fatigue in terminal care. Consumers and HPs stated that existing patient information was often too long or detailed for people with significant fatigue. Details of how current fatigue management practice aligned with the CAPO fatigue recommendations are shown in Additional file 2. Consumer experience of fatigue management Consumers with fatigue had low stamina for travel, waiting and lengthy consultations. When fatigue was a problem, extra questions or tests could be overwhelming due to cognitive changes and exhaustion.
'When fatigue was high it caused an inability to multitask and follow long discussions - caused me a lot of distress. What is wrong with me? … A lot of my cognitive skills just shut down' (consumer #5, female age 43). Screening and acknowledgement of fatigue was welcomed by consumers, but it was unusual: 'Fatigue isn't usually mentioned, it's like it's a given and that's that.' (consumer #1, female age 59); 'this is not imaginary, this is debilitating, and it needs attention.' (consumer #3, male age 65). However, screening could also be distressing, with some people under-rating their fatigue. One participant reported under-rating her fatigue due to feeling inadequate in self-management: 'Whenever I've got a zero to ten, I tend to under rate. I don't ever want to be a 10. Cause that means to me that I'm sort of not coping and I …' Discussion Implementing guidelines for fatigue screening, assessment and management needs careful planning with consideration of consumer, practitioner and system/organisational perspectives [15]. Our investigation identified a range of barriers and enablers to implementing the CAPO fatigue guideline at an Australian comprehensive cancer centre. Predominant HP barriers to fatigue management related to lack of knowledge, time and practice resources, such as standard screening methods. These caused HPs to avoid asking about fatigue. Surprisingly, 87% of HPs rated their CRF expertise as 'limited' or 'moderate', despite having a median of 10.5 years' oncology experience. This suggests that access to fatigue training or the opportunity to use the knowledge is lacking. When consumers did experience fatigue screening, lack of follow-up discouraged them from raising the topic again or led them to downgrade their fatigue severity rating. These findings are not unique to Australia, with knowledge and system barriers coupled with poor fatigue communication previously reported [9,16,17,25-28]. A recent Canadian study found that the major themes which accounted for the CRF knowledge-practice gap were "a perfect storm" and "a breakdown in communication" [17], both themes strikingly congruent with our findings. "A perfect storm" characterised inadequate HP knowledge of CRF guidelines in a setting of system barriers and limited funding; while "a breakdown in communication" involved HPs avoiding or normalising fatigue, leaving patients feeling helpless and dismissed [17]. The interrelationship of the barriers identified highlights the complexity of implementation. For the HP, it begins with awareness and knowledge of the guideline [29]. Clearly HP education is required, but acquisition of knowledge and expertise takes time, a limited commodity in today's health care context. Adding new tasks for managing fatigue on top of education in an already time-poor context increases pressure on HPs. Further, guidelines are often sparse in detail and notoriously lacking in practice resources to assist delivery to the particular patient [30], leaving interpretation and decisions to the HP, who may lack adequate knowledge [31]. The current and earlier studies [9,18] reveal that the CAPO fatigue algorithm has limited clinical utility in its current format and that practice resources such as screening tools and assessment and management guides are needed. Lack of both time and knowledge about CRF then results in HPs avoiding the issue and leaving patients to manage on their own [17]. Multiple strategies are needed to overcome these barriers. Along with these barriers, we identified opportunities to implement CRF guidelines.
It is noteworthy that both HPs and consumers in our study described time constraints in the clinic as barriers to fatigue management. Therefore, time-efficient strategies for fatigue screening and assessment should be prioritised. Documentation prompts such as an electronic medical record field for fatigue could contain a link to screening, assessment and management resources. Occupational therapists and other allied HPs with specialist knowledge and holistic assessment practices could lead comprehensive fatigue management. Because such cancer specialist HPs are limited in number, accessible education enabling local HPs to provide cancer fatigue management is essential for equitable cancer care. Translation of CRF guidelines into practice has typically been slow globally [15]. This may be in part because prevention and management of fatigue is not commonly identified as a priority at organisational and policy levels, remaining forever in competition with other initiatives for cancer treatment [15]. Government supportive care policies without adequate funding or resources to provide care cannot be implemented equitably. We endorse Berger and colleagues' proposition that innovative care models such as telehealth and harnessing e-health records are needed to adequately assess and manage cancer fatigue and other symptoms [15]. It is time to shift research focus to effectiveness-implementation hybrid trials [32] exploring novel ways to implement cancer fatigue guidelines that are sustainable for both provider and consumer. Approaches already being evaluated include stepped-care approaches [33-35], telehealth or online home-based programs [36-38] and comprehensive assessment clinics [39]. This study had several strengths and limitations. Including a broad range of end-users in preliminary scoping work enabled a 360-degree approach to barriers and enablers to guideline implementation, and 'ownership' in the process of practice change [40]. However, some groups such as senior managers and doctors were under-represented and may have offered different views. Additional consumer input may have strengthened the findings; however, our consumer findings were congruent with those of previous studies [9,17]. Stakeholders' insights into barriers and enablers in four CFIR domains point to strategies to enhance the success of guideline implementation efforts [41]. Rapid analysis of qualitative data is an established practice in crisis situations and here provided information in real time [23] to accelerate the next phase of guideline implementation. Traditional or inductive qualitative analysis may have produced different results. Only some of our findings, such as those related to documentation, are site-specific. Lack of time is a global issue that needs to be addressed at national policy level. However, the lack of practice resources and knowledge previously reported in local and international studies [9,16,17,42] remains a barrier to best-practice fatigue management, and such resources could be developed to be broadly applicable. The current study used the CFIR to enrich knowledge by identifying strategies to overcome implementation barriers related to the intervention (CRF guideline), individual, inner and outer contexts [8]. For successful implementation, a holistic approach is needed to tackle barriers in all CFIR domains within a local political context. Conclusions Our results underscore a critical need for practice tools and health professional education to support CRF guideline implementation.
Cancer fatigue training and/or management should be accessible and time-efficient for both HP and consumer, and guideline processes should be integrated with existing processes. Strategies for guideline implementation focusing on sustainability should be trialled. To achieve adequate and equitable management of CRF, the scope of funding for cancer care must reflect best-practice supportive care guidelines.
2023-04-24T13:47:11.475Z
2023-04-24T00:00:00.000
{ "year": 2023, "sha1": "f8ef3cfd457c74f736b08ed275e064ac3c50ad32", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "f8ef3cfd457c74f736b08ed275e064ac3c50ad32", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
461193
pes2o/s2orc
v3-fos-license
Vision-Based Navigation III: Pose and Motion from Omnidirectional Optical Flow and a Digital Terrain Map An algorithm for pose and motion estimation using corresponding features in omnidirectional images and a digital terrain map is proposed. In a previous paper, such an algorithm for a regular camera was considered. Using a Digital Terrain (or Digital Elevation) Map (DTM/DEM) as a global reference enables recovering the absolute position and orientation of the camera. In order to do this, the DTM is used to formulate a constraint between corresponding features in two consecutive frames. In this paper, these constraints are extended to handle non-central projection, as is the case with many omnidirectional systems. The utilization of omnidirectional data is shown to improve the robustness and accuracy of the navigation algorithm. The feasibility of this algorithm is established through lab experimentation with two kinds of omnidirectional acquisition systems: the first uses polydioptric cameras, while the second is a catadioptric camera. I. INTRODUCTION Vision-based navigation algorithms have been a major research issue during the past decades. Two common approaches to the navigation problem are landmarks and ego-motion integration. In the landmarks approach, several features are located on the image plane and matched to their known 3D locations. Using the 2D and 3D data, the camera's pose can be derived. A few examples of such algorithms are [1], [2]. Once the landmarks are found, the pose derivation is simple and can achieve quite accurate estimates. The main difficulty is the detection of the features and their correct matching to the landmarks set. In the ego-motion integration approach, the motion of the camera with respect to itself is estimated. The ego-motion can be derived from the optical-flow field, or from instruments such as accelerometers and gyroscopes. Once the ego-motion has been obtained, one can integrate this motion to derive the camera's path. One of the factors that make this approach attractive is that no specific features need to be detected, unlike in the previous approach. Several ego-motion estimation algorithms can be found in [3], [4], [5], [6]. The weakness of ego-motion integration comes from the fact that small errors are accumulated during the integration process. Hence, the estimated camera path drifts and the pose estimation accuracy decreases over time. If such an approach is used, it is desirable to reduce the drift by activating, once in a while, an additional algorithm that estimates the pose directly. In [7] such a navigation system is suggested. In that work, as in this work, the drift is corrected using a Digital Terrain Map (DTM). The DTM is a discrete representation of the observed ground's topography. It contains the terrain's altitude above sea level for each geographical location. In [7] a segment of the ground was reconstructed using a 'structure-from-motion' (SFM) algorithm and was matched to the DTM in order to derive the camera's pose. Using an SFM algorithm, which does not make any use of the information obtained from the DTM but bases its estimate on the flow field alone, places their technique under the same critique that applies to SFM algorithms [8]. The algorithm presented in the previous work [9] does not require an intermediate explicit reconstruction of the 3D world.
By combining the DTM information directly with the image information, it is claimed that the algorithm is well-conditioned and generates accurate estimates for reasonable scenarios with reasonable error sources. Recently, an increasing interest in omnidirectional vision for applications in robotics could be noted. Technically, omnidirectional vision, sometimes also called panoramic vision, can be achieved in various ways. Examples include cameras with extreme wide-angle lenses ("fish-eye"), cameras with hyperbolic or parabolic mirrors mounted in front of a standard lens (catadioptric imaging), sets of cameras mounted in a ring-like or sphere-like configuration (polydioptric imaging), or an ordinary camera that rotates around an axis and takes a sequence of images that covers a field of view of 360 degrees [10], [11], [12], [13], [14], [15], [16], [17], [18], [19]. Omnidirectional vision provides a very large field of view, which has some useful properties. For instance, it enables the tracking of objects which are placed in different directions in the surrounding scene. It is well established that such a variety of features facilitates obtaining a robust and accurate estimate of the camera pose. On the other hand, vision algorithms have to account for the specific properties of the particular omnidirectional imaging sensor setup in use. This may pose theoretical and methodological challenges, as is the case for catadioptric vision. Here, the extreme geometrical distortions of the images caused by the parabolic or hyperbolic camera mirror require a suitable adaptation of image interpretation methods. Fig. 1. When using an omnidirectional vision system, a wide area of the terrain is visible (see the red area) even when the camera approaches a mountainside; when using a regular camera in a similar scenario, only a small patch that is almost planar is observed (see the blue area). The projection induced by an omnidirectional camera is the transformation from the 3D space to the image(s) plane. The least restrictive assumption that can be made about any camera model is that the inverse image of a point is a line in space. For many omnidirectional cameras, all such lines do not necessarily intersect in a single point. Their envelope is called a dia-caustic and represents a locus of viewpoints. If all the lines intersect in a single point, then the system has a single effective viewpoint and it is a central projection. In [20] a theorem is presented stating that a catadioptric camera has a single effective viewpoint if and only if the mirror's cross-section is a conic section. In any other case, including multiple-camera configurations, rotating camera systems and other shapes of mirrors, there is no single center of projection. The data acquired by such omnidirectional systems cannot be processed by vision algorithms that were developed under the single effective viewpoint assumption. In this paper the navigation algorithm that was presented in [9] is extended to handle omnidirectional data. The most general case of non-central projection ("multi-optical center") is analyzed. The single center of projection case that was previously analyzed becomes a particular case of this general formulation when all optical centers are located in a single point. As was shown in [9], one of the most important factors that influence the robustness and the accuracy of the navigation algorithm is the complexity of the observed terrain.
The extreme case, where only a planar segment of the terrain is visible, results in an ill-conditioned system which may lead to the failure of the algorithm. Whenever the navigating platform comes close to a mountainside in the terrain, such an ill-conditioned scenario might arise if a regular camera (not an omnidirectional one) is used. However, when using an omnidirectional vision system, the rest of the terrain will still be visible even if the platform approaches one of the mountainsides (see Fig. 1). Therefore, more robust and accurate results can be achieved when using omnidirectional vision. The paper continues as follows: Section II formally defines the navigation problem. Section III derives the constraint for any corresponding features coming from two consecutive images along the trajectory. Experimental results are presented in Section IV, and conclusions are drawn in Section V. II. PROBLEM DEFINITION AND NOTATIONS The problem can be briefly described as follows. At any given time instance t, a coordinate system C(t) is fixed to an omnidirectional camera. At that time instance the camera is located at some geographical location p(t) - a 3D vector - and has a given orientation R(t) - an orthonormal rotation matrix - with respect to a global coordinate system W. p(t) and R(t) define the transformation from the camera's frame C(t) to the global frame W. Considering two sequential time instances t_1 and t_2, the transformation from C(t_1) to C(t_2) is given by the translation vector Δp(t_1, t_2) and the rotation matrix ΔR(t_1, t_2). Rough estimates of the camera's pose at t_1 and of the ego-motion between the two time instances - p_E(t_1), R_E(t_1), Δp_E(t_1, t_2) and ΔR_E(t_1, t_2) - are assumed to be known (the subscript letter "E" denotes that this is an estimated quantity). Such estimates can be obtained from a dead-reckoning navigation system. Also supplied is the optical-flow field. No special assumption is made on the omnidirectional acquisition system. It is assumed, however, that the system was fully calibrated. As a result, for each visible feature it is possible to compute its line of sight with respect to the camera system C, which can be defined by a source point C S_i and a unit vector C q_i, oriented from the source point to the observed feature. Using the above notations, the objective of the proposed algorithm is to estimate the true camera pose and ego-motion - p(t_1), R(t_1), Δp(t_1, t_2) and ΔR(t_1, t_2) - using n corresponding features from the optical-flow field, the DTM and the initial guess. III. THE NAVIGATION ALGORITHM The following section describes a navigation algorithm which estimates the above-mentioned parameters. The pose and ego-motion of the camera are derived using a DTM and the optical-flow field of two consecutive frames. Unlike the landmarks approach, no specific features need to be detected and matched. Only the correspondence between the two consecutive images needs to be found in order to derive the optical-flow field. As was mentioned in the previous section, a rough estimate of the required parameters is supplied as an input. Nevertheless, since the algorithm only uses this input as an initial guess and re-calculates the pose and ego-motion directly, no integration of previous errors will take place and accuracy will be preserved. The new approach is founded on the following observation. Since the DTM supplies information about the structure of the observed terrain, the depth of observed features is dictated by the camera's pose.
Hence, given the pose and ego-motion of the camera, the optical-flow field can be uniquely determined. The objective of the algorithm will be finding the pose and ego-motion which lead to an optical-flow field as close as possible to the given flow field. A single vector from the optical-flow field will be used to define a constraint for the camera's pose and ego-motion. Let W G ∈ R^3 be the location of a ground feature point in the 3D world. At two different time instances t_1 and t_2, this feature point is detected in the omnidirectional images and its lines of sight, defined by the source points C S(t_i) and the unit vectors C q(t_i) (i = 1, 2), are extracted. Using an initial guess of the pose of the camera at t_1, the line passing through C S(t_1) in the direction of C q(t_1) can be intersected with the DTM. Any ray-tracing-style algorithm can be used for this purpose. The location of this intersection is denoted as W G_E. The subscript letter "E" highlights the fact that this ground point is the estimated location of the feature point, which in general will be different from the true ground-feature location W G. The difference between the true and estimated locations is due to two main sources: the error in the initial guess for the pose and the errors in the determination of W G_E caused by DTM discretization and intrinsic errors. For a reasonable initial guess and DTM-related errors, the two points W G_E and W G will be close enough so as to allow the linearization of the DTM around W G_E. Denoting by N the normal of the plane tangent to the DTM at the point W G_E, one can write N^T (W G − W G_E) = 0 (1). The true ground feature W G can be described using the true pose parameters as W G = W S(t_1) + λ R(t_1) C q(t_1) (2). Here, λ denotes the distance between W S(t_1) and the feature point W G. In the aforementioned equation we use the feature's transformed source point W S(t_1) = p(t_1) + R(t_1) C S(t_1) (3). Replacing (2) in (1) we get expression (4), from which the distance of the true feature can be computed using the estimated feature location: λ = N^T (W G_E − W S(t_1)) / (N^T R(t_1) C q(t_1)) (5). In order to simplify notations, R(t_i) will be replaced by R_i and likewise for p(t_i), S(t_i) and q(t_i) (i = 1, 2). ΔR(t_1, t_2) and Δp(t_1, t_2) will be replaced by R_12 and p_12, respectively. The superscript describing the coordinate frame in which a vector is given will also be omitted, except for the cases where special attention needs to be drawn to the frames. Normally, p_12, the S_i's and the q_i's are in the camera's frame, while the rest of the vectors are given in the world frame. Using the simplified notations, (5) can be assigned into (2) and after reorganization we get expression (6). In order to obtain simpler expressions, define the projection operator P(u, n) = I − u n^T / (n^T u) (7). This operator projects a vector onto the subspace normal to n, along the direction of u. As an illustration, it is easy to verify that n^T · P(u, n)v ≡ 0 and P(u, n)u ≡ 0. By adding and subtracting G_E in (6), and after reordering, we obtain expression (8). Using the projection operator, (8) becomes expression (9). The above expression has a clear geometric interpretation (Fig. 2 gives a geometrical description of expression (9) using the projection operator (7)): the vector from G_E to W S_1 is being projected onto the tangent plane, and the projection is along the direction R_1 q_1. Our next step will be transferring G from the global coordinate frame W into the first camera's frame C_1 and then to the second camera's frame C_2. Since p_1 and R_1 describe the transformation from C_1 into W, we will use the inverse transformation C1 G = R_1^T (W G − p_1) (10). Assigning (9) into (10) gives expression (11), in which L represents the shorthand defined in (12). q_2 is a unit vector pointing to the true ground feature G. Thus, the vectors q_2 and (C2 G − C2 S_2) should coincide.
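Two ingredients of the derivation above lend themselves to a compact numerical illustration: the intersection of a line of sight with the DTM (here a simple ray-marching scheme over a height map; the paper only states that any ray-tracing-style algorithm may be used) and the projection operator of expression (7). The sketch below (Python/NumPy) is written for this text and is not the authors' implementation; the step size and the grid convention of the height map are assumptions. The derivation of the full constraint continues below.

```python
import numpy as np

def P(u, n):
    """Oblique projector onto the plane with normal n, along direction u (expression (7))."""
    u, n = np.asarray(u, float), np.asarray(n, float)
    return np.eye(3) - np.outer(u, n) / (n @ u)

def intersect_dtm(src, direction, dtm, cell=1.0, step=0.25, max_range=500.0):
    """March along the ray src + t*direction until it dips below the DTM surface.

    dtm[i, j] holds the terrain height at x = i*cell, y = j*cell (assumed grid convention).
    Returns the estimated ground point G_E, or None if no intersection is found.
    """
    d = np.asarray(direction, float) / np.linalg.norm(direction)
    for t in np.arange(0.0, max_range, step):
        p = np.asarray(src, float) + t * d
        i, j = int(round(p[0] / cell)), int(round(p[1] / cell))
        if 0 <= i < dtm.shape[0] and 0 <= j < dtm.shape[1] and p[2] <= dtm[i, j]:
            return p
    return None

# Example: a flat DTM at height 0 and a downward-looking ray
dtm = np.zeros((100, 100))
print(intersect_dtm([10.0, 10.0, 5.0], [1.0, 0.0, -1.0], dtm))

# Sanity checks of the projector properties quoted in the text
u, n, v = np.array([1.0, 2.0, 3.0]), np.array([0.0, 0.0, 1.0]), np.array([4.0, 5.0, 6.0])
print(np.isclose(n @ (P(u, n) @ v), 0.0))   # n^T P(u,n) v == 0
print(np.allclose(P(u, n) @ u, 0.0))        # P(u,n) u == 0
```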
The observation that the vectors q_2 and (C2 G − C2 S_2) should coincide can be expressed mathematically by projecting (C2 G − C2 S_2) on the ray continuation of q_2, which yields expression (13). In expression (13), q_2^T · (C2 G − C2 S_2) is the magnitude of the projection of (C2 G − C2 S_2) on q_2. By reorganizing (13) and using the projection operator, we obtain expression (14), with the shorthand defined in (15); in (14), (C2 G − C2 S_2) is being projected onto the orthogonal complement of q_2. Since (C2 G − C2 S_2) and q_2 should coincide, this projection should yield the zero vector. Plugging (11) into (14) yields our final constraint. This constraint involves the position, orientation and ego-motion defining the two frames of the camera. Although it involves 3D vectors, it is clear that its rank cannot exceed two due to the usage of P, which projects R^3 onto a two-dimensional subspace. Such a constraint can be established for each vector in the optical-flow field, until a non-singular system is obtained. Since twelve parameters need to be estimated (six for the pose and six for the ego-motion), at least six optical-flow vectors are required for the system solution. Usually, more vectors will be used in order to define an over-determined system, which will lead to a more robust solution. The reader's attention is drawn to the fact that a non-linear constraint was obtained. Thus, an iterative scheme will be used in order to solve this system. For example, Newton iterations which start from the rough estimate of the pose and motion parameters and iteratively converge to the least-squares solution can be performed. As was suggested in [21], an M-estimator can be integrated into this scheme to increase its robustness in the presence of outliers. IV. EXPERIMENTAL RESULTS Lab experimentation was performed using a real 3D model of a terrain and images from an omnidirectional acquisition system. The dimensions of the model were 115 × 95 cm with elevation variations as high as 32 cm (see Fig. 3(a)). A laser-based 3D scanner was used to capture the terrain and build a DTM with a 1 mm spatial grid (see Fig. 3(b); the DTM shown in the figure has a coarser grid for visualization purposes). Two types of omnidirectional acquisition systems were tested: a configuration of three regular cameras heading in different directions, and a catadioptric system with a parabolic mirror. A. Three Cameras Configuration Three cameras with a wide field of view (80° each) were firmly attached to a robotic arm. Each camera was posed in a different orientation (see Fig. 4). Their internal parameters and relative pose parameters were accurately estimated as part of the system calibration phase. In each experiment the camera configuration was moved along a different trajectory. The robotic arm allowed moving the cameras in a controlled manner while also providing true measurements of the pose of the cameras at all time instances. Fig. 5 shows examples of two of the trajectories evaluated. The first trajectory (a in the figure) contains constant translational motion with the orientation held constant. In the second trajectory (b in the figure) the position and orientation of the cameras were changed significantly. Although highly accurate "ground-truth" data for the trajectory of the cameras was obtained from the robotic manipulator, this trajectory was corrupted using a simulated error model so that the "true" and the a priori trajectories drifted away with time. The error model drifted the trajectory position and orientation by 1 mm/sec and 0.7°/sec, respectively.
In order to compensate for this drift, the proposed algorithm was called at a 1 Hz rate. Whenever activated, the algorithm was supplied with the latest 3 images (one from each camera) and a previous image triplet that was captured 20 mm away. The a priori information was derived from the available drifted pose at these two frames. Since a 20 mm baseline was desired, the algorithm was activated for the first time only after 3 seconds of movement. Later, it was activated periodically at 1-second intervals. During the experiments, gray-level images of 640 × 480 pixels were obtained from each of the three cameras. Correspondence between about 100 features per camera (300 features altogether) was derived using the Lucas-Kanade tracking method [22], [23]. Features were not selected using an image-dependent algorithm, but rather by using a regular grid spanned over the image plane. As shown in Figure 5, the algorithm converged to reasonable estimates for the navigation parameters along the two trajectories described above. The figure shows the "ground truth" together with two trajectories computed using the error model: the first contains no updates, while the second was updated periodically by using the proposed algorithm, at a 1 Hz rate. The figure clearly shows that the corrected path remains close to the true path along the whole trajectory. Figure 6 shows the position and orientation errors of the drifted and corrected paths for the two trajectories. It can be seen that the errors of the corrected path are kept small while the errors in the uncompensated path increase gradually. The saw-tooth shaped graph of the corrected path is characteristic: the orientation errors accumulate between updates but are strongly reduced each time the algorithm is applied. In order to demonstrate the importance of the omnidirectional vision usage, the two trajectories were also reconstructed using 300 features coming from only one of the cameras, while the data from the other two cameras were ignored. Fig. 7(a) compares the translational accuracies that were obtained when using one vs. three cameras while reconstructing trajectory b. A clear advantage can be observed for the utilization of the omnidirectional configuration. In [9], the sensitivities of the proposed algorithm were studied. It was found that the obtained accuracy is highly related to the complexity of the observed terrain. The extreme case, where only a planar segment of the terrain is visible, results in an ill-conditioned system which leads to the failure of the algorithm. Whenever the navigating platform comes close to one of the mountainsides of the terrain, such an ill-conditioned scenario might happen if a regular camera (not an omnidirectional one) is used. However, if an omnidirectional vision system is used, then the rest of the terrain will still be visible even when approaching one of the mountainsides. Therefore, more robust and accurate results can be expected when using omnidirectional vision, as confirmed by Fig. 7(a). Note the blue dot in this figure. At that time instance, the algorithm performance was relatively poor for the single-camera scenario since only a small segment of the terrain was visible to that camera (Fig. 7(b)). B. Catadioptric System In the second experiment the three regular cameras were replaced by a single catadioptric system which consists of a parabolic mirror mounted in front of an orthographic camera (see Fig. 8(a)).
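The grid-based Lucas-Kanade correspondence extraction described above can be sketched with OpenCV's pyramidal implementation. This is only an illustrative snippet written for this text, not the authors' code; the image file names, grid spacing and tracker settings are assumptions.

```python
import cv2
import numpy as np

# Load two consecutive gray-level frames (file names are placeholders).
prev_img = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)
next_img = cv2.imread("frame_t2.png", cv2.IMREAD_GRAYSCALE)

# Features on a regular grid over the image plane (no image-dependent selection),
# roughly matching the ~100 features per camera mentioned in the text.
h, w = prev_img.shape
xs, ys = np.meshgrid(np.linspace(30, w - 30, 10), np.linspace(30, h - 30, 10))
p0 = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32).reshape(-1, 1, 2)

# Pyramidal Lucas-Kanade tracking of the grid points into the second frame.
p1, status, err = cv2.calcOpticalFlowPyrLK(prev_img, next_img, p0, None,
                                           winSize=(21, 21), maxLevel=3)

# Keep only successfully tracked correspondences (the optical-flow field).
flow = [(tuple(a.ravel()), tuple(b.ravel()))
        for a, b, ok in zip(p0, p1, status.ravel()) if ok]
print(f"{len(flow)} correspondences tracked")
```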
Images of 1024 × 768 pixels were captured by the orthographic camera, and 300 feature correspondences between two consecutive images were computed for the algorithm using the Lucas-Kanade method (see Fig. 8(b)). It should be noted that this tracking method is not optimal for catadioptric images due to the nature of the distortion of this kind of image. However, since the catadioptric system was first calibrated, these distortions can be computed and then cancelled. For each feature, a warped image can be rendered from the original images such that the local area of the feature appears as it would in a regular perspective camera. Next, the Lucas-Kanade tracking method can be applied to these warped images with no special difficulty. The translational and angular accuracies that were obtained during the two examined trajectories are presented in Figure 9. The slight deterioration in the algorithm performance (compared to its performance with the three-camera configuration) is probably due to the low resolution at the periphery of catadioptric images and to the usage of the Lucas-Kanade tracking method directly on the distorted images. Fig. 8. (a) The catadioptric system that was used for omnidirectional vision in the second experiment. (b) An example of the optical-flow field that was extracted for the algorithm; each small blue arrow shows a corresponding couple. V. CONCLUSIONS An algorithm for pose and motion estimation using corresponding features in omnidirectional images and a DTM was presented. The DTM served as a global reference and its data was used for recovering the absolute position and orientation of the camera. The derived constraint eliminates the requirement for the commonly used assumption of a single effective viewpoint. As a result, the presented algorithm is applicable to all omnidirectional acquisition systems. The performance of the presented algorithm was demonstrated using both polydioptric cameras and a catadioptric camera. Both position and orientation estimates were found to be sufficiently accurate to bound the accumulated errors and to prevent trajectory drifts. Moreover, the utilization of omnidirectional data was shown to improve the robustness and accuracy of the navigation algorithm, compared to its counterpart algorithm for regular cameras. The improvement is attributed to the wide segment of the visible terrain. Such a segment tends to include much higher complexity than the smaller segments which might be observed when using a regular camera.
2011-08-16T02:22:33.000Z
2006-10-01T00:00:00.000
{ "year": 2011, "sha1": "88ef1522e4b477cd1984058305ec575bb12a5214", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1106.6341", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d9db4ac14d97982e57acc8bb0dbe7145364edf54", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Geology", "Computer Science" ] }
233862448
pes2o/s2orc
v3-fos-license
Adaptive Geological Modelling and Its Application for Petroleum Reservoir Conditions The article discusses the difference between the adaptive geological model and the traditional deterministic one, which is based on the manual work of geologists in the pre-computer era. Today, the deterministic approach to modelling has outlived its usefulness. Computer simulations use other methods. Therefore, it is necessary to break away from the deterministic tradition in order to move on. The adaptive geological model is a mathematical function, the purpose of which is not at all to display actual information about the structure of a petroleum reservoir as accurately as possible, but to predict its structure in undrilled zones. The adaptive approach proves that it is not necessary to obey the facts for a successful forecast. This is often even harmful, since the geological structure of the petroleum reservoir at the points where the wells are drilled is known without modelling. Just as in regression analysis it is not at all necessary that the function passes exactly through all the actual points, it is not necessary for the geological model of the petroleum reservoir to fully reproduce the actual data. This is the fundamental feature of the adaptive geological model. Introduction A digital geological model of any petroleum reservoir is a set of two-dimensional maps and three-dimensional grids that represent the geometry, the distribution of petrophysical parameters, and the original oil in place in the studied object. In this regard, it may seem that there should not be large differences between the deterministic and adaptive versions of the geological model. However, these differences do exist, above all in the gross thickness and number of vertical layers of the model, as well as in the model's reproduction of actual data. Main differences between deterministic and adaptive geological model options Deterministic models are being developed extensively towards ever thinner vertical layers, down to 0.4 m, and, in connection with this, towards total cell counts in the hundreds of millions [1]. However, the amount of initial information required to build such a cumbersome model is not available, so the deterministic model only seems detailed. Certainly, in addition to well data, seismic data can be used to build a model, but the vertical resolution of these data is much coarser than 0.4 m. Slicing thin vertical layers is the weakest point of the deterministic model. Each layer of the model extends over its entire area, which implies that it is correlated everywhere. That is why thin layers impose a false structure on the petroleum reservoir space, since it is known from practice that it is impossible to perform a detailed correlation of layers with a step of 0.4 m. After all, any layer usually has a lenticular structure, and even between adjacent wells some lenses can pinch out and some appear. A significant difference between the deterministic geological model and the adaptive one is also the fact that the adaptive model generally has fewer vertical layers [2,3]. As a rule, the number of such layers does not exceed 6-8. In this case, the average gross thickness of a vertical layer is 5-10 m. If the layer thickness in the deterministic model tends to 0.4 m, then the cell area still remains large and does not correspond to this thickness. When the layer thickness is 5-10 m, it corresponds to the horizontal cell size.
In the adaptive model, even for large petroleum reservoirs, the grid is set with a step of no more than 25 m. When a petroleum reservoir is divided into layers, it is assumed that each layer is a single genetic cycle of sedimentation, with its own inherent patterns. The point of the detailed correlation used in the adaptive model is the correlation of the permeable sublayers. We would like each layer to be characterized by coherent permeable sublayers, but separated from the adjacent layers by fluid seals. Thus, each layer would represent a quasi-single isolated reservoir. Therefore, an arbitrary number of vertical layers cannot be specified in the adaptive model, but exactly as many as can be identified from the actual data. An even more controversial stage in the construction of the deterministic model is its upscaling. There is no reasonable explanation as to why a hydrodynamic model grid cannot be built immediately from well data but must instead be coarsened from the geological model grid. The detail of any model is determined not by the size of its grid, but by the amount of input information from wells and seismic surveys. Another way, justified by practice, looks more reasonable: to go from the general to the particular, i.e. first build a coarse hydrodynamic grid and then refine it down to the geological scale. There is no upscaling in the adaptive model, since it is not built for visualization, but for the needs of the computing system itself, as the basis of the hydrodynamic model. Another difference between the deterministic model and the adaptive one is that the adaptive model in some cases does not coincide with the actual well data. This may seem unacceptable, but the model is needed not so that it coincides with the wells, but to predict the geological structure of the petroleum reservoir in its undrilled zones. Consider, for example, some regression function, say a polynomial of the first or second degree. Its calculated values also do not coincide with the original points, and this is normal. If we take a polynomial of a higher degree, its calculated values can be made to coincide completely with the actual ones, but such a polynomial cannot be used for forecasting. Now let us look at the adaptive geological model. It does not coincide with wells in cases where closely spaced wells have significantly different values of the interpolated parameter. Suppose a well passes through some cell and the well parameter, for example the absolute depth of the reservoir top, is assigned to this cell. This cell has an area of 2500 m² and its edge zones are affected by data from the adjacent well. Even if the influence of the offset well is only 20%, this may be enough for the weighted-average parameter assigned to the cell to differ from the value of the parameter in the well intersecting this cell. One can discard some of the wells in which the greatest differences from the model are noted, which is often done, or one can correct the parameters of these wells to make them closer to the model. But in both cases, the initial information will simply be lost and nothing will be obtained in return, except for the smoothness of the surface. If we approach the model as a function, then variations in the actual parameters in the wells do not interfere with anything. The adaptive model averages and smoothes the well data.
Reproduction of parameters of an oil field in its adaptive geological model The geological space of the petroleum reservoir is highly anisotropic, because it is formed layer by layer over millions of years. The adaptive model is created for each selected layer of the reservoir, and then these models are summarized into a general multi-layer model of the reservoir. This logic of creating and calculating models of individual layers is justified by the fact that the layers were distinguished as single genetic formations, with their own internal laws, which can be traced in the change in the gross thickness or the proportion of permeable layers (the net to gross ratio). When calculating the adaptive models of individual layers, seismic data are used as much as possible, which are defined for all cells. The calculation requires at least three structural surfaces constructed from seismic data. It does not matter on what reflective horizons the structural surfaces are built. It is desirable that they be as close to the studied reservoir as possible. The set of structural surfaces reflects the history of the reservoir formation. The distances between them show the rates of sedimentation. By themselves, they show tectonic activity at different periods of time. It is known from practice that the foundation surface is always most informative. If there are three structural surfaces, then three fields of dispersion of these surfaces can be obtained from them. They reflect the curvature and possible activity of post-depositional processes that affect the formation of fracturing and reservoir properties. Further, there are two thickness fields between these three surfaces. Thus, there are already eight parameters. To them are added two fields of the total derivatives along the X and Y axes, as well as a relief surface that displays new tectonic movements. This is how a vector of eleven parameters is formed for each cell of the model. If there are more than three texture surfaces available, let us say six, then we can form a vector of twenty parameters. Next, a training sample is compiled from those cells through which the wells pass and on this sample a cascade of fuzzy-logic matrices is trained [5]. At the same time, the system has some freedom of choice. It does not take all the components of the vector, but selects from them only nine parameters that have the greatest correlation with the target and the least among themselves. Nine parameters allow to compose a matrix cascade of 36 paired layers (mean and variance). It is a powerful system with an average of 180,000 coefficients and is capable of displaying the complexity of the parameter distribution within the model. Such matrices are compiled for five main parameters of the geological model: gross thickness, net pay thickness, porosity, permeability, and oil saturation. The base fields of these parameters are calculated from them. From Figure 1, it follows that the matrix quite adequately reflects the distribution patterns of the gross thickness, revealing details that are not directly related to wells. In principle, one could simply use the base fields for geological modeling, but they are refined with well data. To do this, the data of these wells is entered into the cells through which the wells passed, and further along the additional grid, the nodes of which are located at distances of at least 150 m from the wells, the data of the base field are entered. 
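The parameter-selection step described above, keeping the candidates that correlate most with the target and least with each other, can be sketched as a simple greedy filter. This only illustrates the stated selection rule, not the fuzzy-logic matrix cascade itself; the threshold and training data below are hypothetical.

```python
import numpy as np

def select_parameters(X, y, n_keep=9, max_mutual=0.9):
    """Rank candidates by |correlation with the target|, then keep a candidate
    only if its |correlation| with every already kept one stays below max_mutual."""
    corr_target = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    kept = []
    for j in np.argsort(-corr_target):
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < max_mutual for k in kept):
            kept.append(j)
        if len(kept) == n_keep:
            break
    return kept

# Hypothetical training sample: 500 well cells, 11 candidate seismic-derived attributes
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 11))
y = 0.7 * X[:, 0] - 0.4 * X[:, 3] + rng.normal(scale=0.5, size=500)
print("selected candidate indices:", select_parameters(X, y))
```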
After that, interpolation is performed in a manner reminiscent of cellular automata (Figure 2). It uses a trend that guides the movement of cells. It can be obtained from seismic parameters or calculated from well data. In addition, grids are created along which weights move. At the first stage, the cells through which the wells pass are initiated. They are given the values of the interpolated parameter that is assigned to the well. These cells are given an initial weight of one. Active cells, in which the parameters are initiated, transfer it to the adjacent six cells and thus activate them. At the same time, they also transmit weight parameters, the values of which are already less than one. Moreover, the weight parameter decreases nonlinearly and its value is influenced by the trend. More weight is transferred when the trend values in the transmitting and receiving cells are close. If the difference is large, then the transferred weight is reduced. It directs the flow. Preferably, it goes where conditions are close. For example, if a structural surface is taken as a trend, then the flow is preferable to where the absolute depths of the receiving and transmitting cells are close. If a map of the gross thickness is interpolated, then it can be assumed that the depositional conditions on the flank and in the dome of the structure were different. Therefore, the original active cells located on the flank will propagate along the flank and not move towards the structure's dome. A cell that has passed its parameter to an adjacent cell loses its activity. But the cell to which the parameter and weight were transferred becomes active. The process continues until all the cells of the model receive their parameter and weight. Note that the initially activated well cells do not change further in the course of one calculation. This is too harsh a condition. In order to reduce this rigidity, the sample of wells is sorted by parameter value and divided into three or four parts. Further interpolation is performed separately for each part of the sample. At the same time, interpolated values are entered to the places of the missing wells, which will differ from the parameters of these wells. In this case, interpolation does not take place over the entire area of the model, but only for the first 15 -20 steps, so that only the cells of adjacent wells are influenced. As a result, the adaptive model never exactly coincides with the wells, but more objectively reflects the distribution patterns of geological parameters. Having the base maps, it is possible to calculate the fields of parameters necessary for the hydrodynamic model: pore volume per square meter, hydraulic conductivity and density of the original oil in place per square meter. The parameter of hydraulic conductivity is normalized in the range from 0 to 1, since there is no need for its absolute value, but only for its relative value. However, the conductivity calculated from petrophysical data alone is not sufficient. Therefore, it is refined by two additional fields built according to well parameters -production wells liquid rates and injection wells liquid rates, which are more informative than petrophysical parameters. The first of them is the field of wells interaction coefficients, which are calculated from the time series data of liquid rates of the production and injection wells. Further, in a similar way, the field of the coefficients of wells interaction among the production wells are calculated. 
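A minimal sketch of the wavefront interpolation described above, under simplifying assumptions: a square grid with four neighbours per cell (the article transfers to six adjacent cells), a weight that decays by a fixed factor per step, and an extra reduction when the trend values of the transmitting and receiving cells differ. The decay and penalty constants are placeholders.

```python
import numpy as np

def ca_interpolate(shape, wells, trend, decay=0.9, penalty=0.5):
    """wells: {(i, j): value} for cells intersected by wells; trend: array of `shape`.
    Every cell ends up with the value carried in by the strongest (highest-weight) front."""
    value = np.full(shape, np.nan)
    weight = np.zeros(shape)
    active = list(wells)
    for (i, j), v in wells.items():
        value[i, j], weight[i, j] = v, 1.0            # well cells start with weight one

    while active:
        nxt = []
        for i, j in active:
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if not (0 <= ni < shape[0] and 0 <= nj < shape[1]):
                    continue
                dt = abs(trend[ni, nj] - trend[i, j])
                w = weight[i, j] * decay / (1.0 + penalty * dt)  # less weight across a trend jump
                if w > weight[ni, nj]:                # a stronger front captures the cell
                    value[ni, nj], weight[ni, nj] = value[i, j], w
                    nxt.append((ni, nj))
        active = nxt                                  # transmitting cells become passive
    return value

# Hypothetical 20 x 20 layer with two wells and a simple north-south trend surface
trend = np.tile(np.linspace(0.0, 1.0, 20), (20, 1))
grid = ca_interpolate((20, 20), {(3, 3): 12.0, (15, 16): 4.0}, trend)
print(grid[::5, ::5].round(1))
```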
The next step is related to the construction of the logarithm field of the maximum liquid rates of the production wells. This is done using a cascade of fuzzy-logic matrices with nine input parameters, which has already been built up to this point in the geological model. For this, a part of the liquid rate of the well related to a specific layer is taken. This proportion is calculated in the main multilayer model at the moment when all geological models of the layers are already available according to the petrophysical parameters of these layers. Having received the field of maximum liquid production rates, the field of hydraulic conductivity is modified once again and in the end it turns out to be more meaningful than if it had been calculated only from petrophysical data. Here, as it were, the geological model is tuned to history, which, when using the deterministic approach, is done only at the stage of hydrodynamic modeling. The adaptive approach immediately takes into account the liquid rates when calculating the hydraulic conductivity field. General scheme and results of calculating the adaptive geological model The adaptive geological model is calculated in five stages. The first stage prepares the data, corrects stratigraphic markers if necessary, and splits the petroleum reservoir into layers. On the second, auxiliary submodels are calculated according to the main five parameters. On the third, these models are summed up into multilayer ones, geological sections of auxiliary nodes are calculated, oil and fluid production, and injection is divided into layers and all this is distributed according to submodels. At the fourth stage, the submodels calculate the fields of wells interactions and maximum liquid rates. At the fifth stage, these fields are summed up into a multilayer model, the hydraulic conductivity field is corrected and the oil-water and gas-oil contacts are calculated, as well as the density field of the original oil in place. This completes the calculation of the adaptive geological model of any studied reservoir. The summation of submodels in itself does not present any difficulties, but at this stage there is also the task of forming geological sections of auxiliary nodes. Although there are few layers in the adaptive geological model, this is compensated by auxiliary nodes [5][6][7]. They are located on a grid of 150 meters and cover the entire area of the model, while they are located no closer than 150 meters from the actual or new wells for drilling. At the third stage of the calculation, when the main parameters of the multilayer model are already obtained, geological sections of auxiliary nodes (including new wells for drilling) are calculated. For this, the sections of these nodes are divided into intervals with an average thickness of 0.4 m. The division is done separately for each layer of the model, and the number of intervals is the same for all auxiliary nodes. In essence, it resembles the division into layers of the deterministic geological model, but with a relatively large grid spacing. And at the same time, there is no general bulky thin-layered grid. Node data is stored in binary files, from where it is uploaded if necessary. Such a need arises when the geological section is drawn through the reservoir, which can be obtained in real time simply by drawing a line of this section on the map (Figure 3). The cuts of the auxiliary nodes are calculated by the interpolation method using the cellular automata mechanism. 
To do this, the same kind of sections are first created for the actual wells, and interpolation is then performed. The most difficult step is the identification of permeable and impermeable sublayer intervals. Interpolation is carried out over model layers with a thickness of 0.4 m, and for each such sublayer the probability that it is permeable is calculated. Then, given the known net pay thickness of the layer at any auxiliary node, that thickness is assembled from the sublayers with the highest probability of being permeable. The sections of the auxiliary nodes are subsequently transferred to the submodels and used to calculate the well interaction coefficients. The grid of the adaptive geological model is used directly, without any upscaling, for the hydrodynamic simulation. Impermeable cells are removed only from the hydrodynamic model grid in order to reduce the consumption of random access memory, which is usually considerable in hydrodynamic simulation. Conclusion It cannot be claimed that the proposed methodology is fully developed and that nothing remains to be added. On the contrary, it is best viewed as a concept of how to create an adaptive geological model. Nevertheless, this concept has been implemented in a workable software package. Although it can be strengthened further, it already shows that the novel geological modelling approach has no inherent complexity limits; it is therefore suitable for almost any petroleum reservoir, including those with unconventional reserves, which are characterized by such a serious deficit of geological information that deterministic options cannot be used for their geological and hydrodynamic calculations.
2021-05-07T00:04:30.617Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "4069026be5e221f8be641796f19539c48b477f1d", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/666/2/022065", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "cf61755e46a4c553db5bc42447d11355169159a5", "s2fieldsofstudy": [ "Geology", "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Geology", "Physics" ] }
218605206
pes2o/s2orc
v3-fos-license
Chondromesenchymal hamartomas in a 24-year-old male mimicking a posterior mediastinal tumor and a 5-month-old boy with postoperative disseminated intravascular coagulation: two case reports. Background Chondromesenchymal hamartoma of the chest wall is a rare, benign disease that usually presents at birth or in early infancy. It typically involves one or more ribs, forming a unilateral or bilateral extrapleural mass. Patients may be asymptomatic or complain of mild respiratory distress depending on tumor size and location. To the best of our knowledge, only two of the approximately 100 cases reported so far are adults. Case presentation We present two cases of chondromesenchymal hamartoma. The first case involved the left fifth rib in a 24-year-old male, in close proximity to the fifth vertebral body in the left posterior mediastinum, mimicking a posterior mediastinal tumor on imaging. The tumor was excised via thoracoscopy and the patient had an uneventful postoperative course. The second case was that of a 5-month-old boy, who had a tumor involving the left fifth and sixth ribs which caused thoracic cage collapse. Following en bloc resection of the tumor and the involved rib segments, the patient was transferred to the intensive care unit for treatment of pulmonary infection and disseminated intravascular coagulation (DIC). He was discharged from the hospital in stable condition 11 days later. On histopathology, the tumor was found to be a chondromesenchymal hamartoma with immature spindle-shaped mesenchymal cells, plate-like hyaline cartilage, areas of woven bone formation, endochondral ossification and calcification, osteoclastic giant cells, and secondary aneurysmal bone cysts. Conclusions Although the presently reported cases have morphological characteristics similar to previously reported ones, they had distinct radiological and clinical characteristics. Patient 1 is only the third report of an adult with chondromesenchymal hamartoma. His case was characterized by its radiological appearance mimicking a posterior mediastinal tumor. Patient 2 represents the first documentation of DIC as a postoperative complication following excision of a chondromesenchymal hamartoma. We present these two cases to provide clinicopathological insights regarding this extremely rare tumor that are relevant to both pathologists and clinicians. (Continued from previous page) Conclusions: Although the presently reported cases have morphological characteristics similar to previously reported ones, they had distinct radiological and clinical characteristics. Patient 1 is only the third report of an adult with chondromesenchymal hamartoma. His case was characterized by its radiological appearance mimicking a posterior mediastinal tumor. Patient 2 represents the first documentation of DIC as a postoperative complication following excision of a chondromesenchymal hamartoma. We present these two cases to provide clinicopathological insights regarding this extremely rare tumor that are relevant to both pathologists and clinicians. Keywords: Chondromesenchymal hamartoma, Ribs, Adult, Infant, Pathological diagnosis Background Chondromesenchymal hamartoma of the chest wall presents at birth or in early infancy as an intraosseous expansile mass involving the ribs. It has an incidence of about 0.03% among primary bone tumors and shows male predominance; approximately 100 cases have been reported worldwide [1]. 
The tumor is composed of a disorganized admixture of cartilaginous components, spindle cell fascicles, woven bone, and hemorrhagic cysts. Surgical resection is the appropriate treatment and careful follow-up is necessary for early recognition of complications. In some cases, the aggressive appearance of the tumor may prompt unnecessary extended surgery with chest wall reconstruction, which may lead to complications such as trunk deformity and scoliosis [2]. There is no specific immunohistochemical (IHC) marker for this unusual disease. Histologic examination is generally adequate to establish the diagnosis, given the unique morphologic features. Imaging studies offer important clues; however, image-based diagnosis may be difficult, especially with atypical patient age or tumor location. The present case reports illustrate the need for a high suspicion index for chondromesenchymal hamartoma in similar cases. Case presentations Case 1: Chondromesenchymal hamartoma in a 24-yearold male mimicking a posterior mediastinal tumor A chondromesenchymal hamartoma of the chest wall was incidentally discovered on the imaging studies of a 24-yearold male who presented with complaints of persistent cough in May 2019. Digital radiography (DR) was suggestive of a left posterosuperior mediastinal mass with bronchial changes ( Fig. 1a-b). Computerized tomography (CT) revealed a benign expansile lesion in the posterior part of the left fifth rib with interior punctate calcifications, suggestive of an enchondroma (Fig. 1c). Magnetic resonance imaging (MRI) revealed a well-defined dumbbell shaped lesion with equal T1 and long T2 signals. The lesion measured approximately 32 mm × 25 mm. The expansile heterogeneous soft tissue lesion arising from the left fifth rib closely adjoined the fifth vertebral body in the left posterior mediastinum. The mass was characterized by substantially restricted diffusion and progressive heterogeneous enhancement (Fig. 1d-f). We suspected a chondrogenic or a neurogenic tumor of the left posterior mediastinum. Following preoperative optimization, the mass was thoracoscopically excised; intercostal nerve block and T4-6 pedicle internal fixation were performed. The patient had an uneventful recovery and was discharged in stable condition on the third postoperative day. Case 2: Chondromesenchymal hamartoma in a 5-monthold boy with postoperative disseminated intravascular coagulation A 5-month-old boy was admitted to the hospital with an asymptomatic, progressively enlarging painless mass in the left infra-axillary area of the lateral chest wall in August 2015. DR revealed a well-circumscribed soft tissue mass in the left middle lung field, measuring approximately 47 mm × 39 mm, accompanied by collapse of the adjacent thoracic cage and deformation of the left fifth and sixth ribs. The lesion was suspected to be a benign chondrogenic tumor ( Fig. 2a-b). CT revealed a benign tumor or tumor-like lesion involving the axillary segments of the left fifth and sixth ribs (Fig. 2c). The corresponding cortical and medullary rib cavities were involved and the mass was solid-cystic with several speckled and cord-like high-density internal shadows (Fig. 2d). There was mild enhancement in the solid areas and lack of enhancement in the cystic areas. Localized emphysema in the left lung field was also observed. Based on these radiographic characteristics and the patient's age, a preoperative diagnosis of mesenchymal hamartoma was made. 
Two weeks later, the infant underwent en bloc resection of the tumor and the involved rib segments. The marrow cavity was sealed using bone wax and a thoracic tube drain was placed. Postoperatively, the infant developed fever (maximum temperature, 39.7°C) with marked elevation of C-reactive protein and procalcitonin levels and white blood cell count. He was transferred to the intensive care unit and started on vancomycin and ceftazidime for a presumptive diagnosis of pulmonary infection. Coagulation function tests were suggestive of disseminated intravascular coagulation (DIC): D-dimer, 5.36 mg/L; antithrombin III, 67.4%; fibrinogen degradation products, 11.3 μg/mL; prothrombin time, 19.4 s (11.0-4.0 s); activated partial thromboplastin time, 62.6 s (25.0-35.0 s); and fibrinogen, 5.38 g/L (2.00-4.00 g/L). Heparin sodium was administered as an anticoagulant and fresh frozen plasma was transfused to correct coagulation disorders. Subsequently, the infant's condition improved, the thoracic tube drain was removed 7 days after surgery, and the infant was discharged from the hospital on the 11th postoperative day. Case 1 Macroscopically, the tumor was multilocular and measured approximately 3.5 cm × 2.5 cm × 1.5 cm. Microscopically, the solid area consisted of hyaline cartilage with endochondral calcification (Fig. 3a-b) and ossification and fascicles of mesenchymal spindle-shaped cells ( Fig. 3c-f). The cystic portion was composed of aneurysmal bone cyst (ABC)-like structures (i.e. hemorrhagic spaces enclosed by fibrous connective cyst walls with scattered osteoid trabeculae and osteoclastlike giant cells). Moreover, ossification could be frequently observed within the fibrous walls and in the cartilage background ( Fig. 3g-h). As the name "chondromesenchymal hamartoma" implies, there are typically several different histological components mixed together in a non-malignant pattern: mesenchymal spindle cells, frequent ossification, and secondary changes such as multinucleated osteoclastic giant cells and hemorrhagic spaces with fibrous cystic walls ( Fig. 3i-j). In addition, fluorescence in situ hybridization (FISH) detection using USP6 break-apart probes was conducted for case 1. The USP6 break-apart FISH result was negative ( Fig. 4a-b), which strongly rules out primary ABC. Case 2 The tumor measured approximately 5 cm in diameter with focal cystic changes. Microscopically, most of the solid section's area consisted of fascicles of mesenchymal spindle cells interwoven with multilobulated hypercellular hyaline cartilage (Fig. 5a-b). Scattered woven bones and osteoclastic giant cells were mixed with spindle cells and surrounded by cartilage (Fig. 5c-d). Some myeloid tissues could be observed among the lobulated cartilage ( Fig. 5ef), which confirmed the CT findings of tumor involvement of the corresponding cortical and medullary rib cavities. The cystic area comprised various-sized blood-filled spaces enclosed mostly by fibrous connective tissue, identified as secondary ABCs (Fig. 5g-h). Similarly to case 1, ossification could also be found in the form of spindleshaped fibroblasts and cartilage in this case. Collectively, the histological features of both cases were consistent with the diagnosis of chondromesenchymal hamartoma of the chest wall. Follow-up information The 24-year-old male (case 1) had DR examinations at 1 month and 3 months postoperatively, which showed no sign of recurrence, T4-6 pedicle internal fixation could still be observed and the remnant lung had a normal appearance. 
For case 2, the patient had DR examinations every year since the surgery in 2015, which showed no sign of recurrence and a normal lung appearance. Both patients had neither respiratory issues nor any other newly discovered tumor at the last follow-up on January 30th, 2020. Discussion and conclusions The term "mesenchymal hamartoma" was first proposed in 1979 by McLeod and Dahlin, as it best reflected the benign nature of this lesion composed of disordered but non-neoplastic skeletal tissues [3]. Odell and Benjamin were the first to use the term "mesenchymal hamartoma of the chest wall" in 1986 [4]. Its incidence is estimated to be 1 in 3000 among primary bone tumors or less than 1 per million in the general population [5]. Approximately 100 cases have been described to date, most occurring prenatally or within the first 6 months of life [6]. To the best of our knowledge, only two cases of chondromesenchymal hamartoma have been reported in adults. Bilateral multifocal lesions (CT revealed three masses on the right side and two masses on the left side) were discovered in a 47-year-old man who did not undergo treatment until 13 years later when he developed chest pain [7]. Another asymptomatic left chest wall tumor was discovered incidentally during a complete medical checkup in a 39-year-old woman. The tumor was excised en bloc with segments of the 7th and 8th ribs [8]. The age of onset in our case (24 years old) is also quite uncommon, making this only the third adult patient with chondromesenchymal hamartoma reported worldwide. Chondromesenchymal hamartomas are usually unilateral and are commonly seen on the right side, with a male-to-female ratio of 1.6:1 [9]. A few cases of bilateral lesions have also been reported [1,7,10]. Typically, these lesions arise from one or several ribs and their size may range from a few to a dozen centimeters. In most cases, these lesions occur in isolation; however, they are occasionally multifocal [7,11,12]. Patients may present with respiratory distress or be asymptomatic. Less common manifestations include scoliosis, chest wall deformity, cough, and fever [1]. One infant died 14 days after birth of severe sepsis secondary to Pseudomonas aeruginosa infection and pulmonary insufficiency [13]. Patient 2 in this report is the first documented instance of DIC as a postoperative complication following excision of chondromesenchymal hamartoma. Although this complication may not be directly related to this condition or its surgical treatment, it remains a possibility; therefore, we believe that assessing coagulation function preoperatively is important. Imaging studies are helpful in determining the site of origin, tumor density, enlargement, and effect on adjacent structures; however, imaging is not considered diagnostic and may be misleading if the tumor location or patient age is atypical [14,15]. In patient 1, the imaging finding of a paravertebral mass in an adult appeared to mimic a posterior mediastinal tumor and the radiologist suggested the possibility of a neurogenic tumor. Malignant lesions such as congenital neuroblastoma, Ewing's sarcoma, malignant teratoma, osteosarcoma, or chondrosarcoma cannot be excluded in the presence of cortical erosion, rib destruction, or deformation of adjacent ribs as seen on imaging [16,17]. Biopsy of the lesion can be complicated by severe bleeding because of disruption of the vascular spaces; therefore, needle biopsy should be performed cautiously [1,2]. 
Microscopically, chondromesenchymal hamartomas have immature spindle-shaped mesenchymal cells, platelike hyaline cartilage, woven bone formation, endochondral ossification and calcification, osteoclastic giant cells, and secondary ABC changes; abnormal mitoses and atypia are not present [18]. Woven trabeculae containing hematopoietic marrow are common, as observed in patient 2. Areas resembling ABC, with osteoclast-like giant cells, blood-filled spaces, hemosiderin-laden macrophages, and fibromembranous septa, are specific for chondromesenchymal hamartoma. It has been proposed that the formation of an ABC is secondary to intraosseous arteriovenous fistula formation [19]. IHC staining may demonstrate the presence of S-100 protein in cartilaginous areas [4]. To the best of our knowledge, no current molecular genetic tests are available to assist in the diagnosis of chondromesenchymal hamartoma [10]. The differential diagnosis of chondromesenchymal hamartoma includes tumoral and non-tumoral lesions involving the ribs that are common in infants and children, including primary ABC, chondrosarcoma, enchondroma, osteochondroma, fibrous dysplasia, and osteofibrous dysplasia (OFD) [14]. Primary ABC and chondromesenchymal hamartoma both show cystic areas; however, they lack solid cartilage nodules and component diversity. It is noteworthy that ABC could be secondary to various bone tumors, including giant cell tumors, chondroblastomas, fibrous histiocytomas, chondromyxoid fibromas, fibrous dysplasia, and osteosarcoma [20]. Chondrosarcoma is primarily a tumor of adulthood and older age [21]. It is characterized by high cellularity, presence of host bone Fig. 4 USP6 rearrangement detection by a break-apart probe was negative in case 1. a, b Two different fields showing how the red and green signals did not break apart but rather appear in the same position or in close positions within the nuclei of the spindle-cell population, which should be interpreted as negative according to the manufacturer's criteria (× 100) entrapment, and absence of host bone encasement. Chondrosarcoma is characterized by mild-to-moderate atypical chondrocytes, varying in size and shape, and containing enlarged, hyperchromatic nuclei. Myxoid changes or chondroid matrix liquefaction is a common feature of chondrosarcomas [22]. Enchondroma is a benign hyaline cartilage neoplasm arising within the medullary bone cavity; normal bone marrow elements may also be observed between its nodules, as seen in patient 2. Noticeably, enchondroma often appears as pale blue on hematoxylin and eosin staining owing to its high matrix proteoglycan content and it is less diverse in terms of histological components than chondromesenchymal hamartoma. Osteochondroma originates from the bone surface and possesses a distinctive three-layer structure of perichondrium, cartilage, and bone. The outer layer is a fibrous perichondrium that is continuous with the periosteum, below which is a hyaline cartilage cap with endochondral ossification. Similar to chondromesenchymal hamartoma, fibrous dysplasia can occur in the ribs, contain a cartilaginous component with endochondral ossification, and have secondary changes including ABC-like areas and multinucleated osteoclastic giant cells. However, fibrous dysplasia is mainly composed of bland fibroblastic cells and irregular trabeculae of woven bone; mesenchymal cells and plate-like hyaline cartilage are not its main components [22]. 
OFD mostly involves cortical bone of the anterior mid-shaft of the tibia during infancy and childhood; it is composed of fragments of woven bone rimmed by lamellar bone layers laid down by well-defined osteoblasts [22]. Although secondary ABC and multinucleated giant cells may be seen in OFD, there is an absence of cartilage. Lung hamartoma, which is the most common cartilage-containing benign lung tumor, should also be taken into consideration in the differential diagnosis. Since this tumor is composed of tissues that are normally present in the lung, the presence of normal bronchial epithelium could be a valuable clue [23]. Unlike chondromesenchymal hamartoma, lung hamartoma is mostly found in the lung parenchyma or within the bronchus and may only secondarily involve the ribs [24]. Molecular diagnostic techniques have recently emerged as independent diagnostic tools to improve diagnostic accuracy and reduce interobserver variability; many characteristic genetic alterations have been identified in bone tumors [25]. USP6 and/or CDH11 rearrangements are found in 69% of primary ABCs, but not in secondary ABC [26]. IDH1 (R132C; R132H) or IDH2 (R172S) mutations may occur in enchondromas, atypical cartilaginous tumor/grade 1 central chondrosarcomas, grade 2/3 central chondrosarcomas, and dedifferentiated chondrosarcomas; however, they are absent in osteochondromas [27,28]. MDM2 and CDK4 are amplified in low-grade central osteosarcomas and periosteal osteosarcomas, as demonstrated by FISH/IHC [29]. K36M mutations in H3F3B appear in about 95% of chondroblastomas, while G34W/L mutations in H3F3A are found in 92% of giant cell tumors of the bone [30]. It is likely that molecular markers will increasingly play an important role in improving the diagnosis and treatment of bone tumors. The treatment strategy for chondromesenchymal hamartoma involves one of two main approaches: conservative management for asymptomatic patients and surgical treatment for patients with respiratory distress caused by mass compression. A case of spontaneous regression of a chest wall hamartoma in an infant was reported [31]. In most cases, surgical resection is chosen irrespective of symptoms. Secondary surgery may be required following incomplete resections [9]. Patients with significant upper airway obstruction may need permanent tracheotomy [32]. A third management option, radiofrequency thermoablation (RFT), a relatively noninvasive technique performed under CT guidance, was performed in a 6-monthold girl [33]. RFT causes coagulative necrosis in the lesion, which is gradually reabsorbed. This method avoids damage to the adjacent normal bone and is well tolerated in children, thereby decreasing the risk of severe postoperative complications [34]. Herein, we reported two extremely rare cases of chondromesenchymal hamartoma. Although the lesions in these cases were morphologically similar to previously reported cases, they had distinct radiological and clinical characteristics. To the best of our knowledge, case 1 is only the third report of an adult patient with chondromesenchymal hamartoma. This patient was suspected of having a posterior mediastinal tumor on radiology. Case 2 is the first documentation of DIC as a postoperative complication of chondromesenchymal hamartoma. This report may raise awareness regarding the presentation, diagnosis, and management of chondromesenchymal hamartoma among pathologists, radiologists, and clinicians.
2020-05-13T14:49:14.751Z
2020-05-12T00:00:00.000
{ "year": 2020, "sha1": "a108cb317200e0eeccd88b0c82e6db1c70792a2a", "oa_license": "CCBY", "oa_url": "https://diagnosticpathology.biomedcentral.com/track/pdf/10.1186/s13000-020-00940-0", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a108cb317200e0eeccd88b0c82e6db1c70792a2a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
234341500
pes2o/s2orc
v3-fos-license
Finite element simulations of rail milling based on the modified Johnson-Cook constitutive model Rail milling plays an indispensable role to remove surface defects and irregularity in the railway maintenance industry. In this paper, an appropriate 2D orthogonal cutting model of rail milling process was established. Finite element simulations of rail milling were conducted in detail based on a modified Johnson-Cook constitutive model. The simulation results show that a large amount of cutting heat is generated in the tool-chip contact region, and the highest temperature of the milling insert appears somewhere on the rake face with a certain distance away from the cutting edge. With the increase of cutting thickness and milling speed, the milling temperature displays an upward trend. Introduction Rail is one of the extraordinarily significant infrastructures in the railway network, which directly impacts the smooth and safe operation of trains. With the rapid development of railway network throughout the world, the train speed becomes faster and the axle load becomes heavier, thus more defects are generated under the intricate wheel/rail contact conditions, such as rolling contact fatigue cracks [1], spalling [2] and squats [3]. To guarantee the safety and stability of railway network, it is crucial to adopt rail maintenance technologies to effectively eliminate the surface defects and unevenness, which can be divided into rail grinding and rail milling. Compared with rail grinding, rail milling technology is a kind of a rather new application in the rail maintenance field, which is originated from LINSINGER company in Austria. During the process of rail milling, a combined milling cutter with the milling inserts distributed along its circumferential surface is utilized to perfectly cover the profile of rail head, which can efficiently eradicate the surface flaws and irregularity in only one pass. In addition, rail milling possesses the advantages of free of sparks, environmentally-friendly and high machining precision. Up to date, many finite element researches on the milling process of composites, aluminum and other alloys can be available [4][5][6], however, there are very few finite element simulations addressing the rail milling process, which reveals that there is a lack of adequate research on the finite element modeling of rail milling process. Performing rail milling simulations is conducive to get a better understanding of the extremely complex metal cutting process. In this paper, firstly, a 2D orthogonal cutting model of rail milling process is established by simplifying the 3D rail peripheral milling process equivalently and reasonably. Then, the finite element simulations are carried out on the basis of a modified Johnson-Cook constitutive model. Finally, based on the simulation results, the temperature distribution and the chip formation process are discussed in detail. Currently, 2D orthogonal cutting finite element simulation is widely adopted for the analysis of machining process. The conversion of 3D rail peripheral milling process to 2D orthogonal cutting can effectively reduce the element numbers and improve the simulation efficiency. During the down milling process, the cutting thickness constantly shifts, the cutting area of a milling insert is the area enclosed by the motion trajectories of two adjacent milling inserts. 
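As a numerical aside to the equivalence argument: with a small feed per tooth, the instantaneous uncut chip thickness varies only within a fraction of a millimetre over the engagement arc. The sketch below uses the textbook down-milling relation h(φ) ≈ f_z·sin(φ) with hypothetical cutter parameters; the paper's own equivalent-thickness definition is based on the enclosed cutting area and is not reproduced here.

```python
import numpy as np

# Hypothetical down-milling parameters (not taken from the paper's tables)
f_z = 0.3     # feed per tooth, mm
r = 100.0     # cutter radius, mm
a_e = 20.0    # radial depth of cut, mm

K_c = np.arccos(1.0 - a_e / r)            # engagement (contact) angle
phi = np.linspace(0.0, K_c, 200)
h = f_z * np.sin(phi)                     # textbook instantaneous chip thickness

print(f"contact angle      : {np.degrees(K_c):.1f} deg")
print(f"max chip thickness : {h.max():.3f} mm")
print(f"mean chip thickness: {h.mean():.3f} mm")
```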
Although the cutting thickness changes continuously, due to the high speed of the milling cutter and the very small feed per tooth, the change in the cutting thickness is also extremely small, therefore, the continuously changing cutting thickness can be regarded as equivalent uniform cutting thickness h e , the conversion of 3D rail peripheral milling process to 2D orthogonal cutting is shown in Figure 1. The equivalent uniform cutting thickness h e is calculated as where f z and r represent feed per tooth and the radius of the milling cutter, respectively. K c and A c indicate the contact angle and the cutting area, respectively, which can be calculated as 1 z c cos 2 It can be found form Equation (1) that the equivalent uniform cutting thickness h e is the function of the radius of the milling cutter and feed per tooth, when the radius of the milling cutter remains constant, h e is only related to feed per tooth f z . Figure.1 Conversion of 3D rail peripheral milling process to 2D orthogonal cutting ABAQUS/Explicit in the commercial finite element simulation software ABAQUS TM 6.14 is engaged to establish 2D orthogonal finite element model of rail milling. CPE4RT four-node plane strain thermal-mechanical coupling element is used to perform mesh division for the workpiece and the cutting tool. The cutting tool is set as an ideal rigid body, and the workpiece is set as a plastic body. Mesh refinement is performed in the tool/workpiece contact area, while the non-contact area is divided by relatively sparse meshes. The horizontal displacement of the left nodes and the horizontal and vertical Figure 2. The rake angle and clearance angle are -8 o and 8 o , respectively. In this study, the workpiece is U71Mn rail material made from high carbon steel to possess high fatigue toughness, which is extensively engaged in current Chinese railway network [7]. The chemical compositions of the studied material are listed in Table 1. In order to avoid serious mesh distortion which can result in difficulty in convergence, arbitrary Lagrange-Euler (ALE) and adaptive mesh technique are adopted. During the simulation, the cutting tool moves at the milling speed v c , while the workpiece is fixed. The initial ambient temperature is set as 25°C. The heat transfer coefficient between the workpiece material and the tool is set as 10 5 W· m -2 ·°C -1 . where σ represents von Mises stress; ε is the plastic strain; ̇is the true strain rate; T indicates the experimental temperature. The room-temperature physical properties of U71Mn rail material and the milling insert (Chinese brand: YC30S) are listed in Table 2. The temperature-dependent thermo-physical properties of U71Mn rail material and the milling insert are given in Table 3 and Table 4, respectively. Tool-chip friction model The highly versatile modified Coulomb friction model is taken as the tool-chip contact model. During the cutting process, the tool-chip contact region is composed of two parts, namely the slip zone and the stick zone. The chip stick-slip feature along the tool-chip interface relies on the normal stress between tool/chip surfaces. The modified Coulomb friction model used in the simulation is formulated as [8] , < , where τ f is the frictional stress; τ y and σ n are the shear yielding stress and normal stress, respectively; μ is the coefficient of coulomb friction. Professor T. 
Altan, a well-known expert in numerical analysis from Ohio State University, pointed out that when the friction coefficient is set as 0.6, high calculation accuracy can be obtained for finite element simulation under multiple materials and multiple changing parameters [9]. Therefore, μ=0.6 is applied in the study, and τ y =711.03 MPa. Damage model Cockroft-Latham criterion is adopted as the failure or damage model of the studied material, which is formulated as [10] p p 0 d f C   =  (6) where p f  is the equivalent plastic strain when the material fractures, C=481.98MPa is adopted in this study. Chip formation and temperature distribution analysis When the milling speed v c =200 m/min, the strip chip formation process and temperature distribution are shown in Figure 3. At the beginning of milling process, tower-like shape is formed at the chip top, and the temperature gradient in the first deformation zone is large. As the milling process proceeds, the cutting temperature continues to rise, and the cutting material is strongly squeezed and sheared by the cutting tool, during which the new machined surface is formed. Due to the strong extrusion and friction effects between the chip and the tool, a large amount of cutting heat is generated in the tool-chip contact area, which causes the temperature of the contact area to rise rapidly. The maximum temperature of the cutting tool appears somewhere on the rake face with a certain distance away from the cutting edge. The highest chip temperature all appears on the side which interacts with the rake face. It can also be found from Figure 3 that the internal temperature of the workpiece does not change significantly, and the machined surface temperature is also remarkably lower than that of the chip temperature. This is caused 5 by the reason that the chips take away most of the heat generated by the plastic deformation of the cutting layer, and a relatively smaller amount of heat is transmitted into the workpiece. Figure.3 Strip chip formation process and temperature distribution Cutting temperature distribution under different cutting thickness When the milling speed v c = 200 m/min, and the feed per tooth is 0.1 mm/z, 0.3 mm/z and 0.5 mm/z (namely, the corresponding equivalent uniform cutting thickness h e is 0.064 mm, 0.191 mm and 0.318 mm, respectively), the cutting temperature distribution is displayed in Figure 4. It can be observed that the temperature of the tool-chip contact area and the machined surface temperature increase as the cutting thickness increases. In the meanwhile, the chip thickness also increases as the cutting thickness increases. Figure 5. It can be found that the temperature of tool-chip contact area and the machined surface temperature increase with the increase of milling speed. Conclusions In this work, the finite element simulations of rail milling have been conducted in detail based on a modified Johnson-Cook constitutive model, and the following conclusions can be drawn: (1) Based on the equivalent uniform cutting thickness, 3D rail peripheral milling process is converted into the 2D orthogonal cutting process. An appropriate 2D orthogonal cutting model of rail milling process is established with a modified Johnson-Cook constitutive model. (2) The maximum temperature of the cutting tool appears somewhere on the rake face with a certain distance away from the cutting edge. The highest chip temperature all appears on the side which interacts with the rake face. 
The internal temperature of the workpiece does not change significantly, and the machined surface temperature is markedly lower than the chip temperature. (3) Both the temperature of the tool-chip contact area and the machined surface temperature rise as milling speed and cutting thickness increase.
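For reference, a minimal sketch of the tool-chip friction and damage rules, using the parameter values quoted above (μ = 0.6, τ_y = 711.03 MPa, C = 481.98 MPa). The stick-slip form τ_f = min(μσ_n, τ_y) is the usual reading of the modified Coulomb model, and the damage rule is written as the standard Cockroft-Latham integral of stress over equivalent plastic strain; neither equation is legible in the extracted text, so both forms are assumptions, and the modified Johnson-Cook flow-stress coefficients are likewise omitted because they are not reproduced in the text.

```python
import numpy as np

MU = 0.6         # Coulomb friction coefficient (from the text)
TAU_Y = 711.03   # shear yielding stress, MPa (from the text)
C_CL = 481.98    # Cockroft-Latham threshold, MPa (from the text)

def friction_stress(sigma_n):
    """Slip zone: tau_f = mu * sigma_n while mu * sigma_n < tau_y; stick zone: tau_f = tau_y."""
    return np.minimum(MU * np.asarray(sigma_n, dtype=float), TAU_Y)

def cockroft_latham_failed(stress, plastic_strain):
    """Trapezoidal integral of stress over equivalent plastic strain, compared with C.
    The choice of stress measure in the integrand is an assumption."""
    s = np.asarray(stress, dtype=float)
    e = np.asarray(plastic_strain, dtype=float)
    damage = float(np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(e)))
    return damage >= C_CL, damage

print(friction_stress([200.0, 800.0, 1500.0]))   # MPa -> slip, slip, stick (capped at tau_y)
failed, d = cockroft_latham_failed([900.0, 1100.0, 1200.0], [0.0, 0.3, 0.6])
print(f"damage integral = {d:.1f} MPa, element deleted = {failed}")
```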
2021-05-11T00:06:54.669Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "2e20aef6829ffd7f505c7eb98da552b7d874eb94", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1759/1/012025", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "f9aed368eaef440f586f3a73d0d01982282c04dc", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
239461570
pes2o/s2orc
v3-fos-license
Evaluation and Prediction of Status of Coastal Ecosystem Coastal zones are located at the intersection of the two major ecosystems of ocean and land. Three-fifths of the global population lives in coastal zones, which provide ecological services and are the most economically developed areas. However, the coastal ecosystem is seriously threatened because of long-term human disturbances. In this study, the Ningde coastal zone was selected as the study area. The Pressure-State-Response model and Analytic Hierarchy Process were used to evaluate the ecosystem of the Ningde coastal zone in 2000, 2009, and 2014. The Markov model was used to predict the status of the Ningde coastal zone ecosystem in 2024. Finally, several problems, corresponding countermeasures, and suggestions in the management of the Ningde coastal ecosystem were proposed. The ecosystem status in the coastal zone of Ningde improved continuously from 2000 to 2009. Although the status of the ecosystem declined slightly from 2009 to 2014, it currently remains fair. The status of the Ningde coastal ecosystem will be worse in 2024 than that in 2014, indicating that an increase in the area of forest and grassland and a decrease in the area of dry land and paddy field is not effective. The main problems affecting the coastal ecosystem of Ningde were that the area of aquaculture was blindly expanded and the role of water bodies in protecting the environment was ignored. The forest ecosystem in the coastal zone of Ningde was vulnerable to external disturbance owing to the single species of forest trees and the lack of a large area of land suitable for forests. from ecosystems on a continuous basis [7]. At present, keeping ecosystems healthy has become a widespread issue of social concern. The concept of a healthy ecosystem was first defined by Rapport et al. [3]. It usually refers to a certain type of ecosystem that can maintain a good organizational structure for a long time, meet human material life and ecological needs, and recover from stress. The integrity, stability, and sustainability of a healthy ecosystem are considered the ultimate environmental management goals [8]. Therefore, evaluating and predicting the status of the ecosystem can effectively identify the crisis of the ecosystem, which is greatly significant to sustainable development. Coastal zones are transition zones between the ocean and the land. They have become some of the most concentrated areas in the world owing to the rich natural resources and superior geographic location [9]. In recent years, the economic development of coastal areas resulted in structural damage and reduced the functions of coastal ecosystems [10]. In addition, the degradation of coastal ecosystems has led to the endangerment and extinction of some species in coastal zones [11]. Therefore, it is particularly important to evaluate and predict the status of coastal ecosystems that will help us achieve coordinated development of the society and economy of coastal areas and maintain the stability of coastal ecosystems. The coastal zone of Ningde is the most economically developed and dynamic frontier zone of Ningde City. The gross domestic product of this zone soared from 14.27 billion yuan in 2000 to 192.10 billion yuan in 2019, increasing nearly 14 times [12][13][14]. The population of the zone has proliferated from 1.93 million to 2.04 million. By the end of 2019, the permanent population in the coastal zone accounted for 70% of the total population in Ningde City. 
The population density has also increased by 33 persons/km 2 in 14 years. The rapid population growth is susceptible to destroying the ecosystem in the zone. The coastal zone of Ningde City was selected as the study region in this work. The Pressure-State-Response (PSR) model was used to construct an indicator system for ecosystem evaluation, and the Analytic Hierarchy Process (AHP) was used to determine the weights of different indicators. Remote sensing and a geographic information system were used to obtain the values of the evaluation indicators, which were then applied to the ecosystem evaluation. Finally, the Markov model was used to predict the ecosystem status of the Ningde coastal zone, and some countermeasures and suggestions were put forward. This study aimed to provide beneficial support to the restoration of this ecosystem and the rational development of the resources in the Ningde coastal zone. Also, this study provides a reference for ecosystem protection in other coastal areas. Overview of the Research Area The Ningde Coastal Zone is a part of the Fujian province, which is located on the southeastern coast of China. It consists of four cities, viz. Jiaocheng, Fu'an, Fuding, and Xiapu (Fig. 1). Furthermore, this zone is located in the middle of the three major economic zones of the Yangtze River Delta, the Pearl River Delta, and Taiwan in China, it covers 6,253 km 2 , and it has a mid-subtropical maritime monsoon climate. In Author Copy • Author Copy • Author Copy • Author Copy • Author Copy • Author Copy • Author Copy • Author Copy • Author Copy 2019, the annual average temperature was between 16.00ºC and 20.70ºC, the annual rainfall was between 1193.70 mm and 2018.80 mm, and the annual sunshine hours were between 1401.60 h and 1707.20 h [14]. It is dominated by mountains and hills, accounting for 73.3% of the total land area. The mainland coastline is 878.16 km long, accounting for one-third of the Fujian province. The sea area is 44,500 km 2 , accounting for one-third of that of the province [14]. The Ningde coastal zone is rich in aquatic resources, with more than 600 types of aquatic fauna. It includes more than 500 varieties of fish, 60 species of shrimps and crabs, 70 species of shellfish, and more than ten species of algae. It is rich in large yellow croaker, prawns, grouper, erdu cockles, and sword clams. The artificial propagation and nursery technology of large yellow croaker in this area has reached the leading international level. Data Sources and Data Processing The images of Landsat-5 TM (2000 and 2009) and Landsat-8 OLI (2014) of the Ningde coastal zone were used as the basic data. These remote sensing data were downloaded from the Geospatial Data Cloud (http:// www.gscloud.cn/). The steps for processing these data were as follows. First, ENVI 5.1, was used to process the remote sensing images based on a 1:50,000 topographic map. Second, interpretation signs of the coastal land-use types were established depending on the color tone, shape, texture, and field survey. According to "Classification of Land Use Status" (GB/T21020-2017) and the geographic characteristics of the Ningde coastal zone, the land-use types were divided into forest and grassland, rivers, lakes, construction land, reservoirs, aquaculture, dry land, paddy field, and other land using object-oriented classification methods. Third, the accuracy tests of the classification results in 2000, 2009, and 2014 were carried out. 
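The accuracy test mentioned in the third step is conventionally computed from a confusion matrix of validation samples; a minimal sketch with hypothetical counts (the actual matrices are not reproduced in the text):

```python
import numpy as np

def accuracy_and_kappa(conf):
    """conf[i, j] = validation pixels of true class i assigned to class j."""
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    po = np.trace(conf) / n                                      # overall accuracy
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / n ** 2    # chance agreement
    return po, (po - pe) / (1.0 - pe)                            # overall accuracy, kappa

# Hypothetical 3-class matrix (e.g. forest/grassland, water, construction land)
conf = [[95, 3, 2],
        [4, 88, 8],
        [1, 6, 93]]
po, kappa = accuracy_and_kappa(conf)
print(f"overall accuracy = {po:.3f}, kappa = {kappa:.3f}")
```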
The overall accuracy was 89.3%, 94.5%, and 92.5%, and the Kappa coefficients were 0.82, 0.87, and 0.84, respectively. These values were higher than 0.80 (the minimum allowable accuracy); thus, the classification results met the accuracy requirements of land-use change monitoring and could be used to express the land-use status of the study area. Fourth, the small maps generated after classification were merged into adjacent large spots using the clustering statistics and removal analysis of ENVI. Finally, the area and transfer matrix of each land use type were obtained using ENVI. PSR Model The PSR model was initially developed by Rapport [15] and is one of the most widely used models in ecosystem evaluation. The specific index framework is depicted in Fig. 2. The model includes natural environmental factors and human influence factors. In this model, the multivariate methods were used to determine evaluation indicators. Therefore, the status of the ecosystem was evaluated scientifically, comprehensively, and meticulously. The model is divided into three types of indicators: pressure indicators (P), state indicators (S), and response indicators (R). These include the current Fig. 2. Framework of PSR model. state of the ecosystem and the natural environment, economic structure, etc., which can be regarded as the endogenous factors of ecosystem change. The other indicators are characterized by the regulation measures taken by humans for the sustainable development of the ecosystem after a change in the ecosystem state, regarded as response indicators. In this study, a model was constructed to evaluate the Ningde coastal ecosystem based on the PSR model, AHP, and the Ningde coastal zone. The model includes three levels: the target, guideline, and indicator levels ( Table 1). The description of each indicator factor is as follows. Population density (PD, I 1 ). The specific calculation formula of PD is as follows: ...where PD represents population density. A greater PD value represents greater pressure on the ecosystem caused by the population. S represents the number of people, and it was collected from the Ningde Statistical Yearbook; A represents the total area of the study area, and it was obtained from the remote sensing images. Human disturbance (I, I 2 ). The specific calculation formula of I is as follows. (2) ...where I represents human interference, and it refers to the ratio of the construction land area to the total land area in the study area [16]. A greater value of human disturbance indicates greater pressure on the ecosystem from the construction land used by humans. CNA represents the area of the construction land; A represents the total study area. The values of CNA and A were extracted from the remote sensing images. I = CNA/A Land reclamation rate (LRR, I 3 ). The specific calculation formula of LRR is as follows: ...where LRR is the land reclamation rate, which reflects the ability of the land to continuously provide resources required for human survival [17]. A greater value indicates greater pressure on the ecosystem caused by less available land. CDA is the cultivated land area; A is the total study area. CDA in this study is the sum of dry land and paddy field, which were obtained from the remote sensing images. Normalized vegetation index (NDVI, I 4 ). Studies have revealed that NDVI demonstrates a significant positive correlation with the production of vegetation [18][19]. Therefore, NDVI can be used to indicate the vitality of an ecosystem. 
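Before the NDVI formula that follows, a minimal sketch of the three pressure indicators defined above. The printed equations for PD and LRR did not survive extraction, so the code assumes the simple ratios implied by the variable descriptions (PD = S/A, I = CNA/A as written, LRR = CDA/A); the land-use areas are hypothetical.

```python
# Population and total area follow the text; the land-use areas are placeholders
S = 2_040_000        # resident population, persons
A = 6253.0           # total study area, km^2
CNA = 310.0          # construction land area, km^2 (hypothetical)
CDA = 520.0 + 430.0  # cultivated land = dry land + paddy field, km^2 (hypothetical)

PD = S / A      # I1: population density, persons / km^2
I2 = CNA / A    # I2: human disturbance (construction land share)
LRR = CDA / A   # I3: land reclamation rate

print(f"PD = {PD:.0f} persons/km^2, I = {I2:.3f}, LRR = {LRR:.3f}")
```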
The calculation formula for the NDVI is as follows: ...where NIR is the reflection value in the near-infrared band, and R is the reflection value in the red light band. The value of NDVI is between −1 and 1. The negative value indicates that the ground is covered by clouds, water, snow, etc., which is highly reflective to visible light; 0 indicates that there is rock or bare soil, etc., and NIR and R are approximately equal; a positive value indicates that there is vegetation coverage. The value of NDVI rises with the increase in vegetation coverage. Shannon Diversity Index (SHDI, I 5 ). SHDI reflects the complexity of ecosystem structure. It is one of the representative indexes of landscape heterogeneity [20], and its formula is as follows: ...where P i is the existence probability of patch type i in a landscape, and m is the total number of patch types. When the value of SHDI is 0, there is only one kind of patch in a landscape. When the value becomes greater, the number of patch types shows an increasing trend, or . E represents the degree of even distribution of patches in a landscape. It has a negative correlation with dominance [21], and its formula is as follows: ...where SHDI max is the maximum value of the Shannon diversity index. SHDI was obtained by importing the classification maps into the Fragstats software. Average patch area (APA, I 7 ). APA refers to the average area of all patches or a certain type of patch in a study area. A greater value indicates lower fragmentation of the landscape [22]. Its formula is as follows: ...where the unit of APA is km 2 , A represents the total land area of the study area, and NP represents the number of patches. This index was also obtained by importing the classification maps into Fragstats software. Resilience indicator (F, I 8 ). F reflects the ability of an ecosystem affected by pressure to maintain or restore the stability of its structure and function. This indicator is one of the most important indicators that can suggest the state of the ecosystem [23]. Its specific calculation formula is as follows:. ...where F is the resilience of the ecosystem, A i is the area of land use type I, F i is the resilience coefficient of land use type I, and A is the total study area. The resilience coefficients of each land-use type in this study were obtained from previous studies [23] ( Table 2). Ecosystem services value (V, I 9 ). Ecosystem services refer to the material and living environment provided by ecosystems to human society. The pressure on the ecosystem will affect its ability to provide ecosystem services to human society [24]. Therefore, ecosystem services value was selected as the response indicator of ecosystem evaluation. The formula for calculating the value of ecosystem services is as follows: ...where V is the ecosystem services value, A i is the area of the land use type I, and V i is the ecosystem service value per unit area of land use type i. The ecosystem service value per unit area of each land-use type in the Ningde coastal zone was obtained from previous studies [25][26] (Table 3). Then, the ecosystem services value of the Ningde coastal zone was calculated using the area of each land-use type. Indicator Weight of Ecosystem Evaluation The above indicators were assigned to reasonable weights owing to the different influences of these indicators on the ecosystem. This is of great significance for improving the accuracy of the evaluation. The AHP was used to determine the weights of these indicators. 
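Before turning to the AHP weighting, a sketch of the state and response indicators described above. NDVI, SHDI, and the evenness index are written in their standard forms because the printed equations are garbled in this extraction; F and V follow the area-weighted sums given in the variable descriptions, with the resilience coefficients of Table 2 and placeholder service values standing in for Table 3.

```python
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def shdi(areas):
    p = np.asarray(areas, dtype=float) / np.sum(areas)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def evenness(areas):
    return shdi(areas) / np.log(len(areas))   # assumes SHDI_max = ln(number of patch types)

# Hypothetical areas (km^2); resilience coefficients follow Table 2, service values are placeholders
areas  = {"forest_grass": 4200.0, "water": 180.0, "aquaculture": 350.0,
          "dry_paddy": 950.0, "construction": 310.0}
f_coef = {"forest_grass": 0.8, "water": 1.0, "aquaculture": 0.6,
          "dry_paddy": 0.5, "construction": 0.3}
v_coef = {"forest_grass": 2.0, "water": 4.0, "aquaculture": 0.9,
          "dry_paddy": 0.7, "construction": 0.0}

A = sum(areas.values())
F = sum(areas[k] * f_coef[k] for k in areas) / A    # resilience: area-weighted coefficients
V = sum(areas[k] * v_coef[k] for k in areas)        # services value: area x per-area value
APA = A / 1250                                      # average patch area, NP = 1250 assumed

print(f"NDVI example = {ndvi(0.45, 0.12):.2f}")
print(f"SHDI = {shdi(list(areas.values())):.3f}, E = {evenness(list(areas.values())):.3f}")
print(f"F = {F:.2f}, V = {V:.0f} (in the units of the per-area values), APA = {APA:.2f} km^2")
```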
The AHP, which is a multi-level weight analysis method, was proposed by the American operations researcher Satty [27] in the late 1970s. It integrates people's subjective judgments and objective data, Table 2. Resilience coefficient of the land use types in Ningde coastal zone. Landscape types Coefficients Description Rivers, lakes, and reservoirs 1 Extremely important landscape types, which play a significant role in maintaining the stability and resilience of the ecosystem. Forestland and grassland 0.8 Aquaculture 0.6 Play an important role in maintaining the resilience of the ecosystem and can provide important material and activity sites for humans. The resilience of the ecosystem would decline if these landscape types cannot be properly protected and utilized. Dry land and paddy field 0.5 Construction land 0.3 Other land 0.1 Contributes relatively little to the resilience of the ecosystem. and it is a concise, systematic analysis and evaluation method that combines qualitative and quantitative analysis. The main principle of this method is to construct an analytic hierarchy model and compare each indicator pair by pair, calculate the weight of each evaluation indicator, and conduct a consistency test to analyze and judge the evaluation target. This method has high practicability and effectiveness when dealing with relatively complex and fuzzy problems. The weights of the evaluation indicators of the Ningde coastal ecosystem were determined by the AHP and are shown in Table 4. Comprehensive Index of Ecosystem Evaluation Owing to the differences in the indicator units, it is impossible to compare the indicators directly. Therefore, the extreme value normalization method was adopted to non-dimensionalize each indicator data (Eq. 10). ...where X is the value of each indicator after the normalization, X i represents the indicator value of item I, X max is the maximum value of item I, and X min is the minimum value of item i. The comprehensive index value of ecosystem evaluation was calculated using Eq. 11. ...where SHI is a comprehensive index of the ecosystem, X is the normalized indicator value, and W i is the weight value of item i. The smaller the SHI value, the worse is the condition of the ecosystem. In this study, the ecosystem evaluation criterion was divided into five levels based on the Ningde coastal zone (Table 5). Table 5. Criterion and description of the ecosystem evaluation in Ningde coastal zone. Comprehensive index Description Good >1.8 The landscape structure of the ecosystem is reasonable and shows a good state. The restoration ability of the ecosystem is strong. The ecosystem function is perfect and stable. It is suitable for the survival and development of humans. The ecosystem has a relatively reasonable landscape structure with strong resilience. It is still healthy, stable, and suitable for the survival and development of mankind. In this situation, the ecosystem is in a state of dynamic equilibrium. The landscape structure of the ecosystem is reasonable, but its resilience is general. The status of the ecosystem is close to the ecological threshold, but it is still healthy. It can perform the basic function of ecosystem services. Constraints factors unfit for humans' survival exists in the ecosystem. It can maintain the dynamic equilibrium state of the ecosystem. There are defects in the landscape structure with poor resilience in the ecosystem. The ecological function of the ecosystem cannot maintain its basic needs. 
There exist many factors restricting human survival. The dynamic balance of the ecosystem is seriously threatened. <1.2 The ecosystem has an extremely unreasonable landscape structure with poor resilience and serious fragmentation of vegetation patches. The dynamic balance of the ecosystem has been destroyed. Different Scenarios Forests and grasslands not only provide large amounts of materials and energy for humans but also have many ecological functions, such as maintaining water and soil, purifying the environment, maintaining biodiversity, degrading waste, and regulating climate. In addition, they provide social functions such as tourism, leisure, and medical care. The results shown in this paper suggest that the ecosystem services value of forest and grassland in the Ningde coastal zone is much greater than that of other land-use types. The increase in forest and grassland area is bound to increase the possibility of the healthy development of the ecosystem in the study area significantly. Additionally, the municipal government of Ningde will continue to increase the area of forest and grassland in the future based on the policy of returning farmland to forests promulgated by the central government. Furthermore, the transfer of forest and grassland mainly comes from dry land and paddy field. Therefore, the different added values of the transfer probability of these three different land-use types to forest and grassland were set as different scenarios to predict the status of the Ningde coastal zone ecosystem. The three different scenarios were as follows: Scenario 1: Assuming that the transition probability of dry land and paddy field to forest and grassland increases by 0%. Scenario 2: Assuming that the transition probability of dry land and paddy field to forest and grassland increases by 30%, whereas the transition probability of dry land and paddy field to other land use types remains unchanged. The forest and grassland do not transfer to any other land types. Scenario 3: Scenario 3 is similar to scenario 2, except that the increased value of the transition probability of dry land and paddy field to forest and grassland is set as 50%. Markov Model The Markov model is a special random process from one state to another in each time stage. The primary Markov model is a model system that the random distribution of the next state depends only on the current state, while it does not depend on the previous state. This feature of the Markov model is suitable for the study of the change of land use structure. It is necessary to clarify the initial transition probability matrix of land use type before using the Markov model. The mathematical expression of the initial transition probability matrix is as follows: ... ... ... ... ... ... ... (12) ...where P ij is the probability that the land type i is transformed into the land type j from the beginning to the end, and n is the number of land-use types of the study area. P ij also needs to meet the following conditions: ... ... The Markov model was obtained based on the nonfollow-up influence characteristic of the Markov process and the probability formula under Bayesian conditions. (14) ...where P (n) is the state probability at any time, P (n-1) is the initial state probability, and P ij is the probability that the land type i is transformed into the land type j. The Markov model relied entirely on the initial transition probability matrix to predict the state of future events. 
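A minimal sketch of this projection step and of a scenario-style adjustment of the transition matrix is given below. The three-class matrix and areas are hypothetical, and "increases by 30%" is read here as shifting an absolute amount of probability mass from the donor class's persistence term, which is only one possible interpretation.

```python
# Sketch of the Markov projection and a scenario-style adjustment of the transition matrix.
# Classes: (forest/grassland, dry land + paddy field, all other types). Values are hypothetical.
import numpy as np

P = np.array([
    [0.95, 0.03, 0.02],
    [0.10, 0.85, 0.05],
    [0.05, 0.05, 0.90],
])
areas_2014 = np.array([2700.0, 2900.0, 1400.0])   # km^2, hypothetical

def project(areas, transition, steps=2):
    """Repeated one-step update areas_n = areas_{n-1} @ P (e.g. two 5-year steps from 2014 to 2024)."""
    for _ in range(steps):
        areas = areas @ transition
    return areas

def add_to_forest(transition, extra, donor=1, forest=0):
    """Shift extra probability from the donor class's persistence towards forest/grassland,
    keeping its transitions to the other classes unchanged so the row still sums to one."""
    Q = transition.copy()
    Q[donor, forest] += extra
    Q[donor, donor] -= extra
    return Q

baseline = project(areas_2014, P)                       # scenario 1 style projection
scenario2 = project(areas_2014, add_to_forest(P, 0.30)) # scenario 2 style projection
```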
Therefore, the initial transition probability matrix of the land-use change in the Ningde coastal zone from 2009 to 2014 was obtained using the land use classification results of the study area in 2009 and 2014 with ENVI software. Then, the area of the land use types of the Ningde coastal zone in 2024 was obtained using the initial transition probability matrix and Markov model. The prediction area of each land-use type in 2024 was brought into the PSR model for calculation. Finally, the prediction results of the ecosystem status of the Ningde coastal zone was achieved in 2024. Area Changes of the Land Use Types in Ningde Coastal Zone As can be seen from Fig. 3, forest and grassland, dry land, and paddy field were the main types of land use in the coastal zone of Ningde; they accounted for 38.80%, 31.47%, and 18.52% of the entire study area, respectively, while other land-use types accounted for only 11. Pressure Indicator As shown in Table 6, the value of population density of the coastal zone of Ningde increased from 317 persons/km 2 in 2000 to 332 persons/km 2 in 2009, and then to 350 persons/km 2 in 2014. Therefore, the population density in the coastal zone of Ningde continued to increase in those 14 years. The value of the human disturbance dropped from 3.66% in 2000 to 2.57% in 2014, which indicated that the increase in population density did not lead to an increase in human disturbance. The value of the land reclamation rate decreased from 51.99% in 2000 to 40.29% in 2014; i.e., the occupation of cultivated land to the land of the study area gradually decreased, and the available land resources gradually increased. The pressure of land occupation on the ecosystem gradually decreased during the study period. As shown in Fig. 4, the value of the pressure indicator dropped rapidly from 1.624 in 2000 to 1.261 in 2009, before rising to 1.323 in 2014. During the 14 years, the pressure indicator showed a significant decline first and then a slight rise. This trend indicated that the pressure on the coastal ecosystem of Ningde decreased rapidly first and then tended to remain stable during the study period. Table 7 shows that the NDVI value of the Ningde coastal zone increased from 0.038 in 2000 to 0.084 in 2014. This indicated that the vegetation coverage of the Ningde coastal zone increased and the ecosystem vitality improved in the 14 years. The SHDI value of the study area dropped from 1.171 in 2000 to 1.158 Response Indicator As shown in Fig. 6, the ecosystem services of forest and grassland had the highest average value of 26277.76×10 6 yuan across 2000, 2009, and 2014. The three-year average value of ecosystem services of dry land, aquaculture, and paddy field was 9047.51×10 6 yuan, 4833.40×10 6 yuan, and 5324.84×10 6 yuan, respectively. The other land use types had a low value of ecosystem services. From 2000 to 2014, the ecosystem services value of forest and grassland, and aquaculture showed an upward trend. Their value increased by 11341.57×10 6 yuan and 2364.35×10 6 yuan, respectively. In contrast, the value of dry land and paddy field decreased by 1981.03×10 6 yuan and 2430.22×10 6 yuan, respectively. Fig. 7 shows that the value of the response indicator of the Ningde coastal ecosystem increased from 0.147 in 2000 to 0.645 in 2014, which showed that external disturbances did not damage the coastal ecosystem of Ningde. The ability of the Ningde coastal ecosystem to provide services to humans sharply improved and then plateaued during the study period. 
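The comprehensive index reported in the next subsection combines the normalized indicator values with the AHP weights (Eqs. 10 and 11). A minimal sketch of that computation is shown below; the indicator values and weights are hypothetical placeholders, not the AHP weights of Table 4.

```python
# Sketch of Eq. 10 (extreme-value normalisation) and Eq. 11 (comprehensive index SHI).

def normalise(value, vmin, vmax):
    """X = (X_i - X_min) / (X_max - X_min): map an indicator onto [0, 1]."""
    return (value - vmin) / (vmax - vmin)

def comprehensive_index(normalised_values, weights):
    """SHI = sum(X_i * W_i); smaller SHI indicates a worse ecosystem condition."""
    return sum(x * w for x, w in zip(normalised_values, weights))

raw     = [350.0, 0.026, 0.40]   # e.g. PD, human disturbance, LRR for one year (hypothetical)
mins    = [300.0, 0.020, 0.35]
maxs    = [400.0, 0.040, 0.55]
weights = [0.10, 0.12, 0.08]     # hypothetical weights, one per indicator

xs = [normalise(v, lo, hi) for v, lo, hi in zip(raw, mins, maxs)]
print(round(comprehensive_index(xs, weights), 3))
```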
Comprehensive Index of Ecosystem Evaluation The evaluation result was obtained using the data of ecosystem evaluation indicators and Eq. 11. As shown in Fig. 8 The combination of the above results with those in Table 5 shows that the coastal ecosystem of the Ningde was in a poor state in 2000. By 2009, the ecosystem in this area rose to a good state and then fell to a fair state in 2014. In short, the coastal ecosystem of Ningde was in a good state after 2009. The reason is that at the beginning of the 21 st century, China joined the World Trade Organization, and China's economy entered background. Ningde is also one of the key production areas for commercial forests designated by the state. The purpose is to vigorously develop commercial forestry based on not destroying the ecological environment. In this context, Ningde's commercial forests continued to increase. Simultaneously, the implementation of the forest ecological benefit compensation system and the policy of returning farmland to forests have resulted in a continuous increase in forest and grassland. Therefore, the vegetation coverage of the Ningde coastal zone has increased, and the resilience of the Ningde coastal ecosystem has been enhanced. The fragmentation of the landscape structure of the ecosystem has decreased, and the value of ecosystem services of the ecosystem has increased. Overall, the ecosystem status of the Ningde coastal zone has continued to improve because of the above reasons. Fig. 9 shows that dry land, paddy field, forest, and grassland will still be the main land-use types of the Ningde coastal zone in 2024. Forest and grassland will be the land-use types with the largest area. In scenario 1, the area of forest and grassland is 3,183.54 km 2 and it increases by 466.47 km 2 compared with the area of forest and grassland in 2014. The areas of forest and grassland in scenarios 2 and 3 are 4043.25 km 2 and 4113.78 km 2 and they increase by 1326.18 km 2 and 1396.71 km 2 , respectively, compared with the area of forest and grassland in 2014. Predicted Area of Land Use Types in Different Scenarios The dry land areas in scenarios 1, 2, and 3 are 1708.12 km 2 , 1243.88 km 2 , and 1208.46 km 2 , respectively. Compared with the dry land area in 2014, they decrease by 163.09 km 2 , 627.33 km 2 , and 662.75 km 2 , respectively. The area of the paddy field is smaller than that of the forest and grassland and dry land in 2024. The paddy field areas of scenarios 1, 2, and 3 are 798.21 km 2 , 525.37 km 2 , and 497.90 km 2 and they decrease by 261.61 km 2 , 534.45 km 2 , and 561.92 km 2 , respectively, compared with the area of the paddy field in 2014. The remaining land use types have a small area; thus, they do not significantly impact the Ningde coastal ecosystem. Table 8, the values of the pressure indicator of scenarios 1, 2, and 3 are 1.690, 1.360, and 1.323, respectively. The increase in the area of forest and grassland and the decrease in the area of dry land and paddy field will gradually reduce the pressure on the coastal ecosystem of Ningde. The value of the state indicator increases from 1.577 in scenario 1 to 2.069 in scenario 3, indicating that the increase in the area of forest and grassland and the decrease in the area of dry land and paddy field will improve the state of the coastal ecosystem of Ningde. However, the state indicator in scenario 3 in 2024 is lower than that in 2014. 
The value of the response indicator increases from 0.405 in scenario 1 to 0.640 in scenario 3, indicating that scenarios 2 and 3 will increase the value of coastal ecosystem services in 2024. It can also be seen from Table 8 that the value of the comprehensive index rises from 1.395 in scenario 1 to 1.517 in scenario 2 and finally increases to 1.547 in scenario 3. The above results combined with those in Table 5 show that in 2024, the Ningde coastal ecosystem under scenario 1 will be in a poor status. The ecosystem under both scenario 2 and scenario 3 will be in general status. Therefore, the status of the Ningde coastal ecosystem in 2024 will be worse than that in 2014. Such results show that simply increasing the area of forest and grassland and decreasing the area of dry land and paddy field cannot improve the health level of the Ningde coastal ecosystem. The current problems should be raised and resolved to improve the status level of the Ningde coastal ecosystem. The main issues affecting the Ningde coastal ecosystem were summarized based on the previous research results of our research group [28][29]. (1) The decrease in the water area and increase in aquaculture area have led to uncoordinated development of the ecosystem services in the Ningde coastal zone. The water area of the Ningde coastal zone decreased from 277.25 km 2 in 2000 to 215.46 km 2 in 2014. In contrast, the area of aquaculture increased from 76.24 km 2 in 2000 to 137.04 km 2 in 2014. Therefore, this study found that in the 14 years, the water area in the coastal area of Ningde has gradually decreased and partially converted into aquaculture owing to the pursuit of economic benefits and neglect of ecological and environmental protection. Finally, the material production value has improved, while hydrological regulation has decreased. These findings directly resulted in an unbalanced development of the ecosystem services of the Ningde coastal zone. (2) The disorderly development of the aquaculture industry has destroyed the coastal ecosystem of Ningde. The Ningde coastal zone is an important production area for aquatic products in the Fujian Province. Ningde's aquaculture industry is booming, and has achieved good economic benefits. The revenue of aquaculture in the coastal zone of Ningde increased from 425.17 million yuan in 2000 to 1617.78 million yuan in 2014. However, owing to the lack of reasonable management and guidance, the aquaculture industry has fallen into a disorderly state, such as blindly occupying waters to expand the aquaculture area and lacking reasonable planning in layout. This has resulted in the waste of resources and the destruction of the ecological environment. (3) The land area suitable for the forest was small in the Ningde coastal zone. The forest species in the Ningde coastal ecosystem was unitary, so it was susceptible to external disturbance. Although forest and grassland were the main land use types in the coastal zone of Ningde, the land suitable for the forest was still scarce. Based on the research of Xing [30], it can be found that only 20% of the entire forest area of the coastal zone of Ningde was highly suitable for forest. The coastal forests have been dominated by Masson's pine and casuarina for a long time. At present, most of these tree species are close to physiological maturity. Due to the difficult regeneration of the second generation, the functions of water storage and soil conservation of these tree species were gradually reduced. 
In the 1970s, Spartina alterniflora was introduced from abroad to protect the tidal flats. However, it threatened the survival of mangroves that grew on the coastal zone of Ningde due to the prosperous reproduction of Spartina alterniflora. The following countermeasures and suggestions were proposed for the above problems in the Ningde coastal zone. (1) The aquaculture industry of the Ningde coastal zone should be reasonably planned. The management should be strengthened by law, an association for the regulation of farming industry should be established, and the supervision of practitioners should focus on strengthening. The construction of environmental regulations should be accelerated, and publicity should be increased to raise public awareness about environmental protection. The development and utilization of land resources should have a scientific basis and should be reasonable, and the law should be followed, which is one of the necessary conditions to achieve scientific land use and promote the transformation of the regional economic development model. It will also accelerate the improvement of the protection of the ecological environment and resource development, and improve the formation of regulations and systems for energy conservation, emission reduction, and circular economy. The local government should scientifically formulate the corresponding policies and regulations to regulate the behavior of economic activities. This would give full play to the fundamental role of the market in resource allocation and establish an effective mechanism for ecological construction and protection. It would establish a comprehensive decision-making body, strengthen regional and inter-departmental cooperation, and coordinate the participation of industry and commerce, land, environmental protection, taxation, urban and rural construction, and other departments in decision-making to ensure the enforceability of regulations and policies. Extensive and in-depth publicity and education work should be carried out, and comprehensive and in-depth popularization of the Marine Protection Law, the Sea Use Management Law, the Fisheries Law, and other relevant laws and regulations should be conducted. Publicity activities should be carried out for the protection of the marine ecological environment. Provision of full play to the role of public opinion supervision by the news media should be done to actively report and encourage the reporting of various acts that violate the ecological environmental protection law. Consciousness should be enhanced regarding the environment, and the use of environmentally friendly products should be increased; the use of public transportation or bicycles should be advocated; the construction of waste recycling and resource reuse systems should be strengthened; the public should be guided to develop habits of living and consumption that are conducive to environmental protection. (2) Spartina alterniflora should be controlled using a combination of physical, biological, and chemical methods. (3) The area of mangroves, known as coast guards should be expanded. Its afforestation rate should be increased, and its pest control should be strengthened. (4) Mixed forests should be developed to enhance the stability of the forest ecosystem of the Ningde coastal zone. Schima superba and Masson's pine can be cultivated in areas with strong winds and poor soils along the coast of Ningde to exert their ability to conserve water, fix sand, and prevent wind. 
Castanopsis carlesii, Castanea henryi, and Aleurites montana should be planted in the Ningde coastal zone because of their ability to promote forest vegetation succession. Myrica rubra can form a fireproof forest belt, so it should be cultivated to enhance the stability of the forest ecosystem in the coastal zone of Ningde. Forest closure can enable the forest to renew itself, with appropriate human interference to improve the quality of forest stands and biodiversity. On the premise of strengthening the protection of forest resources, we should strengthen the cultivation and management of forest resources and appropriately increase the construction of ecological public welfare forests, coastal protection forests, and reforestation projects to provide a strong guarantee for the health of the regional ecosystem. We should adjust and optimize the structure of tree species and tree age composition, improve the overall productivity of forest resources, and carry out the construction project of forest protection systems in the coastal zone. The role of forests in the ecosystem should be fully utilized, and the ecological efficacy of forests should be strengthened. We will implement a phased turnover of forest stands with a single structure to improve the self-renewal and repair capacity of the forest ecosystem, and eventually, the forest ecosystem stability will be enhanced primarily by a compound mixed forest model. Conclusion The main land use types in the coastal zone of Ningde are forestland, grassland, dry land, and aquaculture. During the study period, the areas of forestland, grassland, aquaculture, and reservoirs demonstrated an increasing trend, while the areas of dry land, paddy fields, lakes, construction land, rivers, and other lands demonstrated a decreasing trend. The population density of the Ningde coastal zone gradually increased, while the human disturbance and land reclamation rate gradually decreased. The pressure indicators of the Ningde coastal zone demonstrate a decreasing trend and tend to be stable. During the study period, NDVI values and resilience indicators of the Ningde coastal zone showed an increasing trend, while the evenness index and average patch area showed a decreasing trend, indicating that the ecosystem service function of the Ningde coastal zone gradually increased and its structure gradually stabilized, and the state of the Ningde coastal zone ecosystem gradually increased. The ecosystem service value of forestland and grassland was the highest, followed by dry land, aquaculture, and paddy fields. During the study period, the ecosystem service values of forestland, grassland, and aquaculture showed an increasing trend, while those of dry land and paddy fields showed a decreasing trend. The total ecosystem service value of the Ningde coastal zone showed an increasing trend, indicating that the ability of the Ningde coastal zone ecosystem to provide services to humans steadily increased during the study period. In conclusion, the state of the Ningde coastal zone ecosystem improved and stabilized. In 2024, the main land use types of the Ningde coastal zone are still dry land, paddy fields, forestland, and grassland. Among them, the area occupied by forestland and grassland was the largest. 
Increasing the area of forestland and grassland can gradually reduce the pressure on the Ningde coastal zone ecosystem and improve its status and service value; however, the predicted status of the Ningde coastal zone ecosystem in 2024 is poorer than that in 2014, indicating that simply increasing the area of one land-use type cannot improve the status of the whole ecosystem. Overall, improving the ecosystem status is a comprehensive and complex process.
2021-10-22T16:03:28.126Z
2021-09-01T00:00:00.000
{ "year": 2021, "sha1": "6a6b629ef5e955e414d20879db7cfa66692d2fff", "oa_license": null, "oa_url": "http://www.pjoes.com/pdf-136181-68150?filename=Evaluation%20and%20Prediction.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b24948c9788eedf030fe52b92d1e3cfee4222bc4", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
258456342
pes2o/s2orc
v3-fos-license
Real-Time Machine Learning-Based Driver Drowsiness Detection Using Visual Features Drowsiness-related car accidents continue to have a significant effect on road safety. Many of these accidents can be eliminated by alerting the drivers once they start feeling drowsy. This work presents a non-invasive system for real-time driver drowsiness detection using visual features. These features are extracted from videos obtained from a camera installed on the dashboard. The proposed system uses facial landmarks and face mesh detectors to locate the regions of interest where mouth aspect ratio, eye aspect ratio, and head pose features are extracted and fed to three different classifiers: random forest, sequential neural network, and linear support vector machine classifiers. Evaluations of the proposed system over the National Tsing Hua University driver drowsiness detection dataset showed that it can successfully detect and alarm drowsy drivers with an accuracy up to 99%. Introduction Drowsiness is a major concern with respect to road safety. Drivers' unconsciousness due to microsleep can frequently lead to destructive accidents. Falling asleep at the wheel is usually related to lack of sleep, exhaustion, or mental health problems. In the UAE, the ministry of interior recorded 2931 car crashes in 2020. The number increased in 2021 to 3488 records. The majority of these traffic accidents were caused by distracted driving due to drowsiness, sudden swerving, or failure to maintain a safe distance between vehicles [1]. In this situation, it is crucial to exploit new technologies to plan and design systems that can track drivers and estimate their level of attention while driving. As multiple countries are concerned regarding this issue, researchers worldwide worked on building Driver Drowsiness Detection (DDD) systems that are capable of detecting drivers' drowsiness signs in the early stages. According to the literature, drowsiness detection systems can be grouped into three categories based on the measures that are used to detect the drowsiness signs [2][3][4][5]: biologicalbased, vehicle-based, and image-based systems. In the first category, biological-based measures rely on monitoring the body's physiological signals including, ElectroEncephaloGraphy (EEG), ElectroCardioGraphy (ECG), ElectroMyoGraphy (EMG), Electro-OculoGraphy (EOG) signals, and blood pressure [6][7][8][9]. In this type of system, drowsiness is determined by detecting the signal's deviation from the standard state's characteristics and analyzing if the new signal indicates drowsiness. In the second category, vehicle-based measures depend on monitoring variations in the car's movement patterns through different sensors' installed to measure various vehicle and street parameters. To infer the drowsiness level, Related Work The problem of driver drowsiness detection has been studied by many researchers worldwide. The proposed approaches to tackle the problem can be mainly differentiated based on the drowsiness indicative features used [2]. Driver drowsiness indicative features obtained from body signs measurements (such as EEG, ECG, PPG, and EMG) are referred to as biological features, which, although accurate in detecting drowsiness, are inconvenient for the driver as they involve the use of sensors attached to the driver's body [2,[6][7][8][9]. 
Other widely used driver drowsiness indicative features are based on vehicle driving patterns where measurements such as the steering wheel angle and lane departure frequency are related to the driver drowsiness levels [2]. Although convenient for the driver, the literature shows that the accuracy of this method is not high [10,11]. The third drowsiness indicative features are image based. They are usually obtained from videos monitoring the driver's behavior to extract features relating to the driver's eye, mouth, and head movements [2]. They are more convenient for the driver than the biological-based ones as they do not involve attaching equipment or sensors to the driver's body. Image-based systems are the most commonly used techniques for detecting driver drowsiness. Facial parameters such as the eyes, mouth, and head can be used to identify many visual behaviors that fatigued people exhibit. Such drowsy behaviors can be recorded by cameras or visual sensors. Then, from these records, several features can be extracted, and by using computer vision techniques they are analyzed to visually observe the driver's physical condition in order to detect drowsiness in a non-invasive manner. Broadly, imagebased systems are categorized into three categories depending on the observation of the eyes, mouth, and head movements [2]. Various image-based features have been used in the literature. These include blink frequency, maximum duration of closure of the eyes [13], percentage of eyelid closure [18], eye aspect ratio [19], eyelids' curvature [17], yawning frequency [20], MAR [21], mouth opening time [22], head pose [23], head-nodding frequency [4], and head movement analysis [24]. Combinations of these features have been considered as well [20,21,25]. In this section, we provide a detailed explanation of the features that are used in our proposed system. The most common features used to detect drowsiness in image-based systems are extracted from the eye region. Several researchers proposed the EAR [26][27][28] as a simple metric to detect eye blinking using facial landmarks. It is utilized to estimate the eye openness degree. A sharp drop in the EAR value leads to a blink being recorded. Maior et al. [27] developed a drowsiness detection system based on the EAR metric. They calculated the EAR values for consecutive frames and used them as inputs for machine learning algorithms including the multilayer perceptron, RF, and SVM classification models. Their evaluation results showed that the SVM performed the best with 94.9% accuracy. The EAR metric was also used in [29], who explored drowsiness as an input for a binary SVM classifier. The model detected the driver's drowsiness state with 97.5% accuracy. Mouth behavior is a good indicator of drowsiness as it provides useful features for DDD. In [30], the authors proposed to track mouth movement to recognize yawning as a drowsiness indicator. In their experiment, they used a dataset of 20 yawning images and over 1000 normal images. The system used a cascade classifier to locate the driver's mouth from the face images, followed by an SVM classifier to identify yawning and alert the driver. The final results gave a yawning detection rate of 81%. Another mouth-based feature is the mouth opening ratio [29]. It is also referred to as the MAR [21]. It describes the opening degree of the mouth as an indicator for yawning. This feature was fed to an SVM classifier in [29], achieving an accuracy of 97.5%. 
Another useful parameter for detecting drowsiness in image-based systems is head movements which can signal drowsy behavior. Accordingly, they can be used to derive features that are useful for detecting drowsiness using machine learning. Such head features include head-nodding direction, head-nodding frequency [4], and head pose [31]. In [31], the forehead was used as a reference to detect the driver's head pose. Infrared sensors were used in [24] to follow the head movement and detect the driver's fatigue. In [32,33], before head position analysis was performed, a special micro-nod detection sensor was used in real-time to track the head pose feature in 3D. Moujahid et al. [20] presented a face-monitoring drowsiness-detection system that captured the most prominent drowsiness features using a hand-crafted compact face texture descriptor. Initially, they recorded three drowsiness features, namely head nodding, yawning frequency, and blinking rate. After that, they applied pyramid multi-level face representation and feature selection to achieve compactness. Lastly, they employed a non-linear SVM classifier that resulted in an accuracy of 79.84%. Dua et al. [34], introduced a driver drowsiness-detection architecture that used four deep learning models: ResNet, AlexNet, FlowImageNet, and VGG-FaceNet. These models are extracted from the driver's footage features that include head gestures, hand gestures, behavioral features (i.e., head, mouth, and eye movements), and facial expressions. Simulated driving videos were fed to the four deep learning models. The outputs of the four models were fed to a simple averaging ensemble algorithm followed by a SoftMax classifier, which resulted in 85% overall accuracy. Methodology The methodology followed to develop the proposed DDD system is presented in detail in this section. Firstly, the system design is illustrated. Secondly, a dataset description is provided. Lastly, the four main steps followed in the implementation process are discussed, which are (1) preprocessing, (2) feature extraction, (3) data labeling, and (4) classification. System Design The flowchart in Figure 1 shows the design flow of the proposed drowsiness-detection system. The system design consists of five main steps. In the first step, the system starts by capturing a video that monitors the driver's head and extracts frames from it. The second step is preprocessing, where first, the Blue, Green, and Red (BGR) colored frames are each converted to grayscale. Then, for the eyes and mouth region, face detection is applied by utilizing the Dlib Histogram of Oriented Gradients (HOG) face detector [35]. The Dlib facial landmarks detector is then applied to extract the eyes and mouth regions. Lastly, in the preprocessing step, to capture the head region, MediaPipe face mesh [36] is used to obtain a 3D map of the face and extract the 3D nose coordinates to use as a reference to estimate the driver's head position. The third step involves calculating for each frame a feature vector containing the EAR, MAR, and the nose X-Y coordinates, and storing them in a separate list. This is repeated to populate a window (matrix) with feature vectors corresponding to 15 consecutive frames. Once the system has the first 15 feature vectors stored, it feeds them to the trained classification model which results in initial drowsy or alert labels. The final decision of whether the driver is drowsy is taken if the drowsy label is produced 15 consecutive times and an alarm will sound to alert the driver. 
Otherwise, the driver will be considered alert. As the process continues, the system employs the moving window concept. The moving window is fixed in size and can only take 15 feature vectors corresponding to a matrix of dimension 4 × 15. When a new frame is recorded, its corresponding feature vector is fed into the feature window while the oldest feature vector in the window is dropped out. Accordingly, the first decision about the driver drowsiness status is given by the system after 1 s, as the system waits to populate the window with 15 feature vectors, followed by counting 15 classifiers labels; i.e., the first decision requires recording 30 frames: 15 to populate the feature window, and 15 label counts. Referring to the moving window discussed above, the following decisions, in contrast, are taken almost instantly. When a new frame is recorded, its corresponding feature vector is fed into the feature window while the oldest feature vector in the window is dropped out. In this case, we have now a full window with 15 feature vectors and 14 previous labels, and the current (new) label which accounts for a time period of 1 frame (1/30 s = 33 ms). A new decision requires the introduction of one new frame which spans 33 ms. Therefore, considering that the preprocessing time and the classification times are minimal, our system's first decision takes 1 s, while the following decisions will be reported every 33 ms, indicating that the response can be considered as being in real time. Dataset In this work, the NTHUDDD video dataset was used to implement this DDD system [37]. The dataset was obtained under simulated driving conditions. A total of 36 subjects were recorded while sitting on a chair playing a driving game with a simulated driving wheel and pedals, with their facial expressions monitored for drowsiness signs. Active infrared (IR) illumination was used to acquire IR videos in the dataset collection. The videos under consideration in this work were taken at a rate of 30 frames/s with a resolution of 640 × 480 pixels and an overall length of 9 h and a half. They were recorded in AVI format. The 36 subjects were of various ethnicities, genders, and facial characteristics. They were recorded under different scenarios with and without glasses or sunglasses under a variety of simulated driving conditions during the day and night times. Various subject behaviors were recorded including normal driving, talking, turning around, slow eye blinking, yawning, and head nodding. Figure 2 shows some of these behaviors. Table 1 illustrates a further description of the dataset. This work has utilized 23 subjects from the NTHUDDD dataset: 18 for training and 5 for testing. The subject selection was based on the different facial appearances and scenarios including wearing/not wearing eyeglasses. Preprocessing For preprocessing, the colored frames are each converted to grayscale. Then, to obtain the eyes and mouth features, the face was extracted by utilizing Dlib's HOG face detector, where the detector function returned a rectangle's coordinates, which surround the face region. Following that, the Dlib facial landmarks solution was utilized. This solution estimates the location of 68 points on the face, forming a map that represents the key facial structures on the face, as shown in Figure 3a [19]. Thus, it was used to detect and extract the eye and mouth regions. 
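A minimal sketch of this face-detection and landmark-extraction step is shown below; the predictor file name refers to Dlib's standard pretrained 68-point model, which is assumed to be downloaded separately.

```python
# Sketch of the preprocessing step: grayscale conversion, Dlib HOG face detection,
# and extraction of the 68 facial landmarks used for the eye and mouth regions.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_landmarks(frame_bgr):
    """Return the 68 (x, y) landmark points of the first detected face, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 0)                 # rectangles bounding detected faces
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(68)]

# With 0-based indexing, points 36-41 and 42-47 outline the two eyes and points
# 60-67 the inner mouth (the paper's Figure 3a uses 1-based numbering).
```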
For the head pose estimation feature, we used the MediaPipe face mesh solution [36], which is a face geometry solution that is used to estimate 468 face landmarks in 3 dimensions, as shown in Figure 3b. The X and Y output coordinates of the face mesh solution are normalized based on the frame size. While the z coordinate represents the face mesh depth which reflects the distance of the head from the camera. In order to estimate the head pose in the captured video, the initial nose coordinates were first extracted to be used as a reference for the head location and movements in the following frames. Feature Extraction Various human and vehicle features were used to model different drowsiness detection systems. However, in this work, the modeling is based on the EAR and MAR metrics along with drowsy head pose estimation. EAR Metric According to Rosebrock [19], detecting blinking using the EAR feature has multiple advantages compared to detection with traditional image-processing methods. In traditional methods, first eye localization is applied. Then, thresholding is used to find the whites of the eyes in the image. Following that, eye blinking is indicated by detecting the disappearance of the eye's white region. In contrast, no image processing is needed when using the EAR metric. Thus, using it will require less memory space and processing time. Instead, the EAR feature depends on calculating the ratio of the distance between eyes' facial landmarks, which makes it a straightforward solution. In general, the EAR metric computes a ratio extracted from the horizontal and vertical distances of six eye landmark coordinates, as shown in Figure 4 [38]. These coordinates are numbered from the left eye corner starting from p1 and revolving clockwise to p6. Rosebrock [19] explains that all six coordinates from p1 to p6 are two-dimensional. According to [39], in the case of open eyes, the EAR value remains approximately constant. However, if the eyes were closed, the difference between coordinates p3 and p5 and p2 and p6 demolishes; thus, the EAR value drops down to zero, as illustrated in Figure 4. In order to extract the EAR feature, Equation (1) was utilized. As shown in the equation below, to compute the EAR ratio value, the numerator calculates the distance between the vertical landmarks. While the denominator calculates the distance between the horizontal landmarks and multiplies it by two to balance it with the nominator [39]. By utilizing Equation (1), the EAR values were calculated for each frame and stored in a list. MAR Metric Similar to the EAR, the mouth aspect ratio, or MAR, is used to calculate the openness degree of the mouth. In this facial landmark, the mouth is characterized by 20 coordinates (from 49 to 68), as shown in Figure 3a. However, we used points from 61 to 68, as displayed in Figure 5, to obtain the mouth openness degree. Using these coordinates, the distance between the top lip and the bottom lip is calculated using (2) to determine whether the mouth is open or not [40]. In (2), the numerator calculates the distance between the vertical coordinates, and the denominator calculates the distance between the horizontal coordinates. Similarly to (1), the denominator is multiplied by two to balance it with the nominator. As shown in Figure 6, increasing the value of the MAR indicates the mouth is open. Drowsy Head Pose In this work, head pose estimation was achieved by finding the rotation angle of the head. 
The rotation angle can be defined as the amount of rotation of an object around a fixed point referred to as the point of rotation. To find the rotation angle of the head, first, the center nose landmark was acquired using MediaPipe face mesh for use as a reference and as the point of rotation for the head position in the frame, as mentioned earlier in preprocessing. Then, the nose's X and Y landmarks were normalized by multiplying them by the frame width and height, respectively. Following that, by taking the initial nose 3D coordinates as the point of rotation, the rotation angles of the X and Y axis are calculated and used to estimate if the head position is up, down, left, or right based on a set of thresholds. We have estimated the angle thresholds as follows: Data Labeling According to [39,41], blinking is a quick movement of closing and reopening the eyes, which approximately takes between 100 to 400 ms , while yawning is a quick act of opening and closing the mouth, which lasts for around 4 to 6 s. As for a drowsy head pose, it can be described as random head titling due to severe drowsiness that is usually associated with eye closure, and it may last for a few seconds. Blinking, yawning, and head pose patterns differ depending on the person, action duration, degree of opening or closure, degree of head tilting, and speed. Moreover, one reading of EAR, MAR, and X and Y nose coordinates per frame is not enough to capture the event of blinking, yawning, or drowsy head pose. Thus, in order to detect the different drowsy action patterns, we have used four fifteen-frame length vectors, for each of the four readings, consecutively, as an input to the classifiers. It is well known that when a person starts feeling sleepy that the eye-closing time becomes longer. As a result, we label in this work a blink of 400 ms or longer as indicative of a drowsy driver. Given that the videos were taken at a frame rate of 30 frames/s, i.e., the frame time is 1/30 s, then a drowsy blink will span at least 13 frames. Taking into consideration that people can statistically vary in their eye closure time when they start feeling sleepy, we relax the 400 ms to 500 ms, which spans 15 frames, as was the case in [42]. In order to verify our assumption, we tested different temporal window sizes during the labeling phase, including 9, 13, 15, 17, and 21 frames (see Table 2). By doing that, we aimed to experimentally figure out the number of frames that better capture the different events of eye closure, yawning, and drowsy head pose. Our tests were conducted on three randomly labeled subjects from our training dataset. As shown in Table 2, smaller windows resulted in detecting more drowsy cases because short eye blinks (less than 400 ms) were considered as drowsy while they are, in fact, not drowsy. On the other hand, long windows resulted in some real drowsy cases being missed or not detected. The results reported in the table supported our initial decision of using a 15-frame-long temporal window as it is the case that mostly matched the video drowsiness labels. Consequently, a window of 15 frames in length was adopted. This temporal window was used to prepare the input data as follows: for every 30 frames/s video, the MAR value of the Nth frame is calculated and stored in a list, along with the MAR values from the N − 7 and N + 7 frames. Following that, these 15 MAR values are concatenated, forming a 15-dimensional feature vector for that Nth frame. 
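A minimal sketch of the per-frame ratios of Equations (1) and (2) and of the 15-frame windows just described is given below. Landmark indices are 0-based Dlib indices, and the exact point pairing used for the MAR is an assumption, since Equation (2) itself is not reproduced above.

```python
# Sketch of the EAR/MAR ratios and the 15-frame window built around frame N.
import numpy as np

def _dist(a, b):
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def eye_aspect_ratio(pts):
    """EAR over one eye's six landmarks (e.g. Dlib points 36-41 for the left eye)."""
    p1, p2, p3, p4, p5, p6 = pts
    return (_dist(p2, p6) + _dist(p3, p5)) / (2.0 * _dist(p1, p4))

def mouth_aspect_ratio(pts):
    """MAR over the eight inner-mouth landmarks (Dlib points 60-67, 0-based).
    Assumed pairing: two vertical lip distances over twice the mouth width."""
    return (_dist(pts[1], pts[7]) + _dist(pts[3], pts[5])) / (2.0 * _dist(pts[0], pts[4]))

def temporal_window(series, n, half_width=7):
    """Concatenate the values of frames n-7 .. n+7 into one 15-dimensional vector.
    Assumes half_width <= n < len(series) - half_width."""
    return np.asarray(series[n - half_width : n + half_width + 1])
```

The same windowing is applied to the EAR series and to the nose x and y series to obtain the four 15-dimensional input vectors.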
In this case, we are taking the 7 neighboring frames (from each side) for each Nth frame in order to capture the actual state of the mouth at that frame, either close or open. The same method was applied to prepare the EAR, x, and y nose coordinates input vectors, resulting in a final input of four 15-frame long input vectors. Labeling the training input data was a two-step process, where first, the eyes, mouth, and head state are labeled separately. Then, a final label of the driver's state was given. As for the eyes, an EAR threshold of 0.2 was set to reflect if the drivers' eyes were open or not. For the mouth, the MAR was given a 0.5 threshold to indicate if the mouth was wide open. In terms of the head, nose coordinates were given a set of angle thresholds to reflect the different poses that a drowsy driver's head may position at, as explained previously. After labeling the state of these three parts, a final label was given of either 0 (alert) or 1 (drowsy) to indicate if the driver was drowsy or not. Label 1 was given if either of these states were met, or if a closed eye, open mouth, or drowsy head pose was present. When choosing the thresholds, we studied the maximum EAR (MAX EAR) and maximum MAR (MAX MAR) of different eyes and mouth shapes and sizes in the 18 subjects from our training dataset, as shown in Table 3. MAX EAR reflects the EAR value at the regular openness state of the eyes, and MAX MAR reflects the maximum MAR value that takes place when yawning. We found out that most of the subjects have a MAX EAR range between 0.3 and 0.37. However, we still need to consider the cases of subjects with small eyes, whose MAX EAR value reached a minimum of 0.23. Thus, we experimented with different thresholds during the labeling stage, as illustrated in Table 4. According to Table 4, at a threshold value greater than 0.4, all data frames of all the subjects were labeled "Closed eyes" regardless of the eye state, as none of the subjects in the training dataset has a MAX EAR greater than 0.37. Threshold values between 0.35 and 0.25 had a similar issue as they did not work with subjects of MAX EAR value of 0.34 and below. At the threshold value of 0.2, all the subjects got labels of "Open eyes" or "Closed eyes" successfully without any bias. Lastly, threshold values that were less than 0.2 worked as well, but they reduced the "Closed eyes" labels in the training dataset. Thus, taking into consideration both subjects with small eyes and having a balanced training dataset, we decided to set an EAR threshold value of 0.2 to identify the drowsy eyes from the alert. Similarly, for the MAX MAR values, we noticed in Table 3 that the majority of the drivers reach a MAX MAR value of 0.9 when yawning. However, drivers with small mouths can reach a MAX value of 0.6 or 0.7 depending on the size of the mouth and the way of yawning. Thus, we applied some experiments during the labeling stage to choose the best MAR threshold, as shown in Table 5. According to Table 5, at a threshold value greater than 0.9, all data frames of all the subjects were labeled "Closed mouth" regardless of the mouth state, as the MAX MAR value for the subjects in the training set is 0.9. For threshold values between 0.8 and 0.6, we noticed a similar issue as the frames of subjects with MAX MAR of 0.79 or below were always labeled as "Closed mouth." At the threshold value of 0.5, we have successfully labeled all subjects with a label "Open mouth" or "Closed mouth," reflecting the true state of the mouth. 
Any threshold value below 0.5 caused some frames to be mislabeled in cases such as talking or laughing. Therefore, we decided to set the MAR threshold to a minimum value of 0.5 to address any unique cases. All data frames of all the subjects were labeled "Closed mouth" 0.8 All data frames of subjects with MAX MAR of 0.79 or less were labeled "Closed mouth" 0.7 All data frames of subjects with MAX MAR of 0.69 or less were labeled "Closed mouth" yawning.avi and nonsleepy Combination.avi 0. 6 All data frames of subjects with MAX MAR of 0.59 or less were labeled "Closed mouth" 0.5 * All data frames of all subjects were labeled as "Open mouth" or "Closed mouth" successfully <0.5 Data frames of all subjects were labeled "Open mouth" in cases where the driver is talking/laughing * Chosen MAR threshold value is in bold. Classification After labeling the extracted values, two main machine learning data preprocessing steps were performed. First, data balancing is an essential step when dealing with unbalanced instances between the two classes. In our case, there were 300,266 non-drowsy labeled as 0 cases and 72,658 drowsy cases labeled as 1. Using under-sampling and over-sampling from the imbalanced learning library, we over-sampled the minority class (labels 1) and under-sampled the majority class (labels 0). The second preprocessing step is data splitting, where a data splitting function from the scikit-learn library was utilized. The data were split into 70% training and 30% testing. The training data was used to train and create the models, while the testing data was utilized to test the performance of the models. After splitting the dataset, three classification models were applied: RF, sequential NN [43], and SVM. Then, the parameters of the three models were tuned and optimized by utilizing grid search hyperparameters [28]. Random forest (RF) is a popular and effective machine learning algorithm, created by Breiman [44]. It involves constructing a group of decision trees that work together to make predictions. The trees are created using bootstrap samples and randomly selecting variables at each node. The RF model combines the predictions of each tree to determine the final prediction. In this study, the scikit-learn library's RF classifier was used with "entropy" as the criterion parameter and 50 trees in the forest. The sequential neural network (NN) model, also known as the feedforward neural network, is the basic type of neural network model [43]. In this study, we used the Keras library to build our neural network model. Keras offers an easy way to build models using the sequential approach, where each layer is added one at a time with weights corresponding to the following layer. In this work, a neural network with six layers was created, consisting of an input layer, four hidden layers with five nodes each using ReLU activation, and an output layer with one node using sigmoid activation. The model classifies the output as either 1 for drowsy or 0 for nondrowsy. Support vector machine (SVM) [45] is a supervised machine learning model that classifies two groups of data by finding a hyperplane in N dimensions. The goal is to select the hyperplane that maximizes the margin between data points, which improves future classification accuracy. The SVM model is popular because it has low computational complexity and high accuracy. 
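As an illustration, the first two classifiers, configured as described above, might be instantiated as follows; the 60-dimensional input (the flattened 4 × 15 feature window) is an assumption about the input layout, and the SVM configuration is given next.

```python
# Sketch of the random forest (entropy criterion, 50 trees) and the Keras sequential
# network (four hidden layers of five ReLU nodes, one sigmoid output node).
from sklearn.ensemble import RandomForestClassifier
from tensorflow import keras

rf = RandomForestClassifier(n_estimators=50, criterion="entropy")

nn = keras.Sequential([
    keras.Input(shape=(60,)),                     # flattened 4 x 15 feature window (assumed)
    keras.layers.Dense(5, activation="relu"),
    keras.layers.Dense(5, activation="relu"),
    keras.layers.Dense(5, activation="relu"),
    keras.layers.Dense(5, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # 1 = drowsy, 0 = alert
])
nn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# rf.fit(X_train, y_train) and nn.fit(X_train, y_train, ...) would then be called
# on the balanced, 70/30-split training data described above.
```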
The support vector classification (SVC) from the scikit-learn library was used with a linear kernel, a regularization parameter of C = 1, probability estimates enabled, and the random state parameter was set to 0 to control data shuffling. Results and Discussion This section lists the specifications of our development environment. In addition, it presents and discusses the results of the trained models using the testing data that was extracted from the NTHUDDD dataset. By finding the confusion matrix, accuracy, sensitivity, specificity, macro precision, and macro F1-score, and through two visual plots of the results, the best model for drowsiness detection was determined. This section also compares the results of the proposed system with other DDD systems. While implementing this system, we used a laptop equipped with an i7 processor, 16 GB RAM, and an integrated GPU (Intel(R) UHD Graphics 620). As for the development environment, we used Jupyter Notebook in Anaconda and developed the system using Python 3.7. We mainly used scikit-learn 1.1, TensorFlow 2.12, Keras 2.12, Dlib 19.24.1, OpenCV 4.7.0, and MediaPipe 0.9.3.0 libraries and packages. The implementation was performed in two steps, namely, the training step and the testing step. In the training step, the model was trained offline on the precollected NTHUDDD standard dataset. In the testing step, the video footage of the driver's face was taken at 30 frames/s by a webcam fixed at the center of the car's dashboard. The webcam fed the video frames to a laptop that was preloaded with the trained DDD model. The trained DDD model extracted the feature vector corresponding to each frame and classified it in a time period of (2-4 ms), which is negligible compared to the 33 ms time span between one frame and the other, thus, making the decision mainly dependent on the frame time (33 ms) and meaning it can therefore can be considered a real-time decision system. Table 6 illustrates the results of the trained models. The results show that the best performance is achieved by the RF model. When analyzing the results, it is evident that the RF model gave an almost perfect performance as it achieved 99% in accuracy, sensitivity, specificity, macro precision, and macro F1-score. In terms of the performance of the sequential NN model, it achieved second-best results with 96% accuracy, 97% sensitivity, and 96% specificity, macro precision & macro F1-score. As for the SVM model, it achieved the lowest results, where it showed 80% accuracy, 70% sensitivity, and 88% specificity. A score called Area Under the Curve (AUC) can be calculated to reflect the total area under the ROC curve and the separability degree. It is important to note that if a model shows a high AUC value, then it is better at predicting the actual outcomes of the true negative and true positive classes. The ROC curve for the testing data is presented in Figure 7. Similarly, the precision-recall curve is a perfect evaluating tool for binary classification models [46]. In this curve, if a model showed a high AUC score, that indicates a better predicting performance. Figure 8 shows the precision-recall curve for the testing data. As can be seen in Figure 7, both the RF and sequential NN models achieved a high AUC score. However, the AUC score of the SVM model was noticeably lower, which reached 0.867. Regardless, when comparing the three curves, it can be seen that the best performance was achieved by the RF model, with a 0.999 AUC score. 
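For reference, a minimal sketch of the SVC configuration quoted above and of the accuracy, ROC-AUC, and precision-recall-AUC computations behind Figures 7 and 8 is given below; X_train, y_train, X_test, and y_test are placeholders for the balanced 70/30 split described earlier.

```python
# Sketch of the SVM configuration and the curve-based evaluation metrics.
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, average_precision_score, accuracy_score

svm = SVC(kernel="linear", C=1, probability=True, random_state=0)
# svm.fit(X_train, y_train)

def evaluate(model, X_test, y_test):
    """Report accuracy, ROC AUC, and precision-recall AUC for a fitted binary classifier."""
    scores = model.predict_proba(X_test)[:, 1]    # probability of the drowsy class
    return {
        "accuracy": accuracy_score(y_test, model.predict(X_test)),
        "roc_auc": roc_auc_score(y_test, scores),
        "pr_auc": average_precision_score(y_test, scores),
    }
```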
Likewise, looking at Figure 8, the RF model gave an AUC score of 0.999, which reflects the best performance, compared to both the sequential NN and the SVM models, which gave a score of 0.991 and 0.855, respectively. The above discussion clearly shows that our proposed system can differentiate drowsy drivers from alert ones. It is easy to use and convenient for the drivers as it is non-invasive, non-intrusive, and does not require any sensors or equipment to be attached to the driver's body. It is also adaptable to be used in different vehicles, including buses, trucks, cars, motorcycles, and construction vehicles. Table 7 presents the most recent literature on drowsiness-detection systems. Due to the different utilization of the datasets and the features, one-to-one comparison is not applicable. However, as illustrated, our RF model outperforms the other techniques available in the literature. Nevertheless, it is important to note that the system has some limitations. The HOG face detector can fail in some scenarios. Some of these include having more than one subject in the frame, variation in the intensities while driving, and driving on a dark street. Conclusions In conclusion, in this paper, we proposed a real-time image-based drowsiness-detection system. In order to implement drowsiness detection, a webcam was used to detect the driver in real time and extract the drowsiness signs from the eyes, mouth, and head. Then three classifiers were applied at the final stage. When a drowsiness sign is detected, an alarm sounds, alerting the driver and ensuring road safety. Evaluation of system performance over the NTHUDDD dataset resulted in an accuracy of 99% for the RF classifier. In the future, we plan to develop a mobile application to allow users to easily use the system while driving. Furthermore, to overcome the limitation of the HOG face detector, we intend to use a more advanced camera that can adapt to the changes in lighting intensity and automatically detect and focus on the driver's face. Conflicts of Interest: The authors declare no conflict of interest. Abbreviations The following abbreviations are used in this manuscript:
Communication to promote physical activity in family case study in community medicine for undergraduate medical students

Background: In the current MBBS curriculum, adequate importance is not given to communication to promote physical activity (PA). However, there is a need to train undergraduates in counselling on PA, as it has a beneficial effect in the prevention of non-communicable diseases. Methods: After ethical committee approval and written informed consent of the 2nd year MBBS students, a pre-test assessment on PA was done. An interactive educational session on PA and communication skills was conducted, followed by a post-test assessment. During the family case study, each student assessed their allotted older adult family member for PA using the rapid assessment of physical activity questionnaire and counselled them accordingly. Faculty assessed the students' communication skills using the Kalamazoo essential element communication checklist-adapted (KEECC-A) and gave immediate constructive feedback. Later, feedback from students and faculty was taken to evaluate the designed curriculum for physical activity. Data were entered in an Excel sheet and Primer-6 was used for analysis. Results: Out of a total of 50 students, 35 enrolled in the study. There was a statistically significant improvement in knowledge regarding PA. Almost 58% could effectively counsel for PA. There was overall positive feedback from the students and teachers. 71% of students felt PA should be part of the regular curriculum in community medicine. Conclusions: There was a significant improvement in students' knowledge regarding PA. Students expressed the need to be trained in counselling for PA.

INTRODUCTION

Family case study forms the basis of community medicine. Undergraduate medical students undertake the family case study in the second year. This is also an opportunity to train undergraduate students in a naturalistic setting as communicators of health. In the MCI document "Medical Council of India-Vision 2015", one of the roles envisaged for the doctor is that of a "communicator with patients, families, colleagues and community". 1 One of the competencies is that, as a leader and member of the health care team and system, he/she should be able to recognize and advocate health promotion, disease prevention and health care quality improvement through early recognition of and intervention in lifestyle diseases.

Regular physical activity is well recognized as an important lifestyle behaviour for the development and maintenance of individual and population health and well-being. Results from the ICMR-INDIAB (phase-1) study of Anjana et al showed that a large percentage of people in India are physically inactive, with fewer than 10% engaging in recreational physical activity. 2 The authors concluded that urgent steps need to be initiated to promote physical activity to stem the twin epidemics of diabetes and obesity in India. However, in the current MBBS curriculum for community medicine, much importance is not given to physical activity. 3

Thus, the study was formulated for second year MBBS students with the following objectives:
- To impart knowledge of physical activity to the second year MBBS students.
- To train students to assess the physical activity of an older adult individual using the RAPA questionnaire (rapid assessment of physical activity).
- To assess students' communication skills regarding physical activity using the KEECC-A (Kalamazoo essential element communication checklist-adapted).
Brief note about study tools Rapid assessment of physical activity Strath et al, in their article have provided a decision matrix to select the appropriate assessment tool to measure physical activity in the patients/participants. 4 Based on the objectives, the questionnaire was the best method to measure the physical activity. Moreover, in the field practice area, most of the individuals available are of older age group i.e. grandparents, with younger population going for work. Thus we selected rapid assessment of physical activity (RAPA) questionnaire which is a reliable and valid tool for quickly assessing the level of physical activity of the older adult individual. 5 Kalamazoo essential element communication checklistadapted (KEECC-A) 6 Makoul G et al, mentions that KEECC-A is based on seven essential sets of communication tasks namely; 1) build the doctor-patient relationship; 2) open the discussion; 3) gather information; 4) understand the patient's perspective; 5) share information; 6) reach agreement on problems and plans; 7) provide closure. 7 Yoon M. et al, states that this instrument is designed for use at all levels of medical education and can be used as formative and summative assessment tools or as a clerkship teaching tool to evaluate actual and simulated patient-physician communication encounters. 8 METHODS The interventional study was conducted in the MCI recognised medical institute in Mumbai, affiliated to Maharashtra University of Health Sciences (MUHS) from December 2015 to January 2016. After ethical committee approval and permission from the institutional authority, written informed consent of the second year MBBS students was taken to participate in the study. Second year students are posted in the community medicine for one month. The total strength of the class is 50, so they are posted in two batches as batch A and batch B (25 students in each batch). When batch A was posted in community medicine in December 2015, the pre-test assessment regarding knowledge of the physical activity (PA) was done. Later, two hour session on physical activity and communication was conducted using interactive teaching strategies such as role play and case studies. Students were taught on the magnitude of problem, importance and measurement of PA and WHO global recommendations for an adult for PA. Students were also trained on assessment of physical activity using RAPA (rapid assessment of physical activity) questionnaire, and communication skills for PA counseling. Since assessment of the communication skills was planned with the use of kalamazoo essential elements of communication checklist adapted (KEECC-A), students were also briefed about this assessment tool in the contact session. This instrument also formed the basis to teach communication skills to the students. Students were taught importance of each domain under KEECC-A and how to communicate in the community under each domain in the local language. The students were assessed by the faculty of community medicine department. They were also oriented to the use of instrument KEECC-A and training of PA for students. The post-test assessment was done after the session. Students were provided with the handouts of the powerpoint presentations and RAPA questionnaire to practice on their family members and/or friends. After few days, during their family visits, each student assessed their allotted family in the community for physical activity using RAPA questionnaire and counselled them regarding physical activity. 
Immediate constructive feedback based on KEECC-A was given to the student by the faculty who was assessing her/him. Later feedback of students was taken to evaluate the effectiveness of all the activities that were undertaken. Similar interventional procedures were conducted for batch B in the month of January 2016. Feedback of the teachers was taken to find their views and suggestions. Data was entered in the excel sheet and for statistical analysis, primer-6 was used to calculate paired t-test. RESULTS Out of total 50 second year undergraduate medical students, 35 students attended training session on physical activity. The mean pre-test score pertaining to the knowledge of physical training and its importance was 4.5 (SD 1.8) and post test score was 12.6 (SD 2.9). There was statistically significant improvement in the post test score (P-value <0.05). Out of 35 students, 31 students attended family visit for family case study. Almost 58 % (18 students out of 31) were very good or excellent in PA counselling. Table 1 shows the performance of the students in different domains of communication based on KEECC-A. There was overall positive feedback from the students and teachers. Many students commented "interaction with family was good and could understand better by practice in community." Three students had language barrier while communicating with their allotted family in the field, while many felt no modification in the program required in overall activities (contact session and field session) that were planned. Seventy one percent felt "physical activity" should be part of the regular curriculum. 9 Doctors are a respected source of health-related information and are well positioned to provide physical activity counselling to patients. Anand T et al, refers doctors as the potential agents to increase the levels of physical activity in large population and thus produce important health gains. 10 E Frank et al, state that promotion of adequate PA habits during medical education may be an important step to improve the PA counselling that future clinicians provide. 11 As per the literature reviewed, this is the first study in India which is addressing these key issues. Training of medical graduates in eliciting history of PA and counselling in PA is required. This will eventually reduce the burden of non-communicable diseases. In first year of MBBS, exercise physiology is included in the university curriculum of MUHS. 12 The pre-test score however, showed that, students had very minimal knowledge regarding PA and their knowledge significantly improved after interactive session. This shows that, re-enforcement of PA knowledge is required. Currently PA is in the syllabus of community medicine for under graduate students, with minimal information about PA in K Park, which is the most commonly used standard textbook for community medicine. 13 Moreover, till date, as per our information, it is never assessed and/or taught making it least important topic. However after the session, students and faculty both felt the need to include it in curriculum. The focus was on family case and second year students. However, even in III year and internship, students can be trained to routinely elicit PA history in each clinical case, so that they become competent in counselling for PA and start practicing. It is however necessary to involve faculty members of other departments and all other students. This study was community oriented learning. 
This interventional model, thus not only benefitted students but the community as well. All those allotted families were counselled for PA and its importance. In this study, almost all students had positive learning experience for community oriented learning. They could understand better, by directly dealing with community and communicating with them. Harden R et al, have stated SPICES model-six education strategies relating to the curriculum in a medical school. 14 These are student-centred, problem-based, integrated based, community-based, elective and systematic-based. This SPICES model of curriculum strategy analysis can be used in curriculum planning. In designing the curriculum for physical activity, study included student centred, problem based, disciplinary, community-based, elective and systematic based educational strategy. However, it can be made integrated and hospital based also by involving other disciplines. In this research, KEECC-A was used to assess the communication skills of students. According to Barbara L et al, the KEECC-A is a psychometrically sound and user-friendly communication tool. 15 The faculty also found it easy to use this tool in the community. Since the checklist has seven domains, faculty found it easier to give constructive feedback on each domain. In this study, it was observed that, students communicated well in building the relationship, opening the discussion, gathering the information, understanding the family member's perspective and sharing the information. However, they need more training on how to reach agreement and providing closure. Looking at the paucity of education research on training under graduate medical students for PA counselling, more research is advisable with larger sample size to get consistent and valid results. CONCLUSION There was significant improvement in knowledge of students regarding physical activity. Students felt need to be trained in counselling for physical activity.
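As a hedged illustration of the pre-/post-test comparison reported in the Results above (the paper itself used Primer-6), the short sketch below runs a paired t-test in Python. The score arrays are synthetic stand-ins generated to roughly match the reported means and standard deviations for the 35 students, not the actual study data.

```python
# Illustrative paired t-test on pre-/post-test knowledge scores.
# The arrays below are synthetic stand-ins (roughly matching the reported
# pre-test 4.5 (SD 1.8) and post-test 12.6 (SD 2.9) scores), not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_students = 35
pre = rng.normal(loc=4.5, scale=1.8, size=n_students)
post = rng.normal(loc=12.6, scale=2.9, size=n_students)

t_stat, p_value = stats.ttest_rel(post, pre)   # paired (dependent-samples) t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 indicates a significant improvement
```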
The electromagnetic fine-structure constant in primordial nucleosynthesis revisited

We study the dependence of the primordial nuclear abundances on the electromagnetic fine-structure constant α, keeping all other fundamental constants fixed. We update the leading nuclear reaction rates, in particular the electromagnetic contribution to the neutron-proton mass difference pertinent to β-decays, and go beyond certain approximations made in the literature. In particular, we include the temperature dependence of the leading nuclear reaction rates and assess the systematic uncertainties by using four different publicly available codes for Big Bang nucleosynthesis. Disregarding the unsolved so-called lithium problem, we find that the current values for the observationally based 2H and 4He abundances restrict the fractional change in the fine-structure constant to less than 2%, which is a tighter bound than found in earlier works on the subject.

I. INTRODUCTION

Since the early work of Dirac [1], variations of the fundamental constants of physics have been considered in a variety of scenarios; see [2] for a recent status report on possible spatial and temporal variations of the electromagnetic fine-structure constant α, the gravitational constant G, and the proton magnetic moment µ_p. See also the recent review [3]. As is well known, primordial or Big Bang nucleosynthesis (BBN) is a fine laboratory to test our understanding of the fundamental physics describing the generation of the light elements. In particular, it sets bounds on the possible variation of the parameters of the Standard Model of particle physics as well as of the Standard Model of cosmology (ΛCDM). For recent reviews, see e.g. Refs. [4-6]. Here, we are interested in bounds on the electromagnetic fine-structure constant α derived from the element abundances in primordial nucleosynthesis. For earlier work on this topic, see e.g. [7-10] and references therein. This work is part of a larger program that tries to map out the habitable universes, in the sense that the pertinent nuclei needed to generate life as we know it are produced in the Big Bang and in stars in a sufficient amount; see e.g. [11,12] for reviews.
Here, we focus largely on the nuclear and particle physics underlying the element generation in primordial nucleosynthesis.In particular, we reassess the dependence of the nuclear reactions rates on the fine-structure constant, overcoming on one side certain approximations made in the literature and on the other side providing new and improved parameterizations for the most important reactions in the reaction network, using modern determinations of the ingredients whenever possible, such as the Effective Field Theory (EFT) description of the leading nuclear reaction n + p → d + γ 1 and the calculation of the nuclear Coulomb energies based on Nuclear Lattice Effective Field Theory.For β-decays, we also use up-to-date information on the neutron-proton mass difference based on dispersion relations (Cottingham sum rule).Most importantly, as already done in Ref. [19], we utilize four different publicly available codes for BBN [20][21][22][23][24][25][26] to address the systematic uncertainties related to the modeling of the BBN network.In particular these codes differ in the number of nuclei and reactions taken into account as well as in the specific parameterization of the nuclear rates entering the coupled rate equations for the BBN network.Moreover, in determining the sensitivity of primordial abundances on nuclear parameters, we account for the temperature dependence of the variation of some rates on the value of the fine-structure constant α.To our knowledge, such a comparative study where this temperature dependence was explicitly considered has not been published before.We further note that we keep all other constants, like e.g. the light quark masses m u , m d fixed at their physical values. The paper is organized as follows: In Sect.II we collect the basic formulas needed for discussing the finestructure constant dependence in BBN.In this section we discuss the various dependences of the reaction rates 1 There are some ab initio calculations of other reactions in the BBN network such as [13][14][15][16][17][18], mainly concerned with radiative capture reactions.The calculations in the framework of so-called "halo-EFT" potentially offer the possibility to study the α dependence of the cross sections analytically, but the implementation is numerically rather involved and we thus refrained from doing so in the present context. on the value of the electromagnetic fine-structure constant α.The actual calculation of the reaction rates is treated in Sect.III.The BBN response matrix is introduced in Sect.IV.The numerical results of this study are presented in Sect.V and discussed in Sect.VI.We also present a detailed comparison to results obtained in earlier works.In Appendix A we give the novel parameterizations of 18 leading nuclear reactions in the BBN network. II. BASIC FORMALISM As discussed in Ref. [19], the basic quantities to be determined in BBN are the nuclear abundances Y ni , where n i denotes some nuclide.The evolution of the nuclear abundance Y n1 is then generically given by Ẏn1 = n 2 , . . ., np m 1 , . . 
., mq N n1 Γ m1,...,mq→n1,...np Y where the dot denotes the time derivative in a comoving frame, and N ni is the stochiometric coefficient of species n i in the reaction.Further, for a two-particle reaction a + b → c + d , Γ ab→cd = n B γ ab→cd is the reaction rate with n B the baryon volume density.This can readily be generalised to reactions involving more (or less) particles, see [26].These equations are coupled via the corresponding energy densities to the standard Friedmann equation describing the cosmological expansion in the early universe, for details and basic assumptions, see also [22,25,26].In what follows, we discuss the various types of reactions in the BBN network and their dependence on the electromagnetic fine-structure constant. A. Reaction rates The average reaction rate γ ab→cd = N A ⟨σ ab→cd v⟩ for a two-particle reaction a+b → c+d is obtained by folding the cross section σ ab→cd (E) with the Maxwell-Boltzmann velocity distribution in thermal equilibrium (2) conventionally multiplied by Avogadro's number N A , where µ ij is the reduced mass of the nuclide pair ij, µ ij = m i m j /(m i + m j ), E is the kinetic energy in the center-of-mass system (CMS), T is the temperature and k the Boltzmann constant.Defining y = E/(kT ) this can be written in the form γ ab→cd (T ) = N A 8 kT π µ ab ∞ 0 dy σ ab→cd (kT y) y e −y . (3) This is suited for numerical computation e.g. with a Gauß-Laguerre integrator.In fact, in order to deal with cases with singular cross sections for E → 0 it is even better to split the integral and write and evaluate the first integral with a Gauß-Legendre and the second with a Gauß-Laguerre integrator for some suitable value of y.Note that in the first term the substitution x = √ y was performed. With the detailed balance relation where are the CMS kinetic energies in the entrance and exit channels, respectively, and g i is the spin multiplicity of particle i, energy conservation implies (7) in terms of the Q-value for the forward reaction.In thermal equilibrium the inverse reaction rate is then related to the forward rate as This brings us to the central question of this paper, namely how the value of the electromagnetic finestructure constant influences the reaction rates?This clearly depends on the reaction type.With the exception of the leading n + p → d + γ nuclear reaction, to be discussed in some detail below, no ab initio expressions for most of the reaction cross sections is available and accordingly one has to rely on model assumptions concerning the fine-structure constant dependence of the cross sections and thus of the reaction rates (see also the discussion in Sect.VI on this issue).These shall be discussed in the following subsections separately for direct reactions of the type radiative capture reactions of the type and β-decays, We shall start with a brief discussion of the Coulomb penetration factor for charged particles, relevant for what follows. 
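To make the thermally averaged rate of Eq. (3) concrete, the following minimal sketch evaluates γ(T) = N_A √(8kT/(πµ_ab)) ∫ dy σ(kT y) y e^(−y) by Gauss-Laguerre quadrature, as suggested above. The constant toy cross section and the example reduced mass are placeholders, not the fitted S-factors actually used in the paper.

```python
# Minimal sketch of Eq. (3): thermally averaged rate N_A <sigma v> via Gauss-Laguerre quadrature.
# The constant cross section and the reduced mass below are placeholders.
import numpy as np

N_A = 6.02214076e23          # Avogadro's number [1/mol]
C_LIGHT = 2.99792458e10      # speed of light [cm/s]

def sigma_toy(E_MeV):
    """Placeholder cross section [cm^2]; real applications use the fitted S-factors."""
    return 1.0e-27 * np.ones_like(E_MeV)              # 1 mb, energy-independent toy value

def rate(T9, mu_c2_MeV, sigma, n_nodes=48):
    """N_A <sigma v> in cm^3 mol^-1 s^-1 at temperature T9 = T/(10^9 K)."""
    kT = 8.617333262e-11 * T9 * 1.0e9                 # k_B T in MeV
    y, w = np.polynomial.laguerre.laggauss(n_nodes)   # nodes/weights for ∫ f(y) e^{-y} dy
    integral = np.sum(w * sigma(kT * y) * y)          # ∫ dy σ(kT y) y e^{-y}
    return N_A * C_LIGHT * np.sqrt(8.0 * kT / (np.pi * mu_c2_MeV)) * integral

# Example: an np-like reduced mass (~469.5 MeV) at T9 = 1
print(rate(T9=1.0, mu_c2_MeV=469.46, sigma=sigma_toy))
```

Splitting the integral at some y and combining a Gauss-Legendre with a Gauss-Laguerre rule, as in Eq. (4), follows the same pattern and only changes the quadrature nodes used below threshold-singular cross sections.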
Coulomb-penetration factor The Coulomb-penetration factor for an l-wave is given by, see e.g.[27,28] , where F ℓ , G ℓ are the regular and irregular Coulomb functions, respectively, that are the linearly independent solutions of the radial Schrödinger equation where we defined for the Coulomb-scattering of charges Z a e , Z b e with masses m a , m b and the reduced mass at the energy E of the relative motion, subject to the Coulomb-potential Approximate parameterizations of v ℓ (η, ρ) have been extensively discussed in the literature, see e.g.[29] in particular for the dependence on the nuclear distance r where this is to be evaluated for a specific reaction.As argued in [27,28], this distance is not well defined and the cross section should not depend on such an unobservable parameter.Accordingly, if one takes, as in [27,28], lim ρ → 0 the penetration factor for an ℓ-wave then reads Therefore, we shall use as Coulomb penetration factor the expression for s-waves: Note that the corrections due to a variation of α in ε 2 ℓ for ℓ > 0 according to Eq. ( 18) are of higher order in α and thus small anyway .Here we defined in terms of the so-called Gamow energy for a two-particle reaction channel ij and the CMS energy E or E + Q for the entrance and exit channel, respectively. For a direct reaction of this type the Q-value is given by where the nuclear mass of each nuclide i with mass number A i and charge number Z i reads with B i the nuclear binding energy.Thus, because of baryon number and charge conservation ( Now the binding energy can be written as where B N i denotes the strong contribution to the binding energy and is the expectation value of the Coulomb contribution proportional to the value of the electromagnetic finestructure constant.Considering its variation in the form α = α 0 (1 + δ α ) , where is the current experimental value from Ref. [30], the Qvalue varies as One therefore needs an estimate of the Coulomb contribution to the nuclear masses, this we shall discuss in Sect.II G.We shall assume that the cross section for a direct reaction a + b → c + d depends on α as where f is some function independent of α and P i (x i ), P f (x f ) are the penetration factors given by Eq. ( 19) reflecting the Coulomb repulsion in the entrance and in the exit channel, respectively.The first factor in Eq. ( 29) accounts for the exit channel momentum dependence of the cross section of the direct reaction a + b → c + d.Here, are the arguments of the penetration factors, with the Gamow-energies in the entrance and the exit channel, respectively, and µ ij = m i m j /(m i + m j ) the corresponding reduced masses.Although in order to calculate the linear response of the abundances one could proceed by calculating first order partial derivatives etc. , we prefer not to presume linearity and rather calculate a variation of the cross section with a variation α = α 0 (1 + δ α ) through the expression where specifically the first factor reads and the remaining factors are given by We note that these factors are energy-dependent and therefore the change in the rate i.e. the factor depends on the temperature T and as it stands requires a numerical evaluation of Eq. (38) . Similar considerations hold for radiative capture reactions.The cross section of a reaction a + b → c + γ is assumed to depend on α as with f some α-independent function and P i (x i ) the penetration factor, see Eqs. 
(19,30,32), for the entrance channel.The first factor accounts for the fact that in the amplitude for a radiative capture reaction the photon coupling is proportional to e, leading to a factor proportional to α = e 2 /(ℏc) in the cross section.The second factor reflects the final momentum dependence assuming dipole dominance of the radiation 2 .We thus calculate a variation of the cross section for radiative capture with a variation α = α 0 (1 + δ α ) via 2 Note, however, that this is not always the case, exceptions with appreciable E2 (electric quadrupole) contributions are e.g. the reactions: where the first factor is the same as in Eq. (36) .Again note that both factors are energy-dependent and therefore a change in the rate, see Eq. (38), is temperaturedependent. D. Approximate treatment of α-dependent factors As mentioned twice, the variation of the cross sections with a variation of α induces energy-dependent factors, that in turn lead to temperature-dependent variations in the corresponding reaction rates, that can be fully accounted for only via a numerical integration of Eq. (38).In fact this is what was done in the present work for the most important 18 nuclear reactions in the BBN network, listed in Sect.V.For the remaining reactions we relied on the following approximations, that turned out to be effective.For neutron induced reactions where for a non-resonant reaction R(E) is a weakly dependent function of the CMS kinetic energy E, see e.g.[31] .If we make the extreme approximation that R(E) ≈ const.the maximum of the remaining energy dependent factors in the integrand of Eq. ( 38) is reached at the energy Likewise, assuming that for the astrophysical S-factor for charged particle induced reactions holds, one finds that the maximum of the remaining energy-dependent factors in the rate is reached at Substituting E → E n , E c in the expressions in Eqs.(35,41) then leads to temperature-dependent factors, that can be taken in front of the integral in Eq. ( 38) and thus merely multiply the corresponding rates.The quality of this approximation may be inferred from Fig. 1, where we compare the results of the numerical calculation of the rates according to Eq. (38) (yellow areas for a variation δ α ∈ [−0.1, 0.1]) with the approximation discussed in this subsection, represented by blue lines for δ α = −0.1 and 0.1 .Note that in the present treatment we preferred to account for the Coulomb suppression in an entrance or exit channel with charged particles by the penetration factor of Eq. ( 19) and do not rely on a simple Gamowfactor ∝ e −x = e − √ E G (α)/E , with E being the CMS energy of the relevant channel with charged particles.We found that doing so would lead to overestimating the αdependence in the rates by a factor ≈ 1.5, while the temperature dependence would still roughly follow the same trends as in Fig. 1. E. Coulomb-effects in β-decays Next, we consider the various β-decays in the BBN network.The rate for β-decays a → c + e ± + (−) ν can be written as, see e.g.[32], where G is Fermi's weak coupling constant, M ac the nuclear matrix element and see Eq. (2.158) of Ref. [33] , where we defined q = Q/m e = (m a − m c )/m e .Further, F (±Z, E) is the socalled Fermi-function with the definitions where Z is the atomic number of the daughter nucleus c and R its radius, see Eqs. (2.121)-(2.125),(2.131) of Ref. 
[33] .The upper/lower sign holds for β − /β + decays, respectively.For Z α ≪ 1 we can approximate or, with such that lim Z→0 F 0 (±Z, E) = 1.Accordingly, setting a = 2π Z α then Defining also p = q 2 − 1 (i.e. the maximal momentum in β-decay divided by the electron mass) and with the substitution y = √ x 2 − 1/p we can rewrite the expression for f (±Z, q) as which is slightly better suited for a numerical implementation, e.g with a Gauß-Legendre-integrator.Note that both a and p depend on α. Electromagnetic contribution to the proton-neutron mass difference The neutron-proton mass difference plays an important role in BBN, see e.g.[41].According to Refs.[34,35] the proton-neutron mass difference is given by where the nominal electromagnetic contribution is somewhat smaller than the value ∆m QED = 0.7 ± 0.3 MeV given earlier in Ref. [36]. We also note that the splitting in strong and electromagnetic contributions is convention-dependent, for a pedagogical discussion see [37].For a comparison of these results with lattice QCD and other phenomenological determinations of the electromagnetic contribution to the neutron-proton mass difference, we refer to [34]. The neutron-proton mass difference is a crucial parameter both in the various n ↔ p weak interactions in the early phase of BBN and in all β-decays.We shall start with a discussion of the latter. Implications for β-decays Writing the Q-value for the β-decay a → c + e ∓ + (−) ν e depends on a variation of α = α 0 (1 + δ α ) as where, as in Sect.II B, B N i is the nuclear (strong) contribution to the binding energy of nuclide i and V C i the expectation value of the Coulomb-interaction to the binding energy of nuclide i . One thus finds for the variation of the β-decay rate with a variation of α, where and the factor determining the variation of the β-decay rate with a variation of α , see Eq. ( 57), is determined by evaluating f (ã, p) and f (a, p) via Eq.( 53) .We note that ≥ m e implies an upper limit for δ α : As for other cases where during a variation of α the Qvalue of a reaction becomes negative, we have put the corresponding rate to zero.We also note that for the neutron decay n → p+e − +ν the variation of the rate with a variation of α merely implies a variation of the neutron lifetime τ n ∝ 1/λ n→p . Implications for the weak n ↔ p reactions As detailed in Ref. [26] the six reactions determine the evolution of the neutron abundance in the early phase of BBN and hence are crucial for all other primordial nuclear abundances.Assuming local thermodynamical equilibrium in terms of a temperature T and a distinct neutrino temperature T ν in the so-called infinite nucleon mass approximation the n → p (angular averaged) reaction rate can be written, see e.g.[26] for details, as with the Fermi-Dirac distribution function.The ratio T ν /T follows from the cosmological evolution, see the black 2. Variation of the rate ratios Rn→p (Eq.( 63), blue hatched area) and Rp→n (Eq.( 64), red hatched area) with decreasing temperature (T9 = T /[10 9 K]) for δα in the range δα = −0.1 (lower curves) up to δα = 0.1 (upper curves).Also shown is the ratio Tν /T (black curve).curve in Fig. 2. The constant K is fixed by requiring that Γ n→p (∆m; 0) = 1/τ n , with τ n the neutron lifetime.The p → n rate is simply given by substituting ∆m → −∆m in Eq. ( 61) above.In this case Γ p→n (∆m; 0) = 0 .As discussed in [26] and [31] there are a number of corrections to the n → p and p → n rates as given above, viz. 
the Coulomb correction (as discussed above in section II E 2), electromagnetic radiative corrections, finite nucleon mass corrections, plasma corrections and non-instantaneous neutrino decoupling effects.Some of these involve the fine-structure constant α, but since these effects are small corrections anyway, the most relevant effect when varying α is through the change ∆m(α) = ∆m(α 0 ) − ∆m QED δ α .This effect is illustrated in Fig. 2, where the double rate ratios and obtained by a numerical integration according to Eq. ( 61) (with a method similar to that of Eq. ( 4)) are plotted as a function of T 9 = T /[10 9 K]).This double ratio was chosen such that for T → 0 the n → p curves tend to unity and the p → n curves to zero; the α-dependence of the n → p rate in this low-temperature limit is then given by the expressions in the preceding section II E 2. As is evident from this figure the variation of the rates with varying α is non-linear and strongly temperature dependent. F. The n + p → d + γ reaction Fortunately, for the n + p → d + γ reaction an accurate treatment within the framework of pionless EFT [38,39] is available.Accordingly, for this leading nuclear reaction in the BBN network it is possible to study dependences of the cross section and hence of the reaction rate on various nuclear parameters, such as the binding energy of the deuteron, np scattering lengths, effective ranges etc. as was done in [19].Here we shall focus on the α-dependence. The cross section was given in [39] as where p is the relative momentum, m N = (m p + m n )/2 denotes the nucleon mass, γ = √ B d m N is the so-called binding momentum, with B d = 2.225 MeV the binding energy of the deuteron, and , are the dimensionless amplitudes for isovector electric dipole, isovector magnetic dipole, isoscalar magnetic dipole and isoscalar electric quadrupole contributions, respectively.For the energies relevant in BBN only the isovector contributions are significant and these were calculated at N4LO and N2LO for the electric and magnetic parts, respectively.The overall theoretical uncertainty is claimed to be better than 1% for CMS energies E ≤ 1 MeV .The expression of Eq. ( 65) with all terms included was used to calculate the cross section for this reaction throughout the present investigation. Concerning the variation of this cross section when varying α = α 0 (1 + δ α ) it is evident that the dominant effect is simply Note that there is no Coulomb-contribution to the binding energy of the deuteron, while the expectation value ⟨v EM ⟩ of the electromagnetic interaction, mainly due to the magnetic dipole-dipole interaction moment term, see [40] for a treatment based on the Argonne v 18 nucleon-nucleon potential, is very small, ⟨v EM ⟩ = 0.018 MeV.Hence the effects of a change of the Q-value of the reaction with varying α, as discussed in the previous subsections, are considered to be negligible in the present context.Moreover, in the expression of Eq.( 65), as well as in the expressions for the amplitudes χ of Ref. [39] the nucleon mass m N = (m p + m n )/2 occurs at various instances.Although a moderately accurate value for the electromagnetic contribution to the neutronproton mass difference is available (and was, in fact, used in our discussion of the β-decays in Sect.II E 1), only rough estimates are available for the electromagnetic contribution to the neutron and proton mass separately.In Eq. ( 12.3) of Ref. 
[42] the estimates m Born p ≈ 0.63 MeV, m Born n ≈ −0.13 MeV (with an estimated accuracy of ≈ 0.3 MeV) for the total electromagnetic self-energy can be found, which, via m Born N ≈ 0.25 MeV , would imply that m N varies with α (putting m N ≈ 1 GeV for this estimate) as For |δ α | < 0.1 , as considered here, this would lead to effects well below the theoretical accuracy quoted above and therefore this effect was neglected and the variation of the n + p → d + γ cross section with α is assumed to be entirely given by Eq. ( 66) . G. Coulomb energies A variation of the value of the fine-structure constant α implies a variation of the nuclear binding energies and hence a variation of the Q-values of the reactions, which in turn leads to a variation of the cross sections and the corresponding rates.Therefore the present study requires an estimate of the electromagnetic contribution to the nuclear masses or equivalently to the nuclear binding energies.A rough estimate is provided by the Coulomb term in the Bethe-Weiszsäcker formula (for a recent determination, see e.g.[43] and references therein): approximately accounting for the Coulomb repulsion by the protons in a nucleus.However, this formula is not very precise when applied to the light nuclei relevant here.We therefore prefer to use the expectation values of the Coulomb interaction as determined from a recent ab initio calculation of light nuclear masses in the framework of Nuclear Lattice Effective Field Theory (NLEFT) [44] , listed in Table I .We also compare the calculated binding energies to the experimental data as used here in order to give an impression of the quality of the calculation. III. CALCULATION OF THE REACTION RATES For the 18 a from [44].The errors quoted in parentheses include all the statistic and systematic uncertainties.In case of 2 H, the error is entirely given by the variation of the np phase shifts at N3LO within their uncertainties.b from [45], as used in the present work.c extrapolated from a least-squares fit to the other data with the rates and their variations with α are calculated by a numerical integration of Eq. ( 38) and tabulated for 60 temperatures in the range 0.001 ≤ T 9 = T /[10 9 K] ≤ 10.0.These values were then used via a cubic spline interpolation in the four publicly available BBN codes as outlined in Sect.IV.The resulting rates and their variations with α = α 0 (1 + δ α ) in the range δ α ∈ [−0.1, 0.1] are displayed in Fig. 1 in Sect.II D. To this end, we made new fits to the cross sections (or equivalently of the corresponding astrophysical S-factors) for the reactions listed above.The parameterizations can be found in Appendix A. In addition in Fig. 3 the resulting reaction rates for α = α 0 are compared to the rates implemented in the original versions of the four programmes considered here.In Fig. 3 we also display the rates obtained with the NACRE II database, see [46], which served as a further check on our calculated reaction rates at α = α 0 . The rates of all other reactions were taken as in the original implementation of the codes and the variation of the rates with α was calculated as discussed in Sect.II D. The variation of the β-decay rates according to Eq. ( 57) was implemented directly in the various codes.In Fig. 
4 it is shown how the β-rates at low temperature (i.e.T ≪ T 9 ) change by a variation of α = α 0 (1 + δ α ).In particular the rates of the tritium decay and the 14 Cdecay strongly depend on the value of δ α , the effect of (relatively large) changes in the (relatively small) Qvalues due to changes in the Coulomb contribution to the binding energies being dominant. As already touched upon in section II E 3 the variation of the weak n ↔ p rates with α is dominated by the variation of proton-neutron mass difference with α and is strongly temperature dependent in the early phase of BBN.In the default version of the Kawano code NUC123 [20] this temperature dependence is parameterized as outlined in Appendix F of Ref. [20], but a numerical integration along Eq. ( 61) can be enforced and was in fact used to implement the α-dependence of these rates.The PArthENoPE code [23][24][25] contains a slightly more sophisticated parameterization, see e.g.Appendix C of Ref. [31], accounting also for some higher order corrections.Here we used the α-dependence of the n ↔ p rates as illustrated in Fig. 2 as a factor multiplying the parameterized rate.In the AlterBBN code [21,22] the temperature dependence of the weak n ↔ p rates was already determined numerically as in Eq. ( 61) and the α dependence can be accounted for by an appropriate variation of ∆m .In this code also the Coulomb correction, see Eq. ( 57), was included in the integrand of Eq. ( 61), but this was found to have no significant impact on the final abundances to be discussed below in section V.The PRIMAT [26] implementation offers the possibility to study the α-dependence of the weak n ↔ p reactions in all detail including all the higher order electromagnetic corrections mentioned in section II E 3 .In fact this code was used to verify that the variation of the rates through the variation of ∆m with α as discussed in section II E 3 is indeed the dominant effect.Indeed, ignoring the α dependence in the higher order corrections implemented in PRIMAT led to response coefficients that differ at most by 0.5% from the values listed in Table III below.Accordingly, in spite of the fact that the n ↔ p reactions are treated at various levels of sophistication, the resulting primordial abundances and their variation with α, to be discussed in section V, were found to be rather consistent. IV. THE BBN RESPONSE MATRIX We estimated the linear dependence of the primordial abundances Y n on small changes in the value of the finestructure constant α = α 0 (1 + δ α ) by calculating the abundance of the nuclide n, with n ∈ { 2 H , 3 H + 3 He , 4 C) as well as an implementation as a mathematica-notebook, PRIMAT [26] .To this end we performed least-squares fits of a quadratic polynomial to the abundances: such that will be called an element of the linear nuclear BBN response matrix.It represents the dimensionless fractional change in the primordial abundance ratio Y n /Y H due to a fractional change α in linear approximation.Deviations from a linear response are then given by the coefficient c 2 . V. RESULTS AND DISCUSSION In most of what follows we shall use η = 6.14 • 10 −10 from Ref. [30] as the nominal baryon-to-photon density ratio while varying α.The programs were modified as indicated in Sect. 4 of Ref. [19] and the rates for the most relevant reactions listed in Sect.III, resulting from the new fits of the cross sections presented in Appendix A, were used in all programmes. The resulting nominal (i.e. 
at α = α 0 ) abundances at the end of the BBN epoch in terms of the number ratios Y2 H /Y H , Y3 H+ 3 He /Y H , Y6 Li /Y H , Y7 Li+ 7 Be /Y H , and the mass ratio for 4 He are compared to the values quoted in Ref. [19] and experimental data in Table II .Although the mass ratio for 4 He and, to a minor extend, the number ratio for deuterium did not change significantly with respect to the values obtained in Ref [19], the 3 H + 3 He number ratio increased by approximately 10% , the 6 Li number ratio was found to be larger by about 70 − 80% , while the 7 3. Reaction rates γ(T9) for 18 leading nuclear reactions in the BBN network, where T9 = T /[10 9 K] .The rates resulting from the new parameterizations of the S-factors in Appendix A are represented by solid red curves (color online).The rates in the original version of the programmes are given by green curves for NUC123 [20], magenta curves for PArthENoPE [25], blue curves for AlterBBN [22] and cyan curves for the PRIMAT [26] code.Also shown as a thin black curve is the result from the NACRE II database, see [46] .a factor of three, a phenomenon known as the lithiumproblem, which is thus unsolved even using the updated cross sections used here.As stated previously in [19], in spite of this unresolved issue in BBN the consistency of the cosmic microwave background observations with the determined abundances of deuterium and helium is considered to be a non-trivial success.Accordingly, we think that this issue is no obstacle for the study presented here. The elements of the response matrix were then determined by a polynomial fit, as explained above in Sect.IV for the abundances relative to the hydrogen abundance, namely Y2 H /Y H , Y3 H+ 3 He /Y H , Y6 Li /Y H , Y7 Li+ 7 Be /Y H , and the mass ratio for 4 He . The dependence of these ratios on the value of the fine structure constant α = α 0 (1 + δ α ) is displayed in Fig. 5 abundance ratios is found to be very similar for all four publicly available codes, in spite of the fact that these codes differ in details, such as the number of reactions in the BBN network or the manner in which the rate equations are solved numerically.Note, however, that in the present study the rates calculated for the major reactions listed in Sect.III and their variation with α are the same. Of course this then also applies to the values for the resulting response matrix elements.The response matrix elements ∂ log (Y n /Y H )/∂ log α = c 1 and the coefficients of the quadratic term in Eq. ( 73) are given and compared to some results from the literature in Table III.Note that with the exception of 6 Li, we have |c 2 | ≃ |c 1 |, so that due to the smallness of α, the second order contribution to the response is of minor importance.All programs were run with the full network implemented in the original version codes.We checked that if we run the programs with a smaller network the results listed in Tables II,III change only in the last digit and therefore conclude that the approximation, see Sect.II D, we made for rate changes in the reactions beyond the reactions listed in Eqs.(69)(70)(71) are without any effect for the present investigation. 
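A hedged sketch of the fitting procedure behind Eq. (73): the abundance ratio is computed on a grid of δ_α values and a quadratic polynomial in δ_α is fitted to log(Y_n/Y_H); for small δ_α the linear coefficient approximates the response matrix element c_1. The abundance values below are invented placeholders (the deuterium-like slope c_1 ≈ 3.6 is taken from Table III as quoted in the text); in practice the grid of abundances comes from repeated runs of the BBN codes.

```python
# Hedged sketch of the response-matrix fit, Eq. (73).  The abundance values below are
# invented placeholders; in the paper they come from repeated BBN runs at each delta_alpha.
import numpy as np

delta_alpha = np.linspace(-0.1, 0.1, 11)                 # grid of fractional changes in alpha
Y_ratio = 2.55e-5 * np.exp(3.6 * delta_alpha             # toy deuterium-like response (c1 ~ 3.6)
                           + 2.0 * delta_alpha**2)       # with a small quadratic term

# Fit log(Y_n/Y_H) with a quadratic polynomial in delta_alpha; for small delta_alpha the
# linear coefficient approximates c1 = d log(Y_n/Y_H)/d log(alpha).
c2_fit, c1_fit, c0_fit = np.polyfit(delta_alpha, np.log(Y_ratio), deg=2)
print("c1 =", c1_fit, " c2 =", c2_fit)
```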
Apart from the values of c 1 for 2 H(≈ 3.6) and for 6 Li(≈ 6.8) the values obtained in the present study, although consistent among each other, differ appreciably from the values obtained in Refs.[7,8,10].In particular in the present calculations the linear response for 3 H + 3 He is much larger while the linear response for 7 Li + 7 Be is appreciably smaller in magnitude, although there seems to be at least a consensus concerning the sign. In order to clarify this issue, we shall discuss in some detail the relevance of the various factors that reflect the α-dependence of the nuclear rates: • First of all we list in Table IV the linear response of the BBN abundances to a variation of α in the β-decay rates only. • In Table V we display the linear response of the BBN abundances to a variation of the nuclear reaction rates.The relevance of the variation of the binding energies with α may be appreciated by the linear response due to changes in α accounting for the effects due to the Coulomb penetration factors only, i.e. without accounting for changes in the binding energies, listed in Table VI .Here we also compared our results to the results presented in Table I of Ref. [10] for the dependence of the abundances on the nuclear rate variation with α, that thus differ from our results significantly for c 1 ( 3 H + 3 He) and c 1 ( 7 Li + 7 Be) , our results being larger in magnitude for the former and smaller for the latter. Indeed, if we substitute our values, as well as the results we obtained in [19] for the linear response of the abundances on binding energies and the neutron life-time τ n for the values of the response matrix C of Table I in [10] and furthermore account for the smaller value ∂ log τ n /∂ log α ≈ 2.90, obtained via Eq.( 57) (instead of 3.86 in [10]) and the smaller value ∂ log Q N /∂ log α ≈ −0.45 (instead of −0.59 in [10]), due to the smaller new value for ∆m QED , and use our values for the response of the binding energies ∂ log B i /∂ log α that are smaller by about 10% in Table IV of [10] we find approximately for the linear responses , AlterBBN [22], PArthENoPE [25], PRIMAT [26] .Here, we use η = 6.14 • 10 −10 and τn = 879.4s .Also shown are the solid curves obtained by the fits according to Eq. ( 73) with the parameters listed in Table III.The experimental values cited in PDG [30] (thick red lines) are indicated by yellow-highlighted regions (color online) representing the 1σ limits by red lines. rather close to our values for c 1 given in Table III.Most of the effects listed above are, although significant, of minor importance only, and accordingly the difference can be traced back to the fact that our results for the variation of the rates with a variation of α when ignoring the effects based on Q-value changes, as listed in Table VI differ appreciably from those of [10] .Unfortunately in the latter reference no results on the α-dependence of the rates are explicitly given.In appendix A.2 of [10] it is mentioned that parameterizations of the S-factors were used, the parameters determined by fitting the NETGEN rates as closely as possible.In order to check our parameterizations of the nuclear rates we compared our rates with results generated by the NETGEN-tool [46] in Fig. 3 and found that these are indeed compatible for all reactions, except for the reaction 7 Be + n → 4 He + 4 He, where the NETGEN-tool merely uses the THALYS nuclear reaction model [47].We instead used data, see also Fig. 
26 for our fit of the S-factor.Therefore the difference must be due to the different way the Coulomb penetration effects are 73) at η = 6.14 • 10 −10 and τn = 879.4s .Yn/YH are the number ratios of the abundances relative to hydrogen; Yp is conventionally the 4 He/H mass ratio.The results obtained with the four BBN codes NUC123 [20], PArthENoPE [25], AlterBBN [22], PRIMAT [26] are given in four subsequent rows and compared to earlier results from Refs.[7,8,10].treated.Note that, as emphasized in Sect.II A 1, we did not rely on temperature-independent penetration factors taken as a Gamow-factor, but rather accounted for the penetration dependences in the cross-section, which then leads to temperature-dependent changes in the rates. Our results also differ from the results in Refs.[8] and [7] published even earlier.Concerning the treatment in [8], it is noted that, although the authors present a detailed discussion of the α-dependence in the penetration factors, even accounting for additional αdependent effects due to the peripheral nature of some radiative capture reactions such as e.g. the 3 He + 4 He → 7 Be + γ , an effect taken into account also in the present treatment.Nevertheless, in contrast to our treatment, α-dependent effects seem to be treated merely by temperature-independent factors multiplying the rates.In Ref. [7], the changes in the reaction rates due to changes in α were treated through approximate expressions based on expansions of the S-factors, whereas we preferred to make no further approximations beyond the modeling of the penetration factors discussed in Sect.II A 1. Note that a comparison with the work of [9] is not possible, since there any variation of the fine-structure constant is tied to the variation of certain Yukawa couplings. All in all our results indicate that the BBN abundance for 7 Li + 7 Be is less sensitive and the abundance of 3 H+ 3 He is more sensitive to variations of the value of the electromagnetic fine-structure constant α than what was determined earlier.Note that such a reduced sensitivity on nuclear quantities, such as binding energies etc. was also observed in [19] .There it was also found that this is mainly due to inclusion of the temperature-dependent changes in the rates.Unfortunately, the primordial abundance of 3 H + 3 He is not known precisely enough to lead to any implications and the nominal prediction for the 7 Li + 7 Be abundance is too large anyway. 
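Since the paper's Eq. (19) is not reproduced legibly in this excerpt, the sketch below assumes the standard s-wave Coulomb factor C_0²(η) = 2πη/(e^{2πη} − 1), written as x/(e^x − 1) with x = √(E_G/E) and Gamow energy E_G = 2µc²(παZ_aZ_b)², and contrasts its response to a shift of α with that of the plain Gamow factor e^{−x} discussed in Sect. II D. The example channel (d + p) and the value δ_α = +0.05 are chosen purely for illustration.

```python
# Hedged sketch contrasting an s-wave Coulomb penetration factor with a plain Gamow factor.
# The form C0^2(eta) = 2*pi*eta / (exp(2*pi*eta) - 1) is assumed as a stand-in for Eq. (19);
# x = sqrt(E_G/E) with Gamow energy E_G = 2 mu c^2 (pi alpha Z_a Z_b)^2.
import numpy as np

ALPHA0 = 1.0 / 137.035999      # fine-structure constant

def gamow_energy(Za, Zb, mu_c2_MeV, delta_alpha=0.0):
    alpha = ALPHA0 * (1.0 + delta_alpha)
    return 2.0 * mu_c2_MeV * (np.pi * alpha * Za * Zb) ** 2   # [MeV]

def penetration(E_MeV, Za, Zb, mu_c2_MeV, delta_alpha=0.0):
    x = np.sqrt(gamow_energy(Za, Zb, mu_c2_MeV, delta_alpha) / E_MeV)
    return x / np.expm1(x)                  # s-wave Coulomb factor, x/(e^x - 1)

def gamow_factor(E_MeV, Za, Zb, mu_c2_MeV, delta_alpha=0.0):
    x = np.sqrt(gamow_energy(Za, Zb, mu_c2_MeV, delta_alpha) / E_MeV)
    return np.exp(-x)                       # simple Gamow suppression, for comparison

# Example: d + p channel (Z = 1 each, reduced mass ~626 MeV) at E = 0.1 MeV;
# relative change of both factors under delta_alpha = +0.05.
for f in (penetration, gamow_factor):
    ratio = f(0.1, 1, 1, 626.0, 0.05) / f(0.1, 1, 1, 626.0, 0.0)
    print(f.__name__, ratio)
```

Because the factors enter the rate integral at every energy, the resulting change of the rate with δ_α is temperature dependent, which is the point stressed in Sects. II A 1 and II D.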
If we focus on the deuterium and 4 He abundance ratios alone we can extract from the observationally based data bounds on the variation δ α of the value of the finestructure constant as listed in Table VII for the four programs considered here, showing that one can allow for a variation of the fine-structure constant α by less than 2% on the basis of the results obtained with all programs considered here using the current value for the baryonto-photon ratio η = 6.14•10 −10 , given in [30] .The values ) limits for the variation δα of the fine-structure constant α = α0 (1 + δα) determined such that the resulting abundance lies within the error bounds of the observationally based abundance ratios for 2 H and 4 He given in [30].for the 4 He mass ratio Y p obtained with all four programs are rather consistent and the range [−0.018, 0.006] which is more restrictive than the rough estimate |δ α | < 0.1 quoted in [7,8] and the limit |δ α | ≤ 0.019 mentioned in [10].The values found on the basis of the deuterium number ratio show a larger spread, mainly because the nominal values, see Table II, vary more strongly for the four programs.In spite of this we can determine the range [−0.007, 0.008] , also still more restrictive than the (1σ) range [−0.04, 0.10] of [8].Our new restrictions on the variation of α are also stronger than found earlier in the NLEFT analysis of the triple-alpha process in hot, old stars [48,49].From a comparison of Tables IV-VI we also see that the linear response for Y p due to variations in the βdecay is the dominant effect.Indeed, as argued in [7], the variation of Y p with α mainly depends on the variation of the proton-neutron mass difference with α , i.e. on δm QED that enters the n → p weak decay. As was done previously in Ref. [8] we also studied to what extend the results presently obtained vary with variations of the baryon-to-photon ratio η and found that our results for the linear response coefficients c 1 do not change significantly if η is varied within the error range quoted in [30], η 10 = η • 10 10 = 6.143 ± 0.190 .With the values of the primordial abundance ratios for d, 4 He and 7 Li+ 7 Be mentioned in PDG [30] we can derive parameter ranges for restricting δ α and η as presented in Figs.6-8 .Note that we here allowed for a variation of η well beyond the currently accepted limits quoted in [30] .The results are similar to those obtained in Ref. [8] although the regions of possible values for the δ α -and η-values are narrower here due to the newer, more precise observational data quoted in [30] .The comparison of these results again show that the value of the Li/Be abundance is incompatible with the other data and we therefore refrain from any conclusions concerning possible variations of α on the basis of the 7 Li observation. VI. 
SUMMARY In the present paper we investigated the impact of variations in the value of the fine-structure constant α on the abundances of the light elements, viz.n + p → d + γ reaction in the framework of pionless EFT.For all other reactions of this kind we rely on modifications of experimentally determined reaction cross sections, trying to account for electromagnetic effects, such as penetration factors, modeling the suppression due to the Coulomb barrier in channels involving charged particles as well as changes in the binding energies of nuclides due to the Coulomb repulsion of the protons and hence the Q-values of the nuclear reactions where these are involved.To this end we made new parameterizations of the cross sections of the 18 leading nuclear reactions in the BBN network using current experimental data compiled by EXFOR.We made an assumption about the α dependence of the penetration factors which differs from the Gamow-factor form that was used in previous investigations and used novel estimates for the Coulomb contribution to nuclear binding energies based on a recent ab initio calculation in the framework of NLEFT in order to determine the α-dependence of the nuclear binding energies and the corresponding Q-values.A further new ingredient for studying the α-dependence of the weak β-decays in the BBN network is a novel value for the electromagnetic contribution to the neutron-proton mass difference, which is slightly smaller than what has been used before.All these new inputs were then used to determine the variation of the reaction rates with varying α.Here, we found in particular that the variation of the reaction rates depends on the temperature, a feature that seems to have been ignored in previous investigations.We found consistent results with all four codes mentioned above and hence conclude that the model-dependence concerning the specific treatment of the BBN network is of minor importance for the α-dependence of the primordial abundances studied here.The results for the linear response do, however, deviate significantly from older results, in particular for the α-dependence of the abun-dances of 3 H + 3 He and 7 Li + 7 Be, the former being much larger and the latter much smaller than found previously.Unfortunately, in the standard Big Bang scenario used here, the nominal abundance ratio Y7 Li+ 7 Be /Y H exceeds the current observationally based determination by a factor of three, a feature known as the lithium-problem that is not solved in the present treatment.This then also impedes a determination of consistent bounds on the value of the fine-structure constant from all available primordial abundance data.Using the observations for 2 H and 4 He alone, we can nevertheless state that these data would limit a possible variation of α to |δ α | < 0.02.This is a stronger bound than found earlier in comparable investigations. 
An investigation of the kind presented here heavily relies on the modeling of electromagnetic effects in the cross-section data (or, equivalently, the astrophysical S-factors) of the relevant nuclear reactions in the BBN network. Here we opted for a specific form of the Coulomb penetration factors that differs from the Gamow factors used before, and we stressed the relevance of the temperature dependence of the α-variation of the reaction rates, which results from numerically integrating γ(α; T) ∝ ∫₀^∞ dE E σ(α; E) exp(−E/kT). It seems that further progress with the purpose of using primordial nucleosynthesis as a laboratory for exploring our understanding of fundamental physics, apart from astrophysical or cosmological aspects, will be feasible only if ab initio theories describing the relevant nuclear reactions including electromagnetic effects become available. NLEFT appears to be a promising framework for doing just that, see e.g. Refs. [50,51].

The cross section for the leading nuclear reaction of BBN, namely n + p → d + γ, was calculated according to the formulas, viz. Eqs. (3.3)-(3.16), given in [39] with the parameters quoted there; also see Sect. II F. In Fig. 9 this description is compared to the existing data as compiled in [54].

Other radiative capture reactions

The parameters of Eqs. (A2, A3) found by a fit to the data are displayed in Table VIII for most radiative capture reactions treated here. The parameterizations are compared to experimental data compiled by EXFOR [54].

FIG. 10. Fit (red curve, color online) of the S-factor for the d + p → 3He + γ reaction compared to data compiled by EXFOR [54].

The only exception is the reaction d + 4He → 6Li + γ, where a resonance appears. In this case the S-factor is given by the sum of a cubic polynomial in E and a relativistic Breit-Wigner function, with the parameters listed in Tables VIII and XI. This parameterization is compared to experimental data compiled by EXFOR [54] in Fig. 15.

FIG. 13. Fit (red curve, color online) of the S-factor for the 3He + 4He → 7Be + γ reaction compared to data compiled by EXFOR [54].

FIG. 14. Fit (red curve, color online) of the S-factor for the 6Li + p → 7Be + γ reaction compared to data compiled by EXFOR [54].

TABLE IX. Fit parameters of the S-factor, see Eq. (A1), according to Eqs. (A2, A3, A7, A8) and (A9) for charged-particle reactions. S0 is given in MeV mb; a_k and q_k in units of MeV^−k. For these reactions q3 = 0.

For this reaction the fit of the S-factor is compared to experimental data compiled by EXFOR [54] in Fig. 24.

For the reaction 7Be + n → p + 7Li a parameterization in terms of a non-relativistic Breit-Wigner function and a polynomial in √E was used, where again the CMS energy E is given in MeV and the parameters of the Breit-Wigner function can be found in Table XI. For this reaction the S-factor fit is compared to experimental data compiled by EXFOR [54] in Fig. 25.

Finally, an analogous parameterization is used for the reaction 7Be + n → 4He + 4He; the resulting S-factor is compared to experimental data compiled by EXFOR [54] in Fig. 26.
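As a purely illustrative complement to the discussion of the temperature dependence, the following sketch evaluates the rate integral quoted above for a toy charged-particle cross section (a constant S-factor times a Gamow-like penetration factor whose strength scales as α²). It is not the parameterization used in this work; it merely shows why a shift in α changes the rate by a temperature-dependent amount.

```python
# Illustrative-only: numerical evaluation of gamma(alpha; T) ~ \int dE E sigma(alpha; E) exp(-E/kT)
# for a toy cross section, showing that the response to a shift in alpha depends on temperature.
import numpy as np
from scipy.integrate import quad

def toy_sigma(E, alpha_ratio, S0=1.0, E_G0=1.0):
    """sigma(E) = S0/E * exp(-sqrt(E_G/E)), with a Gamow energy E_G that scales as alpha^2."""
    E_G = E_G0 * alpha_ratio ** 2
    return S0 / E * np.exp(-np.sqrt(E_G / E))

def rate(kT, alpha_ratio):
    integrand = lambda E: E * toy_sigma(E, alpha_ratio) * np.exp(-E / kT)
    val, _ = quad(integrand, 1e-6, 50.0 * kT)
    return val

for kT in (0.01, 0.05, 0.1):  # temperatures in MeV, purely illustrative
    rel = rate(kT, 1.01) / rate(kT, 1.00) - 1.0
    print(f"kT = {kT:4.2f} MeV: relative rate change for a 1% increase in alpha = {rel:+.2%}")
```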
TABLE I. Binding energies B (calculated (cal) and experimental (exp) values) and expectation values for the Coulomb interaction V_C of light nuclei.

TABLE III. BBN response matrix c1 = ∂ log(Yn/YH)/∂ log α and the coefficients c2 of the quadratic term in Eq. (…).

TABLE IV. BBN response matrix c1 = ∂ log(Yn/YH)/∂ log α accounting for the variation of the β-decay rates only. See also the caption of Table III.

TABLE V. BBN response matrix c1 = ∂ log(Yn/YH)/∂ log α accounting for the variation of the nuclear rates only, but also including the variation of the binding energies and thus of the Q-values of the reactions. See also the caption of Table III.

TABLE VI. BBN response matrix c1 = ∂ log(Yn/YH)/∂ log α accounting for the variation of the nuclear rates only, but excluding the variation of the binding energies. See also the caption of Table III.

TABLE VIII. Fit parameters of the S-factor, see Eq. (A1), according to Eqs. (A2, A3) and Eq. (A6) for radiative capture reactions. S0 is given in MeV mb; a_k and q_k in units of MeV^−k.

TABLE X. Fit parameters of the function S[n], see Eq. (A10), according to Eqs. (A12, A3) for neutron-induced reactions. S is given in MeV^(1/2) mb, and a_k and q_k in units of MeV^−k. (a) Here S[n]_0 is given in units of MeV mb; it is to be divided by √E (E in MeV) in order to yield S[n].

FIG. 18. Fit (red curve, color online) of the S-factor for the 3H + d → n + 4He reaction compared to data compiled by EXFOR [54].
Management of information processes under a radiation emergency

The article provides examples of radiation emergency situations (ES) and of emergency response measures aimed at protecting the public and territories, including the applicability of a comprehensive monitoring system of public security conditions. Special attention is paid to scientific approaches and research results aimed at optimizing information processes during the operational use of data from monitoring systems on the status of conditions under a radiation ES. In addition, the expediency and sufficiency of using two report-producing data sources was determined and justified for the case in which one of the sources is a monitoring system. Based on the research results, other provisions concerning the management of information processes under radiation ES conditions, related to the requirements for operational public notification, were also justified: in particular, schemes for constructing such systems under specific radiation ES conditions, algorithms and modes of their operation, and some parameters that these systems should provide. The information subsystem of the CSPPCM is such a system for prompt public notification, and the research results are directly relevant to it.

Introduction

The development of the nuclear power industry, as well as the widespread use of radioactive sources in industry and medicine, has not only improved the quality of human life but has also created new technogenic risks associated with the onset of radiation accidents (RA), incidents involving the release of radioactive substances (RS) and environmental pollution. Four grave radiation accidents have happened since nuclear power plants (NPP) started operating: Windscale (UK, 1957), Three Mile Island (USA, 1979), Chernobyl (USSR, 1986) and Fukushima Daiichi (Japan, 2011). To protect the public and the environment from radioactive contamination, it is necessary to consider various types of RA: accidents at nuclear and power installations, industrial facilities, research and medical facilities, accidents with unattended sources, and others. Radioactive contamination of the environment becomes a substantial environmental factor that affects people's health and living conditions in the exposed territories, leads to significant economic damage and has long-term negative consequences of a socio-economic and political nature [1,2,3]. Radiation accidents belong to the category of emergency situations (ES), in which decisions on whether to conduct or continue previously initiated emergency response measures are based on the results of radiation surveys, the data received on the levels of RS in the surrounding environment, and the subsequent estimation of the external and internal exposure doses of the population. The timeliness, reliability and completeness of survey and monitoring data are therefore important, as they enable a proper assessment of the radiation conditions and adequate decision-making in response to the developing situation. The radiation monitoring systems installed at nuclear fuel cycle facilities and in territorial components of the economy play a key role here. The collection of reliable information for effective decision-making depends on the availability and technical condition of monitoring systems, as well as on the minimum time required to obtain the information.
As practice shows, the minimum time to obtain reliable data on a radiation ES is up to 1.5 hours if a complete monitoring system is used, whereas more than a day is required without one. For example, during the accident at the Mayak production association in 1957, when a chemical explosion occurred in a storage tank for highly active liquid radioactive waste, data on the real scale of the consequences of the accident were obtained late, in fact more than three days after the accident. The lack of prompt and reliable information on the level and nature of the radioactive contamination did not allow the timely implementation of protective measures [4,5]. Unfortunately, the response to the accident at the Fukushima-1 nuclear power plant in 2011 did not take into account the experience of previous ES, including Chernobyl. The lack of proper coordination among the various elements of the emergency response system left the officials responsible for decision-making in the initial period of the emergency without adequate assessments, forecasts and recommendations on protection measures, and left the public without prompt and correct notification of the current and predicted status of the ES. Effective measures to notify decision-makers (and the population in general) of the measured values (dose loads on personnel, the population and the environment) and of the predicted radiation doses, including their relation to specific health hazards, so that reasonable protective measures could be taken, were not implemented. The experience of ES recovery demonstrates that relieving public anxiety helps to mitigate both the radiological and the non-radiological consequences of an accident [6]. In a radiological ES, the gravity of the damage and consequences is determined not only by the impact of ionizing radiation on the human body but also by non-radiation factors. In radiation-hazardous situations, even if the radiological risks are insignificant, the heightened perception of these risks by the public, as well as by the officials responsible for decisions on public protection measures, is a specific factor of vulnerability in the socio-economic sphere. This imposes special requirements on the precise assessment of the developing ES, on an adequate and timely response, and on effective and competent public notification. The basic measures aimed at preventing and eliminating the consequences of RA are carried out in the daily-activity mode, the high-alert mode and the ES mode, taking into account the features of and requirements for ensuring radiation safety [7]. Given the rapid development of the information society, there is a real need to change the approaches and methods of information handling, especially under threat and ES conditions. The main goals of the development of the public notification system of EMERCOM of Russia are: timely delivery to the public of information on public and territorial security issues, both during daily activities and under high-alert conditions, threats or an evolving ES; provision of information of maximum accuracy and reliability and maintenance of public trust in the executive state bodies under crisis conditions, including the timely recognition and prevention of the spread of unreliable and false information; and targeted information delivery, that is, the ability to provide information both to the general population and to specific target audiences. The development of information technology has led to new approaches to notification and informing about a threat or ES.
Formerly the public obtained information primarily through official media; at present, the speed with which information spreads and the ability to obtain it, including information that is not entirely reliable, have increased. In this regard, it should be noted that the Fukushima-1 nuclear power plant accident caused quite an acute public resonance in the Far Eastern Federal District of Russia, although the distance to the accident site was more than a thousand kilometres. Owing to prompt public information actions taken by representatives of EMERCOM of Russia and the Rosatom State Corporation jointly with the media, based on timely forecasts of the development of the ES, it was possible to stabilize the situation [8]. The immense capacity of the global community to disseminate information must be taken into account when responding to an ES. The management of information processes under a radiation ES should be based, first of all, on a sufficient volume of timely and reliable information about the status of conditions, so that the situation can be adequately assessed and correct decisions can be made regarding the response to the ES and the protection of the public. The timely acquisition, processing and delivery of reliable information to users is one of the key factors in ensuring public safety and the safety of material assets at the prevention, occurrence and elimination stages of an ES [9].

Materials and methods

Data obtained solely from a monitoring system cannot by themselves resolve the issues of providing information to the public and to the officials in charge of decision-making under an ES. Information from other sources must also be used, including sources that are usually less reliable than monitoring systems. The information processes considered are presented as an algorithm in Figure 1. In fact, for the purpose of collecting information about an ES, a number of sources can be used. They can be divided into three types: monitoring systems; open sources such as the media, social networks and the press; and the information systems of interacting agencies, including scientific and technical expert organizations, the internal systems of SUSPRES and EMERCOM of Russia, law enforcement and other agencies involved. A number of studies on the management of information processes under ES conditions were conducted. The mathematical model of the processes of collecting and processing data on an ES is based on the theorem of hypotheses (Bayes' formula), adapted to the stated conditions together with a number of other mathematical dependencies [10]. During the research, a posteriori probabilities of the radiation ES condition status were determined. These probabilities were computed on the basis of a number of conditions and initial data, including: the a priori probability of the radiation ES condition status, determined from the predictive assessments of expert modeling tools; the characteristics of the data collection tools, first of all the condition monitoring systems and the means of transmitting and receiving messages within the data collection process, assessed in part with methods from reliability theory; and the use of automated data processing technologies. The research was carried out according to the developed methodology and was aimed at optimizing information processes and justifying the requirements for the systems used to inform and notify the population and the decision-making officials under an ES.
Results and discussion

As an example, several variants of the calculation for a radiation ES are given, with monitoring systems and other sources used as data sources. A variant for specifying the condition status under an ongoing ES is considered, with options for two data sources and for three data sources. To study the influence of erroneous reports from the sources, both the combination in which all messages are correct and the combinations in which the messages are contradictory were calculated. The data received and provided by the monitoring system and the other sources on the condition status are designated as reports. These reports may be correct, i.e. correspond to the actual condition status, or erroneous. Two options are considered: in the first, the monitoring system and one other source are used as sources of information on the condition status; in the second, the monitoring system and two other independent sources of information are used. When a monitoring system and one other source are used: combination 1 (+ +) means that both reports from the sources are correct; combination 2 (+ -) means that the sources' reports are inconsistent, with the monitoring system giving the correct report and the second source giving an erroneous one. The preliminary a priori probabilities of the ES condition status are determined using the predictive estimates obtained by systems for forecasting the evolving radiation-related ES. The initial probabilities for the forecast are:
- the probability that the conditions forecast is correct, PS(Xe) = 0.7;
- the probability of an error in the conditions forecast, PS(Xn) = 0.3.
The probabilities that the monitoring systems and the other sources receive and provide correct or erroneous reports about the condition status are determined from the characteristics of these sources and of the means of transmitting information, as well as by using calculation methods from reliability theory [11,12]. In the variants considered here, to achieve a more representative result, the probability parameters for the sources were set within a range of two values. For the situation monitoring system:
- the probability of a correct report is 1P(Ce/Xe) = 0.95; 0.9;
- the probability of an erroneous report is 1P(Cn/Xe) = 0.05; 0.1.
For the other data sources:
- the probability of a correct report is 2P(Ce/Xe) = 0.7; 0.6;
- the probability of an erroneous report is 2P(Cn/Xe) = 0.3; 0.4.
The results of determining the a posteriori probability of the condition status Pk(Xe/S) for this variant are shown in Table 1. When the monitoring system and two other sources are used: combination 1 (+ + +) means that all reports are correct; combination 2 (+ + -) means that the reports are inconsistent, with the monitoring system and the second data source providing correct reports and the third source giving an erroneous report; combination 3 (+ - -) means that the monitoring system gives the correct report while the second and third sources give erroneous reports. The a priori probabilities of the condition status, as well as the probabilities of correct and erroneous reports for the monitoring system and for the second data source, were taken in the same way as above. For the third data source, the probability of a correct report was P(Ce/Xe) = 0.7; 0.6, and the probability of an erroneous report was P(Cn/Xe) = 0.3; 0.4. The results of determining the a posteriori probability of the condition status Pk(Xe/S) for the variant with the monitoring system and two other data sources are shown in Table 2.
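To make the updating step concrete, the following sketch applies a naive Bayes combination of independent reports under a symmetric-confusion assumption (the probability that a source wrongly reports an emergency equals its quoted error probability). The model described above additionally accounts for the reliability of transmitting and receiving the messages, so this simplified calculation only approximates the posterior values reported in Tables 1 and 2.

```python
# A minimal naive-Bayes sketch of the posterior update for the ES condition status.
# Assumes symmetric source confusion and independent sources; illustrative only.

def posterior(prior_e, reports):
    """prior_e: a-priori probability of the ES state Xe.
    reports: list of (p_correct, says_emergency) pairs, one per source."""
    like_e, like_n = prior_e, 1.0 - prior_e
    for p_correct, says_e in reports:
        like_e *= p_correct if says_e else (1.0 - p_correct)
        like_n *= (1.0 - p_correct) if says_e else p_correct
    return like_e / (like_e + like_n)

# combination "+ -": monitoring system (0.9) reports the emergency, a weaker
# source (0.6) contradicts it; the prior from the forecast is 0.7
print(posterior(0.7, [(0.9, True), (0.6, False)]))                 # roughly 0.93
# combination "+ - -": monitoring system correct, two other sources erroneous
print(posterior(0.7, [(0.9, True), (0.6, False), (0.6, False)]))   # roughly 0.90
```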
Conclusion

Some conclusions can be drawn from the results obtained. The analysis of the calculation results showed that, in the ES considered, when a monitoring system is available as a data source it is advisable to rely on the forecast data and on the data from the most reliable source, i.e. the monitoring system, when determining the condition status. In the case of two data sources, even if an erroneous report is sent by the second one, the probability of correctly determining the state of the situation will not be lower than 0.938. When three sources are used, the probability of a correct assessment of the situation remains high, at least 0.889 for the average values of the parameters, even if the forecast is of low reliability and the second and third data sources provide conflicting data. This article presents only two variants; a broader set of combinations over a reasonable range was examined within the study, which makes it possible to justify the priority of using automated monitoring systems as data sources on the condition status under a radiological ES. In addition, the expediency and sufficiency of using two report-producing data sources was determined and justified for the case in which one of the sources is a monitoring system. Based on the research results, other provisions concerning the management of information processes under radiation ES conditions, related to the requirements for operational public notification, were also justified: in particular, schemes for constructing such systems under specific radiation ES conditions, algorithms and modes of their operation, and some parameters that these systems should provide. The information subsystem of the CSPPCM is such a system for prompt public notification, and the research results are directly relevant to it.
Transmission and detection of monkeypox virus in saliva (part II): Implications for sequential monitoring of viral load

a Department of Oral and Maxillofacial-Head and Neck Oncology, Fengcheng Hospital of Fengxian District, Shanghai Ninth People's Hospital Fengcheng Branch Hospital, Shanghai, China
b Department of Oral and Maxillofacial-Head and Neck Oncology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
c College of Stomatology, Shanghai Jiao Tong University, National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China
d Department of Oral Mucosal Diseases, Shanghai Stomatological Hospital & School of Stomatology, Shanghai Key Laboratory of Craniomaxillofacial Development and Diseases, Fudan University, Shanghai, China
e Department of General Dentistry, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China

Keywords: Monkeypox virus; PCR test; Salivary diagnostics; Cutaneous lesions

With the outbreak of monkeypox ongoing in nonendemic countries since May 2022, the World Health Organization declared it a public health emergency of international concern on 23 July 2022. Over 95% of current monkeypox virus (MPXV) transmission is occurring among men who have sex with men. 1 The most common transmission route is direct contact with monkeypox lesions and/or exposure to large respiratory droplets. Meanwhile, transmission can also occur through direct contact with the bodily fluids of an infected person and through contact of mucosa or nonintact skin with infectious materials. 1 Saliva, as one important type of body fluid, is closely related to mucosal contact, oral sex, and large respiratory droplets. Recently, we and other authors have proposed that dentists and maxillofacial specialists should keep an eye out for oral and maxillofacial manifestations of monkeypox infection and for protective measures. 2-5 For dentists and oral medicine specialists, special attention is thereupon focused on the use of saliva as a sampling material for MPXV testing. 2 Whether saliva may emerge as a viable sample for sequential monitoring of viral load has not been described. This paper attempts to describe what saliva can tell us about the transmission and infection of MPXV, in order to better understand its possible role in monitoring viral load. As presented in Supplementary Table S1, three existing studies have examined MPXV-DNA detection in saliva versus skin lesions obtained from patients. 6-8 Among the 31 cases extracted from these three studies, 29 (93.5%) were MPXV-DNA positive in both skin and saliva specimens. To directly compare the performance of MPXV-DNA detection by quantitative PCR tests in saliva versus skin lesions, the viral loads (given as PCR cycle thresholds [Ct]) of the 29 cases were extracted. The median Ct value, which inversely reflects the viral load, is 21 for skin lesions and 27 for saliva samples. In 13 positive cases the skin lesion and saliva samples were tested on the same day, whereas in the remaining cases there was a one-day interval between the saliva test and the lesion test. The mean viral loads in saliva samples tested both synchronously (0 days) and asynchronously (1-day interval) are also lower than those in skin lesions (Fig. 1). This implies that asynchronous testing of saliva samples may be useful for the sequential monitoring of viral load.
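As a rough way to read these Ct numbers, and assuming close to 100% PCR amplification efficiency (so that one cycle-threshold unit corresponds to about a factor of two in template), the quoted medians can be converted into an approximate relative viral load; this back-of-the-envelope conversion is an illustration added here, not an analysis from the cited studies.

```python
# Illustrative conversion of a Ct difference into an approximate fold difference in viral load,
# assuming near-100% PCR amplification efficiency (one Ct unit ~ a factor of two in template).
def fold_difference(ct_low, ct_high, efficiency=1.0):
    """Approximate factor by which the sample with ct_low exceeds the one with ct_high."""
    return (1.0 + efficiency) ** (ct_high - ct_low)

print(fold_difference(21, 27))   # skin lesion vs saliva: roughly 2**6 = 64-fold higher load
```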
Diagnostic testing for MPXV is vital for containment of the disease, and there may have been undetected infections in the monkeypox outbreak. 1 Although a swab of a skin lesion followed by a real-time PCR assay is considered the gold standard for the detection of MPXV, transmission might begin with prodromal symptoms before the onset of skin lesions. 6-8 Also, skin lesions may be scarce, located only in the anogenital area or even limited to a single lesion. The high concordance (93.5%) of MPXV detection between saliva-based PCR tests and skin lesion tests indicates that saliva-based tests may be a viable method for MPXV-DNA detection, as well as for sequential monitoring of viral load, with infectivity appearing at various (asynchronous) stages of the disease course. Compared with MPXV detection by sampling from anatomical sites, salivary diagnostics has several advantages: it is noninvasive, easy to collect (even by self-collection), reduces the exposure of healthcare workers and the risk of cross-infection, and requires no specific instruments. More importantly, the positive rate (93.5%) and viral load (median Ct, 27) of MPXV-DNA tests in saliva have been reported to be higher than those in other body fluids, such as semen, plasma and urine. 9 In addition to the diagnosis itself, research on the molecular features of saliva, including proteomics, metabolomics, IgGs, cytokines and chemokines at different times of the disease course in cases of monkeypox, will help in understanding the molecular changes of the transmissible viral form. Understanding the mode of transmission could allow the development of proper interventional approaches to reduce the intensity of the current outbreak. Overall, the current evidence indicates that MPXV transmission through oral saliva might be a viable and recognized route, especially in the current 2022 outbreak of the disease. 10 All sexually active men who have sex with men should be informed about the presence of MPXV on the skin, in the throat, in the anus, and in body fluids including oral saliva in case of infection. However, the current evidence is not robust; the sample size is a major limitation. The approach and protocol for the collection of saliva samples have not yet been officially standardized. There is still potential cross-contamination of saliva by skin lesions, by viremia and by viral particles shed from the oral mucosa and throat (e.g. exfoliated epithelial cells). As presented in Table S1, the vast majority (29/31) of the saliva samples were seemingly not contaminated by oral epithelial cells, because these patients did not suffer monkeypox-related oral mucosal lesions. Taken together, saliva-based testing for MPXV diagnostics, as well as for sequential monitoring of viral load, seems potentially promising and appealing. Since saliva is easily collected and clinically informative for disease detection, approaches that maximize the benefit of using saliva as a diagnostic fluid deserve more attention. It is reasonable to incorporate the saliva-based MPXV assay as part of multiple lines of diagnostics, which may further facilitate the identification of monkeypox patients.

Fig. 1. Average viral loads (given as PCR cycle thresholds [Ct]) of monkeypox virus (MPXV) in skin lesions and in saliva tested synchronously (0 days) and asynchronously (1-day interval from the lesion test to the saliva test).

Declaration of competing interest

The authors have no conflicts of interest relevant to this article.
Altered regional homogeneity and functional brain networks in Type 2 diabetes with and without mild cognitive impairment

Patients with Type-2 Diabetes Mellitus (T2DM) have a considerably higher risk of developing mild cognitive impairment (MCI) and dementia. The initial symptoms are very insidious at onset. We investigated the alterations in spontaneous brain activity and network connectivity through regional homogeneity (ReHo) and graph theoretical network analyses, respectively, of resting-state functional Magnetic Resonance Imaging (rs-fMRI) in T2DM patients with and without MCI, so as to facilitate early diagnosis. Twenty-five T2DM patients with MCI (DM-MCI), 25 T2DM patients with normal cognition (DM-NC) and 27 healthy controls were enrolled. Whole-brain ReHo values were calculated and the topological properties of the functional networks were analyzed. The DM-MCI group exhibited decreased ReHo in the left inferior/middle occipital gyrus and right inferior temporal gyrus, and increased ReHo in the frontal gyrus, compared to the DM-NCs. Significant correlations were found between ReHo values and clinical measurements. The DM-MCI group showed a greater clustering coefficient/local efficiency and altered nodal characteristics (efficiency, degree and betweenness), which increased in certain occipital, temporal and parietal regions but decreased in the right inferior temporal gyrus, compared to the DM-NCs. The altered ReHo and impaired network organization may underlie the impaired cognitive functions in T2DM and suggest a compensation mechanism. These rs-fMRI measures have the potential to serve as biomarkers of disease progression in diabetic encephalopathy.

The global prevalence of Type-2 Diabetes Mellitus (T2DM) has been rapidly increasing. The International Diabetes Federation has released new estimates of the prevalence of diabetes worldwide, indicating that 1 in 11 adults is currently living with diabetes, which is 10 million more individuals than reported in 2015 1 . Patients with T2DM demonstrate an increased risk of Alzheimer's disease (AD) and cognitive impairment, which commonly manifests as declining memory, information processing speed, and learning and executive functions 2,3 . However, the pathophysiological mechanism of T2DM-induced cognitive impairment has not yet been elucidated. Previous studies suggested that T2DM and AD may share several patterns of brain pathogenesis, such as impaired insulin sensitivity and signaling, cerebral amyloid beta aggregation and tau hyperphosphorylation 4 . The pathophysiology of cognitive decline in the diabetic brain has aroused much interest owing to its high incidence. Neuroimaging techniques can provide important clues regarding brain structure and function for understanding the neurological involvement in T2DM patients. Morphological atrophy, considered to be related to this involvement, has been observed in white and gray matter, including hippocampal structures 5,6 . Moreover, white matter lesions 7 , reduced white matter integrity 8,9 , decreased density of axons/dendrites 10 , and altered cerebral metabolism 11 have been detected and associated with cognitive dysfunction. Besides structural and metabolic information, neural activity is a sensitive functional measurement that can be acutely altered together with structural measures of brain lesions 12 . Neural activity may therefore provide clues for tracking the early effects of diabetic causative factors.
Resting-state functional Magnetic Resonance Imaging (rs-fMRI) can noninvasively detect spontaneous neural activity at baseline and be used to further investigate the local and global properties of functional brain networks. Currently, rs-fMRI is commonly used to study cognitive function in neuropsychological disorders 13 . Regional homogeneity (ReHo) is one of the main metrics used to assess the local characteristics of rs-fMRI signals. It has been used to analyze the synchronization of a given voxel with its neighboring voxels 14 . ReHo values in T2DM patients were reported to decrease in the occipital lobe, postcentral gyrus and fusiform gyrus, and to increase in the medial frontal gyrus and anterior cingulate gyrus 15,16 , indicating altered local neuronal synchronization in these regions. Furthermore, the brain is organized into segregated complex systems with different functional areas that are specialized for processing distinct information. Information exchange between interconnected brain regions is thought to be the biological basis for human cognitive processes 17 . Graph theory-based network analysis is an effective method to investigate the topological organization of brain functional networks 18 . This analysis has been instrumental in understanding the underlying mechanisms of many neuropsychological disorders such as AD, epilepsy and schizophrenia 18 . Graph theory-based network analysis has demonstrated altered topological organization of the brain network in groups of T2DM patients, including those with normal cognition and those with impaired cognition [19][20][21] . As effective indicators reflecting the intrinsic organization of the resting brain, ReHo and network connectivity approaches have been conjunctively applied in studies of complex functional activity in AD 22 . Previous rs-fMRI studies regarding the diabetic brain have focused primarily on differences between T2DM patients and healthy controls, and have demonstrated altered patterns of brain activity associated with cognitive abnormalities in T2DM patients. However, whether these changes are the result of early-stage dementia or of T2DM factors, such as neurodegeneration caused by advanced glycation end products (AGEs) or toxic effects of high blood glucose 23 , needs to be further investigated. Moreover, it is important to note that some patients with diabetes progress to mild cognitive impairment (MCI) or dementia rapidly, while other patients only demonstrate a decline similar to normal cognitive aging. We hypothesized that altered ReHo values and topological organizations would be detected within specific brain regions. Applying voxel-based ReHo analysis of brain activity and graph theory analysis of functional connectivity to the same dataset, we aimed to investigate the possible changes of both local and global functional brain activities between T2DM patients with and without MCI.

Participants. A cross-sectional study design was used in this research. With the approval of the Institutional Review Board of Tongji Medical College, Huazhong University of Science and Technology, 54 participants (51-72 years of age, 30 female) with confirmed T2DM were recruited from the endocrinology clinical service. Twenty-seven of the patients had mild cognitive impairment (DM-MCI group), while 27 patients had normal cognition (DM-NC group). A battery of neurocognitive tests was performed to diagnose MCI and assess their cognitive functions, as detailed in the next subsection.
Detailed information regarding hypoglycemic agent use, family history, clinical complaints and complications was recorded. Clinical examinations, including measurements of blood biochemistry, lipids, cholesterol, plasma glucose, glycosylated hemoglobin A1c (HbA1c), and body mass index (BMI), were carefully performed by specialists. The diagnosis of T2DM was based on standard criteria from the American Diabetes Association 24 . Twenty-seven euglycemic subjects (51 to 73 years of age, 15 female, fasting glucose level < 7.0 mmol/L, HbA1c < 6.0%, without a family history of diabetes) were also enrolled as healthy controls (HC group). The exclusion criteria included the following: (a) lesions in the brain, such as tumors, cerebral infarction, hemorrhage, or vascular malformation; (b) a history of stroke, epilepsy, head trauma, or brain surgery; (c) systemic organic disease or a history of tumors; (d) other types of diabetes; and (e) contraindication to MRI examination, such as the presence of metallic implants or claustrophobia. All participants were right-handed and matched by age, gender, and education by group totals.

Neuropsychological assessments. All subjects underwent comprehensive physical, neurological, and neuropsychological assessments, which included the Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), Hachinski test, Activity of Daily Living (ADL) test, and Auditory Verbal Learning Test (AVLT). The MMSE and MoCA tests were repeated at a 2-week interval for reliability. The inclusion criteria for the DM-MCI group, the same as in our previous studies 8,10 , required that patients demonstrated the following: (a) complaints of memory decline, which occurred after the clinical T2DM diagnosis; (b) both MoCA and MMSE scores ≤ 27; and (c) absence of any other physical or mental disorders that can lead to cognitive impairment. The Hachinski test was used to exclude vascular dementia (N = 0). Demographics, clinical data and cognitive assessment results were compared among the three groups using a one-way Analysis of Variance (ANOVA) test, a Student's t-test and a Pearson chi-square test with SPSS software (SPSS Inc., Chicago, IL, USA).

MRI image acquisition. Images were acquired on a 3-T MRI scanner (Discovery MR750, GE Healthcare, Waukesha, WI, USA) using a commercial 32-channel head coil. The subjects were instructed to close their eyes but stay awake during the scanning (monitored by MR technicians from a screen outside). High-resolution anatomical images were obtained with a sagittal T1-3D brain volume imaging sequence (repetition time/echo time/inversion time = 8.2/3.2/450 ms, flip angle = 12°, section thickness = 1 mm, matrix size = 256 × 256 × 160, field of view = 25.6 × 25.6 cm2, and NEX = 1) for radiological evaluation and for identifying lesions specified in the exclusion criteria. Functional images were obtained axially using a gradient-echo echo planar imaging sequence. In total, 240 volumes were acquired interleaved head-to-foot, and the scan time was 8 min. Scan planes were axially positioned and covered the whole brain, including the brain stem and cerebellum.

Data preprocessing and ReHo analysis. Functional image preprocessing and ReHo calculation were conducted with the Data Processing & Analysis of Brain Imaging toolkit (DPABI v3.0, www.nitrc.org/projects) 25 and SPM12 (www.fil.ion.ucl.ac.uk/spm) software. The first 10 volumes were removed, taking into account magnetization equilibrium.
The remaining images were processed with the following steps. First, the fMRI images were corrected for slice timing and realigned to the mean image to correct for head movement, using the Friston 24-parameter motion correction 26 and calculation of the framewise displacement (bad time points were flagged as any volume with a framewise displacement > 0.2 mm) 25 . Two participants from each DM group were excluded from further data analysis because of excessive head motion (> 2 mm of displacement or > 2° of rotation). The realigned images were spatially normalized to a standard template in the Montreal Neurological Institute (MNI) space and resampled to a 3 × 3 × 3 mm isotropic voxel size (the T1 structural images were used in this process). Then, detrending and nuisance regression procedures were performed to remove linear trends and nuisance signals from the image time series, and the data were filtered in the 0.01-0.1 Hz band to remove the effects of low-frequency drift and high-frequency noise. The ReHo calculation was performed on the preprocessed images. Individual ReHo maps were generated by calculating the Kendall coefficient of concordance to measure the similarity of the time series of a given voxel and its 26 nearest neighbors in a voxel-wise way 14 . Finally, a z-transformation was conducted on the individual ReHo maps to generate normally distributed zReHo maps.
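For readers unfamiliar with the ReHo statistic, the following minimal sketch computes Kendall's coefficient of concordance for one voxel and its 26 neighbors with NumPy/SciPy (without the tie correction). The random array merely stands in for the preprocessed, band-pass-filtered time series; the study itself computed this voxel-wise with DPABI rather than with this code.

```python
# Minimal NumPy/SciPy implementation of the ReHo statistic for a single voxel neighborhood:
# Kendall's coefficient of concordance (KCC) over 27 time series (the voxel and its 26 neighbors).
import numpy as np
from scipy.stats import rankdata

def kendall_w(ts):
    """ts: (m, n) array of m time series (here 27) with n time points each; no tie correction."""
    m, n = ts.shape
    ranks = np.apply_along_axis(rankdata, 1, ts)   # rank each series over time
    r_i = ranks.sum(axis=0)                        # rank sums per time point
    s = np.sum((r_i - r_i.mean()) ** 2)
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

rng = np.random.default_rng(1)
neighborhood = rng.standard_normal((27, 230))      # 27 voxels x 230 retained volumes (placeholder)
print(kendall_w(neighborhood))                     # small for independent noise, approaches 1 for perfect synchrony
```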
Within-group and between-group statistical analysis. One-sample t-tests were performed on the individual zReHo maps for each group using the Statistical Analysis module in DPABI 25 . The statistical significance threshold was set at p < 0.001, and a false discovery rate (FDR) correction was applied for multiple comparisons with p < 0.005. Group comparisons of ReHo values were performed (within a gray-matter mask) with a one-way Analysis of Covariance (ANCOVA) with age, gender, and education level as covariates, and post-hoc pairwise comparisons were performed with a general linear regression model if the ANCOVA yielded significant results. The statistical threshold was set at p < 0.01 with a minimum cluster size of 80 voxels, which corresponded to a corrected p < 0.01 (AlphaSim correction; http://afni.nih.gov/afni/docpdf/AlphaSim.pdf). To investigate the relationship between ReHo values, cognitive performance, and diabetes-related parameters (fasting plasma glucose/HbA1c levels and disease duration), Pearson's correlation analyses were performed in a voxel-wise manner, adjusted for age, gender, and education level as covariates, using the DPABI software. A statistical threshold of p < 0.01 (AlphaSim correction) was used to explore the most significant correlations among MR voxels.

Functional network analysis. The preprocessed rs-fMRI data were segmented into 90 regions (45 in each hemisphere) using the automated anatomical labeling (AAL) template reported by Tzourio-Mazoyer 27 . Each region represented one node of the brain network. Sparsity thresholds ranging from 0.1 to 0.34 with an interval of 0.01 were applied, as suggested previously 21,28 . Graph theoretical analysis was carried out using the GRETNA software 29 . For the brain networks at each sparsity threshold, we calculated global and regional network parameters, which involved (1) small-world parameters (normalized characteristic path length λ, normalized clustering coefficient γ, small-worldness σ), the clustering coefficient Cp, and the characteristic path length Lp; (2) network efficiency measures: global efficiency Eg and local efficiency Eloc; and (3) nodal parameters (efficiency, degree and betweenness). Then, the area under the curve (AUC) of each network metric was calculated, which is sensitive in detecting topological alterations and provides a summarizing scalar for the topological characterization of brain networks 20,21 . The AUCs were calculated for each parameter over the entire sparsity range used in this study (0.1 ≤ Sp ≤ 0.34). The network analyses were visualized using the BrainNet Viewer 30 software. A one-way ANOVA was performed on the AUC of every network metric, and the statistical significance level was set at p < 0.05. For the nodal parameters, an FDR correction was applied for multiple comparisons.
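As a rough illustration of this thresholding-and-AUC procedure (the study itself used GRETNA, not the code below), the following sketch builds a binary 90-node network from a correlation matrix at each sparsity value, computes a few global metrics with networkx, and integrates each metric over the sparsity range; the random time series are placeholders for the AAL-parcellated data.

```python
# Schematic graph-theoretical pipeline: correlation matrix -> binarization at sparsity
# thresholds 0.10-0.34 (step 0.01) -> global metrics -> AUC over the sparsity range.
import numpy as np
import networkx as nx
from scipy.integrate import trapezoid

rng = np.random.default_rng(2)
ts = rng.standard_normal((90, 230))                 # 90 AAL regions x 230 volumes (placeholder)
corr = np.abs(np.corrcoef(ts))
np.fill_diagonal(corr, 0.0)

sparsities = np.arange(0.10, 0.3401, 0.01)
cp, eloc, eg = [], [], []
for s in sparsities:
    thr = np.quantile(corr[np.triu_indices(90, k=1)], 1.0 - s)   # keep the strongest s fraction of edges
    G = nx.from_numpy_array((corr >= thr).astype(int))
    cp.append(nx.average_clustering(G))
    eloc.append(nx.local_efficiency(G))
    eg.append(nx.global_efficiency(G))

print("AUC(Cp)   =", trapezoid(cp, sparsities))
print("AUC(Eloc) =", trapezoid(eloc, sparsities))
print("AUC(Eg)   =", trapezoid(eg, sparsities))
```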
Ethical approval. The current study was approved by the Research Ethics Committee of Tongji Medical College, Huazhong University of Science and Technology. Informed consent was obtained from all individual participants included in the study. All methods were carried out in accordance with relevant guidelines and regulations (Declaration of Helsinki).

Results

Sample characteristics. The clinical and neuropsychological characteristics of the three groups are summarized in Table 1. No significant differences were observed among the three groups for age, gender, years of education and BMI. Although no significant differences were observed in fasting and postprandial glucose levels, the DM-MCI group exhibited a higher level of glycosylated HbA1c (p = 0.014) and a trend towards a longer disease duration (p = 0.073) compared to the DM-NC group. The DM-MCI group had significantly lower MoCA and MMSE scores than both the DM-NC and HC groups and performed worse on the AVLT, indicating a decline in verbal memory.

ReHo analysis. In each group, the zReHo values in the bilateral frontal/parietal/occipital cortex, the posterior cingulate cortex and the precuneus, which include main parts of the default-mode network, were significantly higher than the global mean values (Fig. 1). The DM-MCI group exhibited significantly decreased ReHo values in the left inferior occipital gyrus, the middle occipital gyrus, and the right inferior temporal gyrus compared with the DM-NC group, while increased ReHo values were observed in the rectus gyrus and the right inferior frontal gyrus, triangular part (Fig. 2 and Table 2). As shown in Fig. 3, the ReHo values were significantly correlated with the HbA1c level in the left cuneus (r = −0.611, Fig. 3a) and with diabetes duration in the left rectus gyrus (r = 0.605, Fig. 3b) for all the T2DM subjects. Moreover, the ReHo values correlated with MMSE/MoCA scores in the right middle frontal gyrus (r = −0.68, Fig. 3c) and the superior frontal gyrus (medial orbital) (r = −0.510, Fig. 3d) (p < 0.01, AlphaSim corrected).

Functional network analysis. All three groups exhibited an economical small-world organization (σ = γ/λ and σ > 1) 19 , which combines the topological advantages of both regular and random networks. There were no intergroup differences in global efficiency or characteristic path length among the three groups. The DM-MCI group exhibited significantly elevated local efficiency and clustering coefficient compared to the HCs, while the intergroup difference between the DM-NC group and the HCs was not significant (Fig. 4). Furthermore, in all the T2DM patients, Lp showed a weak negative correlation with HbA1c (r = −0.351, p = 0.044), whereas Eg showed a positive correlation (r = 0.380, p = 0.041; Fig. 5). Correlations between any other global network properties and clinical measurements were not statistically significant (p > 0.05).

Altered regional nodal characteristics. The brain regions showing significant between-group differences (p < 0.05) among the DM-MCI, DM-NC and HC groups in at least one of the three nodal characteristics (efficiency, degree and betweenness) are summarized in Table 3. Compared to the HCs, the DM-NC group showed only increased nodal characteristics, in the right postcentral gyrus, precuneus, left hippocampus and inferior temporal gyrus; no decreased nodal characteristics were detected. Furthermore, the DM-MCI group showed increased nodal characteristics in the left median cingulate and paracingulate gyri, middle occipital gyrus, postcentral gyrus and the right fusiform gyrus, but decreased nodal characteristics in the right inferior temporal gyrus, compared to the DM-NC group. Figure 6 shows the alterations in nodal efficiency, degree and betweenness centrality among the three groups.

Table 1. Sample characteristics. Data are expressed as the mean ± standard deviation or as number (%) unless otherwise indicated. (a) p-values labeled with * and † were obtained using a Pearson chi-square test (2-sided) and an ANOVA, respectively; all other p-values were obtained using a 2-tailed Student's t-test between the DM-MCI and the DM-NC groups. (b) Hypertension: systolic pressure between 140-159 mmHg or diastolic pressure between 90-99 mmHg; patients with moderate and severe hypertension were excluded. (c) Hyperlipidemia was defined as cholesterol > 5.7 mmol/L or triglyceride > 1.7 mmol/L. (d) Family history accounts for immediate family members who had T2DM within three generations.

Discussion

This study is, to our knowledge, the first to apply ReHo and network-connectivity approaches conjunctively to study complex resting-state functional activity in T2DM in one dataset. The observations included the following: (1) altered regional synchronization in DM-MCI versus DM-NC subjects was demonstrated; the DM-MCI group exhibited significantly decreased ReHo values in the left inferior/middle occipital gyrus and the right inferior temporal gyrus; (2) at a regional level, altered nodal characteristics (efficiency, degree centrality and betweenness) were also detected in the DM-MCI group (decreased in the right inferior temporal gyrus); (3) increased ReHo and nodal characteristics in several brain regions in the DM-NC group (compared to controls) may contribute some clues for the early detection of T2DM-related cognitive impairment; (4) altered ReHo of some brain regions and global network properties were related to the cognitive assessments and HbA1c.

Significantly decreased ReHo values, which indicate decreased neural coherence, were found in the occipital gyrus and inferior temporal gyrus only in the DM-MCI patients. This is a major finding of this study. Previous rs-fMRI studies have generally focused on comparisons between general T2DM patients, including those with normal or impaired cognition, and HCs. Typically, more regions with decreased (rather than increased) neural activity were found in the majority of previous studies. For example, T2DM patients exhibited lower ReHo values in the occipital lobe, postcentral gyrus, and middle temporal gyrus in one study 15 , and in the fusiform gyrus and precentral gyrus in another study 16 . In addition, ReHo values were found to be decreased in the occipital lobe, temporal lobe, postcentral gyrus, and cerebellum in T2DM patients both with and without microangiopathy compared to HCs 31 . Similar regions demonstrated decreased ReHo values in the present study, such as the occipital and temporal gyrus.
Furthermore, decreased ReHo values were found in the inferior temporal gyrus, a region important for visual processing and the representation of complex object features, as well as for the early recognition of numbers and words 32 . Decreased neural synchronization in this region may therefore contribute to patients' poor performance on cognitive tests. More importantly, DM-NC patients only exhibited elevated ReHo in several regions; no cortical areas exhibited significantly decreased ReHo values. Therefore, we speculate that the decreased regional synchronization revealed in T2DM patients in previous studies may have mainly resulted from, or have been closely related to, the cognitive decline. Patients with cognitive dysfunction typically showed decreased regional synchronization. However, some studies indicated that there were a few brain regions with enhanced ReHo values in MCI or AD patients 33,34 . Increased ReHo values in the medial frontal gyrus, anterior cingulate gyrus, precuneus and insula 15,16,31 were also found in T2DM subjects relative to controls. Our results detected higher ReHo values in the left angular and superior temporal gyrus in the DM-NC group, as well as in the frontal lobe in the DM-MCI group. These results indicate enhanced neuronal synchronization in these functional clusters or brain regions. Since a majority of the patients had disease durations of more than five years, we postulate that this could reflect a compensation in the areas mentioned above after long-term weakened neural activity in the temporal lobe. Recruiting more T2DM volunteers and separating them by disease duration may potentially address this issue in future studies. MMSE and MoCA scores were negatively correlated with the ReHo values in the right middle frontal gyrus and the superior frontal gyrus (medial orbital). Lower MMSE or MoCA scores indicate more impaired cognitive functions, which are associated with decreased information processing speed. We note that these regions were included in, or very adjacent to, the regions which showed increased ReHo in the DM-MCI group. As higher ReHo values in the frontal gyrus might be interpreted as a compensatory mechanism for reduced neural activity to maintain cognitive function 35 , more severely impaired cognition may presumably evoke relatively enhanced ReHo in these regions. ReHo values in the cuneus were reported to be negatively correlated with the Complex Figure test and the Trail-Making test in T2DM patients 15 . In the current study, a negative correlation was also found between ReHo in the left cuneus and HbA1c; this supports the idea that T2DM patients may benefit from regular blood glucose control to prevent cognitive decline. These brain activity alterations are likely a gradual process related to cognitive decline and to diabetes duration and severity. The graph theory analysis of the functional brain networks revealed abnormal architecture in T2DM patients. Previous studies demonstrated higher normalized Cp and Eloc, and some also found a lower characteristic path length Lp, in groups of T2DM participants compared to healthy controls [19][20][21] . In the present study, the cognitive status was taken into account and the T2DM participants were subdivided into two groups. We found that only the DM-MCI subjects exhibited altered global network characteristics, measured as increased Cp and Eloc, as compared to controls.
Cp quantifies the number of connections between the nearest neighbors of a region as a proportion of the maximum number of possible connections. The combination of a high Cp and a high Eloc reflects a high local specialization of the brain in processing information and a higher efficiency in synchronizing neural activity between brain regions 36 . This result may seem strange, as it implies that the whole-brain networks were better organized, or enhanced, in DM-MCI patients compared with healthy controls. In research on the functional network among T2DM patients, prediabetes patients and healthy controls, results similar to ours were found, mainly suggesting a compensation mechanism of the functional whole-brain network at the stage of cognitive decline. Combined with our ReHo analysis, we reached a similar conclusion. The altered Cp and Eloc revealed in previous studies [19][20][21] were more closely related to cognitive decline. The correlation analysis between Eg and Lp and the clinical measurements also supports the view that regular blood glucose control may help to prevent cognitive decline. Nodal efficiency, degree and betweenness reflect the extent of integration between the immediate neighbors of a given node and quantify how much information may traverse the node 37 . While increased nodal characteristics were found in the DM-NC and DM-MCI groups in some brain regions, decreased nodal characteristics were only detected in the right inferior temporal gyrus in the DM-MCI group. This result is partly in accordance with our ReHo findings (Table 2 and Fig. 2). Reduced ReHo and nodal characteristics may be attributed to disrupted visual processing and memory network function, which are associated with the inferior temporal gyrus 32 , and may therefore be associated with patients' slower responses when completing cognitive tests. Interestingly, we noticed that many of the findings in T2DM patients were related to the inferior temporal gyrus. The decrease of functional activity in the inferior temporal gyrus is closely related to memory decline 38 , and the node properties of the inferior temporal gyrus were positively correlated with BMI 21 . The current study has some limitations. First, as a rigorous clinical diagnosis of diabetes-related cognitive impairment remains a challenge, dividing the T2DM patients into subgroups can be subject to inaccuracy. Second, the small sample size in our study may have influenced our detection of areas with altered ReHo and functional brain network measurements. Third, although we detected several brain areas with altered ReHo or nodal parameters, the precise function of some of these regions is complicated and remains unclear; therefore, it is difficult to provide accurate explanations regarding the changes in these regions. Fourth, it is essential to differentiate the true signal of interest from other noise-related fluctuations in fMRI. To reduce the effect of physiological signals (respiratory and cardiac artifacts), the fMRI data were filtered with a band-pass filter (0.01-0.1 Hz) 39 and by linear regression with the averaged time series. In further studies, physiological signals should be measured during the rs-fMRI acquisition and corrections for physiological noise should be performed. Finally, we studied cross-sectional differences but not the transition from normal to impaired cognition in T2DM. Further studies with longitudinal follow-up are necessary to confirm these findings.
In conclusion, this study compared both neuronal synchronization and functional brain network characterizations in T2DM participants with cognitive impairment and those with normal cognition. The findings demonstrated significantly altered neuronal synchronization, as well as altered functional brain network properties (increased Cp and Eloc, and altered nodal characteristics), in some regions, such as the right inferior temporal gyrus, that were significantly related to cognition. Moreover, our findings suggest that some alterations are already apparent in the DM-NC stage, prior to MCI. Thus, the application of rs-fMRI and graph theory-based network
Using noninvasive anthropometric indices to develop and validate a predictive model for metabolic syndrome in Chinese adults: a nationwide study Purpose Metabolic syndrome (Mets) is a pathological condition that includes many abnormal metabolic components and requires a simple detection method for rapid use in a large population. The aim of the study was to develop a diagnostic model for Mets in a Chinese population with noninvasive anthropometric and demographic predictors. Patients and methods Least absolute shrinkage and selection operator (LASSO) regression was used to screen predictors. A large sample from the China National Diabetes and Metabolic Disorders Survey (CNDMDS) was used to develop the model with logistic regression, and internal, internal-external and external validation were conducted to evaluate the model performance. A score calculator was developed to display the final model. Results We evaluated the discrimination and calibration of the model by receiver operating characteristic (ROC) curve and calibration curve analysis. The area under the ROC curve (AUC) and the Brier score of the original model were 0.88 and 0.122, respectively. The mean AUC and the mean Brier score of 10-fold cross-validation were 0.879 and 0.122, respectively. The mean AUC and the mean Brier score of internal-external validation were 0.878 and 0.121, respectively. The AUC and Brier score of external validation were 0.862 and 0.133, respectively. Conclusions The model developed in this study has good discrimination and calibration performance. Its stability was demonstrated by internal validation, external validation and internal-external validation. The model is displayed in a score calculator that outputs the individual predictive probability, for easy use in the Chinese population. Supplementary Information The online version contains supplementary material available at 10.1186/s12902-022-00948-1. Introduction Metabolic syndrome (Mets) is a group of complex metabolic disorders, including abdominal fat accumulation, high triglycerides, low HDL cholesterol, hypertension, and hyperglycaemia. Mets is not a specific disease but a cluster of multiple risk factors, an intermediate state between health and disease, whose primary role is to draw attention to the possibility of disease in people with an abnormal metabolism. The main components of Mets are insulin resistance and obesity, especially central obesity. Obesity plays an important role in the occurrence and development of Mets [1][2][3][4]. BMI is the most commonly used index to assess obesity. However, recent studies have found that people with a normal BMI still have a risk of Mets [20,21]. Our previous studies have also found that normal weight obesity (NWO) populations have a risk of long-term cardiovascular disease and diabetes [22,23]. It is obvious that the use of BMI alone to assess the risk of metabolic syndrome is flawed, as it cannot accurately reflect the degree of obesity of the human body. Therefore, an accurate diagnostic tool is needed to determine whether a person has Mets, to better prevent possible cardiovascular diseases and diabetes in the future. In previous studies on Chinese populations, WHtR, BRI and AVI have been found to discriminate Mets or its components better than BMI [21,24,25].
However, their AUCs were generally around or below 0.8, calibration was not taken into account, and these studies did not screen the anthropometric indices and demographic information together to fit a more accurate prediction model. Therefore, this study aims to use a variable selection technique to screen anthropometric indices, establish a simple metabolic syndrome diagnostic model in a Chinese population, evaluate the model performance by discrimination and calibration, and assess overfitting by internal, internal-external and external validation. The final model will be displayed by a scoring system in an Excel document [26,27]. This study is reported in accordance with the TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis) Statement, a guideline specifically designed for the reporting of studies developing or validating a multivariable prediction model [28]. Source of data and Participants The development set for this study came from a large cross-sectional study: the China National Diabetes and Metabolic Disorders Survey (CNDMDS). This is a nationwide epidemiological survey conducted from June 2007 to May 2008 and completed by 17 clinical centres in 14 provinces and cities across the country. A multistage, stratified cluster sampling method was used to select persons aged 20 years or older. In total, 152 urban street districts and 112 rural villages were selected, 54,240 participants were invited to participate in the study, and 47,325 persons (18,976 men and 28,349 women) accepted the invitation. Finally, 46,239 adults completed the survey. The validation set came from the phase 3 follow-up surveys of the CNDMDS in Shaanxi Province, which were conducted from October 2016 to November 2017. A total of 1072 participants were included in the phase 3 follow-up surveys. Relevant information on the dataset can be found in our previous studies [22,23,29,30]. Because some centres did not conduct BIA testing in the development set, we included 27,494 participants from 9 centres. Because of the measurement error of BIA, any PBF data points more than 3 interquartile ranges below the first quartile (Q1) or more than 3 interquartile ranges above the third quartile (Q3) were considered outliers and excluded from the analysis. Participants with missing information on family history, height, weight, BP (blood pressure), TGs (triglycerides), HDL-C (high-density lipoprotein cholesterol), and BG (blood glucose) were also excluded. Then, participants who were using antihypertensive, lipid-lowering or diabetes medications were excluded. Finally, a total of 19,685 participants from 9 centres were included in the development set. Similarly, a total of 671 participants were included in the validation set (Fig. 1). Outcome The outcome definition follows the Asian criteria of the 2009 Joint Interim Statement. Participants were asked to wear a single layer of light clothing when they were measured for weight, height, PBF, WC and HC. The definition or calculation of each factor is as follows: Age was measured as of the date of completing the survey. Education level: Those who had a college degree or above were defined as having a high education level, and those who had a secondary education or below were defined as having a low education level.
Smoking history: Those who had smoked more than 100 cigarettes in their lifetime and were still smoking were defined as category yes, those who had smoked in the past and had quit for more than 1 year were defined as category quit, and those who had never smoked were defined as category no. Physical activity history: Those who exercised regularly more than three times a week, with each session lasting at least half an hour, were defined as category regular, and the others were defined as category never. Family history of metabolic disorders: A family history of hypertension was defined as having at least one of the parents, siblings, or children diagnosed with hypertension in their lifetime. The family histories of the other metabolic disorders were defined similarly. Blood pressure: A mercury column sphygmomanometer was used to measure blood pressure. The participant was required to rest quietly and relax before measurement. Height and weight: A height and weight scale was used to measure height and weight. The measurement results were required to be accurate to 0.5 cm and 0.5 kg. WC and HC: A measuring tape was used to measure WC and HC. The measurement results were required to be accurate to 0.5 cm. PBF: A Tanita body composition analyser (TBF-300 WA; Tanita Corporation Limited, Tokyo, Japan) was used to measure PBF. Calculation formula: where male = 0 and female = 1 for sex. Missing data and Sample size All missing values were removed, and a total of 19,685 participants were included in this study. According to relevant research, the events per variable (EPV) required in multivariate analysis must be 10 or greater; that is, the number of positive events should be at least 10 times the number of predictors. A total of 6505 participants were diagnosed with Mets, so the sample size was large enough [27,49]. Statistical analysis The baseline information in the dataset was compared using the chi-square test or Fisher's exact test for categorical variables and the two-sample t test or Mann-Whitney U test for continuous variables. Anthropometric indices, SBP, DBP and age were analysed as continuous variables, while sex, education level, smoking history, physical activity history and family history were analysed as discrete variables. To avoid overfitting of the prediction model and multicollinearity among predictors, least absolute shrinkage and selection operator (LASSO) regression was performed to screen the candidate predictors. With the increase in the penalty parameter lambda, the coefficients of the predictors shrank and the number of predictors was reduced. Then, according to the number of predictors, the area under the receiver operating characteristic curve (AUC), and the misclassification error, we selected some factors as independent variables to establish the logistic regression model [26]. The overall model performance was evaluated mainly by the Brier score, which for each subject is simply defined as (y − p)^2, with y the observed outcome and p the predicted probability. The average score over all subjects is the Brier score of the model. It reflects the distance between the predicted and actual outcomes. The Brier score is a proper scoring rule that combines calibration and discrimination, similar to Nagelkerke's R2 [27]. Then, the discrimination and calibration of the model were evaluated. Harrell's concordance statistic (C-index) is the most commonly used performance index to measure the discriminative ability of generalized linear regression models.
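As a concrete illustration of these two performance measures, the following sketch computes the Brier score and the C statistic from observed outcomes and predicted probabilities. The study itself used R packages; this is only a generic scikit-learn equivalent with hypothetical inputs.

```python
from sklearn.metrics import brier_score_loss, roc_auc_score

def overall_performance(y_true, y_prob):
    """y_true: observed 0/1 Mets status; y_prob: predicted probability of Mets."""
    brier = brier_score_loss(y_true, y_prob)     # mean of (y - p)^2 over all subjects
    c_statistic = roc_auc_score(y_true, y_prob)  # equals the AUC for a binary outcome
    return {"brier": brier, "c_statistic": c_statistic}
```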
The C statistic gives the probability that a randomly selected subject who experienced the event had a higher risk score than a randomly selected subject who did not experience the event. For a binary outcome, the C statistic is the same as the area under the receiver operating characteristic (ROC) curve (AUC). Calibration refers to the agreement between observed outcomes and predictions: ideally, p% positive events are observed among subjects with a predicted risk of p%. Calibration is usually evaluated with a calibration plot, with predictions on the x-axis and the observed outcome on the y-axis; perfect predictions lie on the 45° line. For data of different sample sizes, different smoothing techniques were used to estimate the difference between the observed outcome probabilities and the predicted probabilities. As Steyerberg recommended, a locally weighted least squares regression (loess) smoother was used to draw the calibration curve of the development set because its sample size is greater than 5000, and a restricted cubic spline (RCS) smoother was used to draw the calibration curve of the validation set [26,50]. For internal validation, a 10-fold cross-validation procedure was used to assess the model performance: 90% of the data were used to train the model, and the remaining 10% were used to compute the model performance; this process was repeated 10 times. For internal-external validation, as Steyerberg and Harrell advocated [51], the data of 8 of the 9 centres were used as the development set to generate the model, the data of the remaining centre were used as a validation set to evaluate the model performance, and this process was repeated so that each centre was held out once, after which the mean performance was calculated. For external validation, we used a new dataset as a validation cohort to evaluate the performance of the original model. The final model is displayed by a scoring system made in Excel, which is convenient to use. All statistical analyses were performed using R 4.1.1 (R Foundation for Statistical Computing, Vienna, Austria) and RStudio 1.4.1106, the integrated development environment for R (250 Northern Ave, Boston, MA 02210, USA). The package "compareGroups" was used to make baseline tables, the package "glmnet" was used to perform LASSO regression, the package "rms" was used to build the prediction model, the package "pROC" was used to plot ROC curves, and the package "CalibrationCurves" was used to plot calibration curves. Results A total of 19,685 participants from 9 centres were included in model training, and 5138 participants were diagnosed with Mets, accounting for 26.1% of the total number. The differences in all baseline characteristics and predictors between the Mets group and the non-Mets group are shown in Table 1. Categorical variables were compared using the chi-squared or Fisher's exact test, and all continuous variables were verified to conform to a normal distribution, so they were compared using the t test or ANOVA (analysis of variance). The p values for the overall group comparisons are shown in the table. Model selection and development To reduce overfitting of the model and collinearity among the predictor variables, the LASSO method was used to screen the prediction factors; it achieves the selection of predictors by shrinking some coefficients to zero through penalizing the absolute values of the regression coefficients (Fig. 2).
Figure 2 shows the trend of the AUCs and the misclassification error (the percentage of predicted values that do not match the observed values; the lower the misclassification error, the better the model) with increasing log(λ). The coefficients for the final model can be chosen at the lowest cross-validated log(λ) value (the position of the black dotted line on the left of Fig. 2) or, more conservatively, at a log(λ) value one standard error larger (the position of the black dotted line on the right of Fig. 2). However, we observed that over the first half of the increase in the shrinkage penalty λ, the increase in the misclassification error and the decrease in the AUCs were not significant. After comprehensive consideration, we chose the model corresponding to the log(λ) value one standard error above the minimum for the AUCs; at this point, the misclassification error was close to 0.2, and the AUCs were still higher than 0.85. Therefore, the final model included SBP, DBP, WC, WHtR, PBF and CUN_BAE as predictors, and these were used to fit the logistic regression model. Model performance and validation: discrimination and calibration Model discrimination was evaluated by ROC curves, and model calibration was evaluated by calibration curves. We assessed discrimination and calibration in the development set through internal (via 10-fold cross-validation), internal-external (across centres) and external (via another dataset) methods. Fig. 3 The ROC curves and calibration curves of the development set (A) and validation set (B). In the ROC curve, the y-axis is the sensitivity from 0 to 100%, and the x-axis is the specificity from 100% to 0. The y-axis of the calibration curve is the proportion of positive outcomes observed in the corresponding group, and the x-axis is the average prediction probability of the model; perfect predictions lie on the 45-degree line. The calibration curve of the development set is constructed with the restricted cubic spline (RCS) smoother and the calibration curve of the validation set is constructed with the loess smoother. The ROC curves and calibration curves are shown in Fig. 3A. In the original model, the C statistic/AUC was 0.88 (95% CI: 0.875-0.885), and the Brier score was 0.122. After 10-fold cross-validation, the mean C statistic/AUC was 0.879, and the mean Brier score was 0.122 (Table 2), which are extremely close to the original model performance. In the internal-external validation, the model performance for each centre is shown in Table 3. The mean C statistic/AUC of the 9 centres was 0.878, and the mean Brier score of the 9 centres was 0.121, which were also very close to the original model performance. In the validation set, the C statistic/AUC was 0.862 (95% CI: 0.833-0.891), and the Brier score was 0.133 (Fig. 3B). Consequently, these results suggested that the prediction model had good performance. Model visualization: making a risk score calculator The formulas of these anthropometric indices are quite complicated, and it is not convenient to calculate all of the anthropometric indices manually; therefore, we created an Excel file that includes all of the formulas and the coefficients of the predictors. When age, height, weight, WC and other simple values are entered, it automatically calculates the anthropometric indices and the probability of Mets (Fig. 4). We randomly entered the data of a participant as a demonstration; this was a 49-year-old man with Mets, and his predicted probability of Mets calculated by the model was 79.05%.
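To make the three validation schemes concrete, here is a schematic Python version of the 10-fold cross-validation and the leave-one-centre-out (internal-external) loops. The arrays X, y and centre are hypothetical (features, Mets status and centre labels), the original analysis was performed in R, and plain logistic regression stands in for the authors' fitted model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score, brier_score_loss

def ten_fold_cv(X, y):
    """Internal validation: train on 90% of the data, evaluate on the held-out 10%, 10 times."""
    aucs, briers = [], []
    for train, test in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
        p = LogisticRegression(max_iter=1000).fit(X[train], y[train]).predict_proba(X[test])[:, 1]
        aucs.append(roc_auc_score(y[test], p))
        briers.append(brier_score_loss(y[test], p))
    return np.mean(aucs), np.mean(briers)

def internal_external_cv(X, y, centre):
    """Internal-external validation: train on the other centres, evaluate on the held-out centre."""
    aucs, briers = [], []
    for c in np.unique(centre):
        train, test = centre != c, centre == c
        p = LogisticRegression(max_iter=1000).fit(X[train], y[train]).predict_proba(X[test])[:, 1]
        aucs.append(roc_auc_score(y[test], p))
        briers.append(brier_score_loss(y[test], p))
    return np.mean(aucs), np.mean(briers)
```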
Discussion To our knowledge, this is the first study to develop a diagnostic model for Mets with noninvasive anthropometric indices in a large, representative Chinese population including many minority groups. When screening predictive variables, we excluded indices such as the triglyceride-glucose index (TyG) [32] and the visceral adiposity index (VAI) because their calculation formulas include serum triglycerides or HDL-C, which are part of the diagnostic criteria for Mets, and because we hoped to use noninvasive measurement indices as predictors. LASSO regression was used to reduce the multicollinearity among predictors and prevent model overfitting, and 6 predictors were selected by shrinking the coefficients towards zero. Among the predictors, SBP, DBP and WC are part of the diagnostic criteria for Mets, and WHtR, PBF and CUN_BAE have separately been shown to have predictive value for Mets in previous studies [24,41,52], so the prediction model established with these 6 predictors has very good performance. To evaluate the performance of the model, we calculated the AUCs, plotted the calibration curves and calculated the Brier score of the model to evaluate calibration. The AUCs of the model are greater than 0.8, which is considered to indicate good discrimination, and the Brier score of the model is less than 0.2, which is considered to indicate good calibration. However, we also found that the model clearly overestimated risk in the groups with high predicted probabilities: in the right third of the calibration curve, the observed prevalence was lower than the average predicted probability, and this result is consistent with Wang's study. The AUC of this study is slightly lower than that of Wang's study (AUC 0.901), but the development set of Wang's study came from Spanish workers, without an external validation set. Moreover, the model in Wang's study was presented as a nomogram, and indicators such as body fat percentage (calculated as body fat percentage = 1.2 × BMI + 0.23 × age (years) − 10.8 × gender (male, 1; female, 0) − 5.4) need to be calculated in advance before the nomogram can be used [53]. Similarly, Zhang's study established a prediction model for the 4-year risk of Mets with age, TC (serum total cholesterol), UA (serum uric acid), ALT (alanine aminotransferase), and BMI; this was a longitudinal study (AUC 0.783, Brier score 0.156) but included invasive biochemical indicators as predictors [54]. Compared with Zhang's study, our study lacks longitudinal validation because it is a cross-sectional study, so we cannot use this model to predict the probability of being diagnosed with Mets in the future. Nevertheless, we believe that this overestimation is useful for alerting people to metabolic disorders: people who are misclassified as positive have a high predicted probability and are likely to develop metabolic problems in the future. We will conduct further research to verify the ability of this model to predict the long-term outcomes of Mets. We carried out 10-fold internal cross-validation, external validation, and multicentre internal-external validation, which have rarely been conducted at the same time in previous studies. The mean AUCs and Brier scores showed no significant changes in the internal validation and the multicentre internal-external validation and were almost equal to the values of the original model. The performance in the validation set dropped slightly but was still in a good range.
These results showed that the model has good performance and stable predictive ability in populations from different provinces in China, which supports its use in different centres. Previous studies have mostly used nomograms to display predictive models, but nomograms are neither accurate enough nor convenient to use, and some predictors in this study need to be calculated indirectly, so a scoring system is used to display the model. There are still some shortcomings in our study. 1) This study is a national study that focused on investigating the prevalence of diabetes in China. The lifestyle, exercise history and eating habits of participants were recorded only in a simplified form, resulting in poor predictive ability of the related predictors; their coefficients in the LASSO regression shrank to zero early, and they were not selected as final prediction factors. 2) The first phase of this study was conducted in approximately 2008. Owing to limited personnel and equipment, some newer anthropometric indices could not be recorded. In future research, we will discuss how to add new prediction factors to improve the performance of the model. 3) In the 10-fold cross-validation, the performance of the model did not decrease appreciably, but the performance for some centres declined in the internal-external and external validation, indicating that regional and ethnic differences may adversely affect the model's performance. 4) The external validation dataset came from Shaanxi Province's follow-up cohort only, so it may not fully represent other regions. Fig. 4 The model is presented as a logistic regression equation in this Excel document; it first calculates the linear predictor (lp). According to the logit transformation, lp = logit(p) = log(p / (1 − p)), so p = exp(lp) / (1 + exp(lp)); the p value is displayed in the purple cell. *p = the predicted probability of Mets.
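The final step of the Excel calculator is just the inverse-logit transformation described in the Fig. 4 caption. The sketch below reproduces that step in Python with purely placeholder coefficients; the real intercept and coefficients come from the fitted logistic model and are not listed in the text.

```python
import math

# Hypothetical intercept and coefficients, for illustration only.
COEF = {"intercept": -10.0, "SBP": 0.02, "DBP": 0.03, "WC": 0.05,
        "WHtR": 4.0, "PBF": 0.04, "CUN_BAE": 0.06}

def predicted_probability(values):
    """values: dict of predictor measurements, e.g. {"SBP": 135, "DBP": 88, ...}."""
    lp = COEF["intercept"] + sum(COEF[name] * x for name, x in values.items())
    return math.exp(lp) / (1.0 + math.exp(lp))  # p = exp(lp) / (1 + exp(lp))
```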
Mental Health Prediction Using Machine Learning: Taxonomy, Applications, and Challenges The increase of mental health problems and the need for effective medical health care have led to an investigation of machine learning approaches that can be applied to mental health problems. This paper presents a recent systematic review of machine learning approaches in predicting mental health problems. Furthermore, we discuss the challenges, limitations, and future directions for the application of machine learning in the mental health field. We collect research articles and studies that are related to machine learning approaches in predicting mental health problems by searching reliable databases. Moreover, we adhere to the PRISMA methodology in conducting this systematic review. We include a total of 30 research articles in this review after the screening and identification processes. Then, we categorize the collected research articles based on mental health problems such as schizophrenia, bipolar disorder, anxiety and depression, posttraumatic stress disorder, and mental health problems among children. Discussing the findings, we reflect on the challenges and limitations faced by researchers applying machine learning to mental health problems. Additionally, we provide concrete recommendations on potential future research and development in applying machine learning to the mental health field. Introduction Mental illness is a health problem that undoubtedly impacts the emotions, reasoning, and social interaction of a person. These issues show that mental illness has serious consequences across societies and demands new strategies for prevention and intervention. To accomplish these strategies, early detection of mental health problems is an essential step. Medical predictive analytics will reform the healthcare field broadly, as discussed by Miner et al. [1]. Mental illness is usually diagnosed based on individual self-report, using questionnaires designed to detect specific patterns of feelings or social interactions [2]. With proper care and treatment, many individuals will hopefully be able to recover from mental illness or emotional disorders [3]. Machine learning is a technique that aims to construct systems that can improve through experience by using advanced statistical and probabilistic techniques. It is believed to be a significantly useful tool for helping to predict mental health problems. It allows many researchers to acquire important information from data, provide personalized experiences, and develop automated intelligent systems [4]. The widely used algorithms in the field of machine learning, such as support vector machines, random forests, and artificial neural networks, have been utilized to forecast and categorize future events [5]. Supervised learning is the most widely applied machine learning approach in many types of research, studies, and experiments, especially in predicting illness in the medical field. In supervised learning, the terms, attributes, and values should be reflected in all data instances [6]. More precisely, supervised learning is a classification technique using structured training data [7]. Meanwhile, unsupervised learning does not need supervision to predict. The main goal of unsupervised learning is to handle data without supervision. Applications of unsupervised learning methods in the clinical field remain very limited.
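To make the distinction between supervised and unsupervised learning concrete, the short scikit-learn sketch below trains a classifier on labelled toy data and clusters the same features without labels. It is purely illustrative and is not taken from any of the reviewed studies; the synthetic data stand in for questionnaire or imaging features.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

# Synthetic features and binary diagnostic labels (placeholders for real clinical data).
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised learning: the labels are used during training to predict the diagnosis.
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("supervised accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Unsupervised learning: the same features are grouped without using any labels.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```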
In this paper, the main objective is to provide a systematic literature review, critical review, and summary of the machine learning techniques that are being used to predict, diagnose, and identify mental health problems. Moreover, this paper will propose future avenues for research on this topic. It would also give attention to the challenges and limitations of applying the machine learning techniques in this area. Besides that, potential opportunities and gaps in this field for future research will be discussed. Hence, this paper will contribute to the state of the art in the form of a systematic literature review concerning the machine learning techniques applied in predicting mental health problems. is paper hence contributes a critical summary and potential research directions that could assist researchers to gain knowledge about the methods and applications of big data in the mental health fields. Although previous papers have been published by reviewing the applications of machine learning approaches toward the mental health field [6,8], these are general review papers that discuss the applications and concepts of the techniques but do not provide a focused critical summary of the recent gaps in the literature as well as future research directions for this field. As such, this systematic literature review paper aims both to cover recent advancements in this field in addition to providing a focused critical summary concerning the gaps in the literature in terms of the applications of machine learning in the mental health field and to subsequently highlight potential avenues for future research. e audiences for this paper center around the community of practitioners who are applying machine learning techniques in mental health. Besides that, this paper is targeting the practitioners in the machine learning communities where they can keep updated on the application of machine learning nowadays particularly in the mental health field. e relevant research papers and documents are gathered and collected through academic publication repositories with specific keywords. en, the collected documents are identified and categorized into several sections in mental health problems. e performance on the machine learning algorithms or techniques that are used by the researchers is being evaluated by identifying the accuracy, sensitivity, specificity, or area under the ROC curve (AUC). Hence, the sections of this paper are organized as follows. After Introduction, the Background section presents the information about the mental health prediction problem and, subsequently, machine learning algorithms are discussed. e Methodology section will discuss the strategy of finding the relevant research documents. e Results and Discussion sections will review and examine the machine learning approaches used in predicting mental health problems. Lastly, the Conclusion section will conclude this paper. Background is review paper follows the standard process of the systematic literature review as shown in Figure 1. First of all, this review paper begins with the planning phase where the research questions or objectives are investigated and determined. In the planning phase, the data sources are being selected, and then the terms that are related to the topic will be used for searching in the data sources. In conducting the review, several aspects need to be prioritized. 
For instance, publications of the research articles or papers are identified, the studies of the related topic will be selected, and studies that satisfy the research questions will be chosen. Besides that, the evaluation part will begin by extracting the data from the chosen research articles or papers. en, further analysis will be carried on the data or evidence from the selected articles and papers. e trends of the research based on the topic will be discussed and investigated. e last part of the process is the discussion and conclusion. e limitations, drawbacks, or gaps of the research will be discussed and examined in this part. Besides that, future directions and potential areas of the research will be investigated and determined. A conclusion will be provided based on the findings from the research. Figure 2 shows the categorization and classification of the systematic literature review on this topic. e machine learning approaches are being investigated and explored within the scope of mental health problems. e scope of mental health problems is divided into five types of problems, namely, schizophrenia, anxiety and depression, bipolar disorder, posttraumatic stress disorder (PTSD), and mental health problems among children. Additionally, the data of the mental health problems are collected through several domains and sources. is paper will review and highlight the implementation of machine learning models in each mental health problem. Figure 2 presents the machine learning approaches divided into supervised learning, unsupervised learning, ensemble learning, neural networks, and deep learning. en, the machine learning models are classified based on the type of learning approaches. Besides that, the performances of the machine learning model will be included in this paper to show the efficiency of the machine learning approaches within the mental health field. For instance, the performances such as accuracy, the area under the ROC curve (AUC), F1-score, sensitivity, or specificity will be specified and mentioned in this review paper to provide further analysis. Mental Health Problems. e World Health Organization (WHO) reports the region-wise status of different barriers in diagnosing mental health problems and encourages researchers to be equipped with the scientific knowledge to address the issue of mental health [9]. Now, there are various techniques to predict the state of mental health due to advancement of technology. Research in the field of mental health has increased recently and contributed to the information and publications about different features of mental health, which can be applied in a wide range of problems [10]. Many steps are involved in diagnosing mental health problems, and it is not a straightforward process that can be done quickly. Generally, the diagnosis will begin with a specific interview that is filled with questions about 2 Applied Computational Intelligence and Soft Computing symptoms, medical history, and physical examination. Besides that, psychological tests and assessment tools are also available and are used to diagnose a person for mental health problems. ere are several types of research carried out to investigate and examine the movements of the face to identify certain mental disorders [11]. e increase of research in the mental health field has led to the rise of information in the form of finding suitable solutions to reduce mental health problems. However, the precise reasons for mental illnesses are still unclear and uncertain. 
Applied Computational Intelligence and Soft Computing 3 Types of inconvenience to the adults, especially in their families, workplaces, and in the society. ere are many types of mental disorders commonly known as schizophrenia, depression, bipolar disorder, and anxiety. Schizophrenia is a mental illness that interrupted by events of psychotic symptoms, which are hallucinations and delusions. Hallucinations are experiences that are not comprehensible to others. Meanwhile, delusions are impressions that are held by the patients although contradicted by the rational and real arguments. Schizophrenia is often diagnosed by symptoms such as social withdrawal, irritability, and increasing strange behaviours. Studies of whether an early diagnosis of such symptoms and intervention could improve the outcomes are still in progress [12]. e primary symptom of depression is an interference of the mood, which is usually severe sadness. Sometimes, anger, irritability, and loss of interests might dominate the symptoms of the depression. In terms of physiological symptoms, sleep disturbance, appetite disturbance, and decreased in energy are commonly shown across cultures. e cognitive symptoms such as slow thinking, suicidal thoughts, and guilt might occur among the patients. Most of the individuals that suffer from depression will have recurrence episodes [13]. Many individuals do not recover completely and they might have a form of chronic mild depression [14]. Bipolar disorder is another mental disorder identified by the episode of mania and depression. Sometimes, there is an episode mixed with both mania and depression. Mania is known by irritability, increased in energy, and decreased need for sleep. Individuals that experience mania often exhibit reckless behaviours. Meanwhile, a depressive episode for bipolar disorder is almost the same as the depression symptoms. Some studies report some recovery to baseline functioning between episodes; however, many patients will have residual symptoms that cause impairment [15]. Another common mental disorder is an anxiety disorder, which is usually identified as an inability to regulate fear or worry. Panic disorders belong to this category, which appears to be unexpected panic attacks and intense fear. e physiological symptoms that are caused by panic disorder include a racing heart, sweating, and dizziness. Generalized anxiety disorder is characterized by excessive worry. Emotional numbness caused by traumatic events characterizes posttraumatic stress disorder (PTSD). Individuals that have a social anxiety disorder are frequently afraid of social situations. Surveys show that delays in seeking professional treatment for an anxiety disorder are widespread [16]. Data Mining and Machine Learning. In modern days, the management and processing of data have fully grown into a popular topic in the field of computer science. Data mining is knowledge discovery in databases, which is discovering useful patterns and relationships in large volumes of data. Within the medical field, data mining techniques are increasingly applied for tasks such as text expression, drug design, and genomics [17]. Data mining techniques can be separated into two forms, which are supervised learning and unsupervised learning. For unsupervised learning, it determines the object's similarity and detects patterns through the group's data. It can be grouped into clustering, association, summarizing, and sequence discovery [18]. 
Unsupervised learning is particularly valuable in helping to identify the structure of the data automatically through learning inherent from input data when the data set is unlabelled. In short, data mining is a crucial technique in the role of computer science. e complexity of the data sets collected can be solved rapidly and swiftly through data mining. In addition, many parties can gain an advantage using data mining for better outcomes and solutions of their challenging problems. Machine learning is an application of artificial intelligence (AI), which implements systems with the capability to learn and improve from experience without being explicitly programmed. Machine learning has offered essential advantages to a wide range of areas such as speech recognition, computer vision, and natural language processing. It is allowing many researchers to extract meaningful information from the data, provide personalized wisdom, and establish automated intelligent systems [4]. It is believed that machine learning introduced many types of approaches and learning. For instance, the commonly used machine learning approaches are supervised learning and unsupervised learning. Supervised learning is an approach that predicts the outcome result with given labelled data input. Supervised learning is excellent at classification and regression problems. e purpose of this learning is to make sense of data toward the specific measurements. e unsupervised learning is in contrast to the supervised learning, which tries to make sense of data in itself. In unsupervised learning, there are no measurements or guidelines. Additionally, the ensemble learning is a process where the classifiers combined and generated strategically to solve a specific problem. e primary usage of ensemble learning is to improve the performance of a model or reduce the probability of selecting models with poor performance [19]. Moreover, neural networks and deep learning have recently become more well known among machine learning approaches due to their ability to solve many problems such as image recognition, speech recognition, and natural language processing. ese approaches are based on the neuronal networks of the brain where they enable the algorithms to learn from the observational data. In the medical field, machine learning algorithms have been used to discover new drugs, perform radiology analysis, predict epidemic outbreaks, and diagnose diseases. Generally, machine learning algorithms are tools to analyze the massive medical data sets. ey are utilized as tools in assisting for medical diagnosis as they became more reliable in their performance. From time to time, machine learning and data mining approaches continue to develop rapidly. Powerful algorithms and more advanced neural networks, decision trees, gradient boosting, and others were introduced and applied to solve more complicated medical diagnosis problems. Methodology In this review paper, the planning phase is conducted followed by the searching and analysis phase. en, the discussion of the relevant documents that are found will be highlighted and summarized in this paper. e conclusions will be presented to conclude this review paper. Several research questions or objectives for this review paper have been highlighted and investigated. First of all, we want to provide a summary of the latest research on machine learning approaches in predicting mental health problems, which can give useful information to the clinical practice. 
Besides that, this review paper also will identify the types of machine learning algorithms that have been widely used for this field. We also want to learn and investigate the limitations of the application of machine learning within this field. Moreover, we want to determine the future opportunities or research avenues that can maximize the potential of machine learning approaches within the mental health fields. For the planning stage, the sources of the database for collecting the research papers and articles are identified. e journals and conferences that are related to the research such as Journal of Psychiatric Research, International Conference on Computational Intelligence and Data Science, and International Conference on Advanced Engineering, Science, Management and Technology have been highlighted in this review paper. Besides that, the reliable publishers such as Springer, ScienceDirect, and IEEE publisher were chosen as the repositories to provide the research papers and articles. To conduct the searching and analysis, the topic stated has been explored in the following publishers' website. Besides that, the queries such as Machine Learning Algorithms in Mental Health, Psychiatric Medical with Machine Learning Techniques, and Machine Learning in Predicting Mental Health Problems have been used on these sites. e analysis phase is started by finding out and investigating the performance of the machine learning approaches that were used to diagnose or predict mental health problems. Some of the documents and research papers that do not meet the requirement of the topic will be removed. e discussion phase will begin by reviewing the machine learning algorithms used by the researchers in their experiments to predict the mental problems. Mental health problems will be divided and categorized into several parts. en, the performance for the machine learning techniques will be described and further analyzed in this phase. Besides that, the research questions will be acknowledged and answered with using the details found during the review of the literature. e conclusions related to the topic will be highlighted based on the findings and discussion. Moreover, the prediction of the mental health problems by using machine learning approaches will be generalized and summarized. is review paper will follow the standard PRISMA protocol, which stands for Preferred Reporting Items for Systematic Reviews and Meta-Analyses. It is an evidencebased minimum set of items for reporting systematic reviews and meta-analyses. Based on Figure 3, a total of 142 types of research articles and papers related to this field were found and recorded through the database searching. Besides that, additional records are also identified through other sources. e records that have already been identified will be screened where the duplicate records will be removed or excluded from this review paper. After that, the records with full-text articles that are evaluated for eligibility will be included in this review paper. However, the full-text articles or papers that do not meet the appropriate conditions will be excluded from the review paper for a reason. Hence, a total of 30 research studies related to the topic will be included and highlighted in this paper. Results In this section, the documents and information related to the machine learning approaches that have been used by the researchers to conduct a prediction or diagnosis for mental health problems will be reviewed and discussed. 
Moreover, the performance of the machine learning algorithms used will be evaluated and analyzed. e mental health problems will be categorized into several mental health disorders such as schizophrenia, anxiety and depression, bipolar disorder, posttraumatic stress disorder, and children's mental health problems. A total of 30 research articles were included in this review paper. e research articles were divided and categorized based on the mental health problems such as schizophrenia, bipolar disorder, anxiety and depression, posttraumatic stress disorder, and mental health problems among children. According to Figure 4, six research articles (20.0%) were highlighted in schizophrenia; meanwhile, seven research articles (23.3%) were analyzed in anxiety and depression. Furthermore, there are seven research articles (23.3%) included in bipolar disorder. Eight research articles (26.7%) will be discussed and investigated in posttraumatic stress disorder. ere are only two research articles (6.7%) that will be analyzed for mental health problems among children. e statistics provided in Figure 5 shows the trends of the reviewed research articles and papers based on the years. Machine Learning Approaches in Predicting Schizophrenia. According to the paper by Greenstein et al., classification of childhood-onset schizophrenia has been performed [20]. e data consist of genetic information, clinical information, and brain magnetic resonance imaging. e authors use a random forest method to calculate the probability of mental disorder. Random forest is being used in this paper because it has lower error rates compared with other methods. e accuracy of 73.7% is obtained after the classification. Applied Computational Intelligence and Soft Computing In one of the research works conducted by Jo et al., they used network analysis and machine learning approaches to identify 48 schizophrenia patients and 24 healthy controls [21]. e network properties were rebuilt using the probabilistic brain tractography. After that, machine learning is being applied to label schizophrenia patients and health controls. Based on the result, the highest accuracy is achieved by the random forest model with an accuracy of 68.6% followed by the multinomial naive Bayes with an accuracy of 66.9%. en, the XGBoost accuracy score is 66.3% and the support vector machine shows an accuracy of 58.2%. Most of the machine learning algorithms show promising levels of performance in predicting schizophrenia patients and healthy controls. e support vector machine, which is a machine learning model, has been implemented to classify schizophrenia patients [22]. e data set is obtained from the 20 schizophrenia patients and 20 healthy controls. en, the support vector machine algorithm is used for classification with the help of functional magnetic resonance imaging and single nucleotide polymorphism. After the classification, an accuracy of 0.82 is achieved with the functional magnetic resonance imaging. For the single nucleotide polymorphism, an accuracy of 74% is obtained. Srinivasagopalan et al. [23] used a deep learning model to diagnose schizophrenia. e National Institute of Health provides the data set for the experiments. e accuracy of each machine learning algorithm is obtained and recorded. e results obtained from the experiment show that deep learning showed the highest accuracy with 94.44%. e random forest recorded an accuracy of 83.33% followed by logistic regression with an accuracy of 82.77%. 
en, the support vector machine showed an accuracy of 82.68% in this experiment. In another study conducted by Pläschke et al., the schizophrenia patients were distinguished from the matched health controls based on the resting-state functional connectivity [24]. Resting-state functional connectivity could be used as a spot of functional dysregulation in specific networks that are affected in schizophrenia. e authors have used support vector machine classification and achieved 68% accuracy. Pinaya et al. applied the deep belief network to interpret features from neuromorphometry data that consist of 83 healthy controls and 143 schizophrenia patients [25]. e model can achieve an accuracy of 73.6%; meanwhile, the support vector machine obtains an accuracy of 68.1%. e model can detect the massive difference between classes involving cerebrum components. In 2018, Pinaya et al. proposed a practical approach to examine the brain-based disorders that do not require a variety of cases [26]. e authors used a deep autoencoder and can produce different values and patterns of neuroanatomical deviations. Machine Learning Approaches in Predicting Depression and Anxiety. A machine learning algorithm is developed to predict the clinical remission from a 12-week course of citalopram [27]. Data are collected from the 1949 patients that experience depression of level 1. A total of 25 variables from the data set are selected to make a better prediction outcome. en, the gradient boosting method is being deployed for the prediction because of its characteristics that combine the weak predictive models when built. An accuracy of 64.6% is obtained by using the gradient boosting method. In order to identify depression and anxiety at an early age, a model has been proposed by Ahmed et al. [28]. e model involves psychological testing, and machine learning algorithms such as convolutional neural network, support vector machine, linear discriminant analysis, and K-nearest neighbour have been used to classify the intensity level of the anxiety and depression, which consists of two data sets. Based on the results obtained, the convolutional neural network achieved the highest accuracy of 96% for anxiety and 96.8% for depression. e support vector machine showed a great result and was able to obtain an accuracy of 95% for anxiety and 95.8% for depression. Besides that, the linear discriminant analysis reached the accuracy of 93% for anxiety and 87.9% for depression. Meanwhile, the K-nearest neighbour obtained the lowest accuracy among the models with 70.96% for anxiety and 81.82% for depression. Hence, Applied Computational Intelligence and Soft Computing the convolutional neural network can be a helpful model to assist psychologists and counsellors for making the treatments efficient. In the research paper by Sau and Bhakta, they developed a predictive model for diagnosing the anxiety and depression among elderly patients with machine learning technology [29]. Elderly patients have different sociodemographic factors and factors related to health. e data set involved 510 geriatric patients and tested with a tenfold cross-validation method. en, ten classifiers as shown in Table 1 were selected to predict the anxiety and depression in elderly patients. e metrics of each classifier were evaluated and summarized. According to Table 1, the highest prediction was obtained by random forest with 89.0%. en, the J48 accuracy score was 87.8% followed by random subspace with an accuracy of 87.5%. 
Random tree showed the prediction accuracy with 85.1%; meanwhile, the Bayesian network achieved an accuracy of 79.8%. Next, the naive Bayes and multilayer perceptron achieved the accuracy of 79.6% and 77.8%, respectively. Sequential minimal optimisation and K-star achieved the same accuracy, which is 75.3%. Finally, logistic regression showed the lowest accuracy prediction of 72.4%. In research conducted by Katsis et al., a system based on physiological signals for the assessment of affective states in anxiety patients has been proposed [30]. e system is proposed to predict the affective state of an individual according to five predefined classes, which are neutral, relaxed, startled, apprehensive, and very apprehensive. e authors use machine learning algorithms in this research such as artificial neural networks, random forest, neurofuzzy systems, and support vector machine. e neuro-fuzzy system can obtain the highest accuracy with a score of 84.3% followed by random forest with an accuracy of 80.83%. Meanwhile, the support vector machine and artificial neural networks achieved the accuracies of 78.5% and 77.33%, respectively. A research paper by Sau and Bhakta shows the prediction of depression and anxiety among seafarers [31]. Seafarers are easily exposed to mental health problems, which typically are depression and anxiety. Hence, machine learning technology has been useful in predicting and diagnosing them for early treatments. e authors were able to obtain a data set of 470 seafarers who were interviewed. In this research conducted by them, features including age, educational qualification, marital status, job profile, type of family, duration of service, existence or nonexistence of heart disease, body mass index, hypertension, and diabetes have been selected to predict the outcome. Five classifiers, which are CatBoost, random forest, logistic regression, naive Bayes, and support vector machine, were chosen on the training data set with 10-fold crossvalidation. In order to determine the strength of the machine learning algorithms, the data set with 56 instances are deployed on the trained model. For the training set, the results indicate that the boosting algorithms method CatBoost performs best on this training data set with an accuracy of 82.6%. Random forest has achieved a satisfying accuracy score of 81.2%; meanwhile, logistic regression obtained an accuracy score of 77.8%. e support vector machine and naive Bayes obtained 76.1% and 75.8%, respectively. For the test data set, the CatBoost algorithm has performed better than the other machine learning algorithms with a predictive accuracy of 89.3%. Meanwhile, logistic regression has performed very well with the predictive accuracy of 87.5%. Besides, the support vector machine and naive Bayes score the accuracy with the same percentage, which is 82.1%. e random forest shows the lowest accuracy percentage score of 78.6% for the test data set. Hilbert et al. used machine learning approaches to separate the complicated subjects from healthy ones and distinguish generalized anxiety disorders from major depression without generalized anxiety disorder [32]. For the data set, they used the multimodal behavioural data from a sample of generalized anxiety disorders, healthy persons, and major depression. ey applied a binary support vector machine and found out that the prediction of generalized anxiety disorders was difficult when using the clinical questionnaire data. 
Meanwhile, the input involves the inclusion of cortisol and grey matter volume can reach accuracies of 90.10% and 67.46% for the classification of case and disorder, respectively. A study has been conducted to detect depression from text and audio by Jerry and others [33]. e study aims to collect the data and improve the analysis from the features of text and voice. e mean of F1-score is analyzed and recorded to determine the best performance among the machine learning algorithms. Tables 2 and 3 show the performance of the machine learning algorithms in detecting depression in text and audio features, respectively. Based on Tables 2 and 3, random forest has shown the best performance for the text features. With a mean F1-score of 0.73, random forest outperforms all the baseline algorithms. Meanwhile, XGBoost called extreme gradient boosting shows the best performance for audio features with a mean F1-score of 0.50. It is slightly better than the other algorithms. Machine Learning Approaches in Predicting Bipolar Disorder. In research performed by Rocha-Rego et al., the authors examined the practicality to determine the bipolar disorder patients from healthy controls by using pattern [34]. e data samples consist of two populations that remitted bipolar disorder patients. A Gaussian process classification algorithm is applied to grey matter and white matter structural magnetic resonance imaging data. e result shows that the accuracy of the algorithm for the grey matters is 73% in study population 1 and 72% in study population 2. Meanwhile, the classification of white matters scored the accuracy of 69% in study population 1 and 78% in population 2. Grotegerd et al. applied two machine learning models to differentiate depressed bipolar from unipolar patients [35]. e samples involve neuroimaging acquisition where the support vector machine manages to obtain accuracies of 90% in happy against neutral face, 75% in negative against the neutral faces, and 80% when merging the expressions. Meanwhile, the Gaussian process classification shows accuracies of 70% in happy against neutral face, 70% in negative against the neutral faces, and 75% when fusing the expressions. Valenza et al. suggested a PSYCHE system that functions as some wearable device and the data gathered will be further analyzed for predicting the mood changes in bipolar disorder [36]. e data set consisted of electrocardiogram signals recorded from the patients, and heart rate features from the signals will be selected as the prediction outcome. After applying the support vector machine, an average accuracy of 69% is obtained in predicting the mood states in bipolar disorder. In another study by Mourão-Miranda et al., the authors applied functional magnetic resonance imaging to explore the differences of the brain activity in patients that have bipolar disorder, major depressive disorder, and healthy controls [37]. e Gaussian process classification algorithm is then trained to determine the bipolar disorder from unipolar depression. e algorithm can achieve an accuracy of 67% with a specificity of 72% and sensitivity of 61%. In a research article by Roberts et al., a support vector machine is used to distinguish bipolar disorder patients, risk subjects, and healthy controls [38]. e research involves the data from resting functional connectivity of the left inferior frontal gyrus. e authors used three classes at once to classify the target individual. 
Based on the result, an overall accuracy of 64.3% was obtained, with an accuracy of 74.5% for bipolar disorder, 64.5% for at-risk subjects, and 58.0% for healthy controls. Another study, published by Akinci et al., shows that machine learning techniques were also applied to neuropsychological tests [39]. The authors proposed a noninvasive approach to predict bipolar disorder. The different positions of the pupil were monitored by using an eye pupil detection system. Moreover, the time intervals taken by the pupils when glancing at particular positions and making decisions are measured by the system. With the samples of the eye pupil data set, the support vector machine was applied for the prediction. The prediction reached an impressive accuracy score of 96.36%. There are several kinds of research using machine learning approaches and neuropsychological measures to determine bipolar disorder. Wu et al. conducted an experiment to investigate and determine bipolar disorder among individual patients by using neurocognitive abnormalities [40]. A machine learning method known as the LASSO algorithm was then applied to analyze the individual patients with bipolar disorder. An accuracy of 71% and an AUC of 0.714 were obtained through this experiment. Machine Learning Approaches in Predicting Posttraumatic Stress Disorder (PTSD). A study conducted by Reece et al. uses the machine learning algorithm random forest to predict PTSD and depression among Twitter users [41]. The authors analyzed more than 243,000 posts from Twitter related to users who had experienced PTSD. Then, the data consisting of PTSD users and healthy controls were applied in the prediction. With random forest, the authors were able to predict PTSD with an AUC score of 0.89. Leightley et al. applied machine learning techniques for identifying PTSD among the military forces in the United Kingdom [42]. The authors collected data on around 13,690 subjects from the military forces between 2004 and 2009 and used the data for the prediction of PTSD. Various machine learning algorithms were applied in the prediction. From the experiments, it was found that random forest achieved the highest accuracy in the prediction, which is 97%. Meanwhile, bagging obtained an accuracy of 95%, followed by the support vector machine with an accuracy of 91%. The artificial neural network achieved the lowest accuracy among the machine learning algorithms, which is 89%. Another piece of research on machine learning approaches in PTSD prediction was conducted by Papini et al. [43]. The authors utilized clinical data, psychological questionnaires, and localization variables when conducting the research. The data set consists of 110 PTSD patients and 231 trauma-exposed controls. A machine learning algorithm known as gradient-boosted decision trees was built and applied due to its capability in handling nonlinear relationships in the data. In a further study, by Conrad et al. [44], the authors use a sample of 441 trauma-exposed subjects as the training data set and 211 trauma-exposed subjects as a new testing data set. Machine learning techniques such as random forest with conditional inference, least absolute shrinkage and selection operator (LASSO), and logistic regression were applied to predict PTSD among the survivors. Based on the results obtained, random forest with conditional inference showed the highest accuracy of 77.25%, compared with 74.88% for LASSO and 75.36% for logistic regression.
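Several of the studies summarized above follow the same basic recipe: a handful of standard classifiers are fitted on a training set with 10-fold cross-validation and then reported against a held-out test set. The sketch below illustrates that generic workflow in Python with scikit-learn; the synthetic feature matrix, the 470/56 split, and the particular models are illustrative assumptions and not the exact pipelines used in the cited papers.

```python
# Illustrative classifier comparison with 10-fold cross-validation and a
# held-out test set (all data here are synthetic stand-ins, not study data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Hypothetical tabular features (e.g. questionnaire/clinical variables) and a
# binary outcome such as "PTSD" vs "no PTSD".
X, y = make_classification(n_samples=470, n_features=10, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=56, random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
    "naive Bayes": GaussianNB(),
    "support vector machine": SVC(kernel="rbf"),
}

for name, model in models.items():
    # 10-fold cross-validated accuracy on the training portion ...
    cv_acc = cross_val_score(model, X_train, y_train, cv=10, scoring="accuracy").mean()
    # ... and accuracy on the held-out test portion.
    test_acc = model.fit(X_train, y_train).score(X_test, y_test)
    print(f"{name:24s} 10-fold CV accuracy = {cv_acc:.3f}, test accuracy = {test_acc:.3f}")
```

Reporting both the cross-validated and held-out accuracies, as most of the reviewed papers do, guards against the optimistic bias of evaluating a model only on the data it was trained on.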
Another study that also used machine learning approaches, in this case for the prediction of PTSD from audio recordings, was presented by Marmar et al. [45]. The authors collected speech samples from warzone-exposed veterans. Then, the speech attributes that could help in predicting PTSD, such as slower and more monotonous speech and less change in tonality, were extracted from the clinical interviews. Random forest was used in the prediction and the model reached an accuracy of 89.1% with an AUC of 0.954. In addition, Vergyri et al. worked on audio recordings from war veterans, comparing the speech elements of clinicians and patients to predict PTSD [46]. In the research, they collected data from 39 male patients and explored three types of features: frame-level features, longer-range prosodic features, and lexical features. Then, they selected Gaussian backend, decision tree, neural network, and boosting classifiers for the prediction model. Using several machine learning models, an overall accuracy of 77% was obtained in the prediction of PTSD. In the study conducted by Salminen et al., the authors applied a support vector machine to diagnosing PTSD among war veterans by using cortical and subcortical imaging [47]. The data collected are from 97 war veterans who were exposed to early life stress and participated in military engagements. Furthermore, they selected the surface of the right posterior cingulate as a major attribute in the classification. The authors obtained the diagnosis of PTSD with a relatively low accuracy of 69%. Additionally, Rangaprakash et al. introduced support vector machines for identifying areas related to PTSD by combining functional magnetic resonance imaging and diffusion tensor information [48]. A sample of 87 male soldiers was collected and analyzed to obtain the relevant information and features. After classification using the support vector machine, the authors achieved an accuracy of 83.59% and found a relationship between hippocampal-striatal hyperconnectivity and PTSD. Machine Learning Approaches in Predicting Mental Health Problems among Children. In the research paper by Sumathi and Poorna, the authors predicted mental health problems among children with various machine learning approaches [49]. The factors, symptoms, and psychological tests of the mental health problems were observed by professionals. The data set, containing 60 instances, was obtained from a clinical psychologist. Several features and attributes were selected for the classification process. Different machine learning algorithms were applied to this problem to test their prediction accuracies. From the result shown in Table 4, the machine learning technique called average one-dependence estimator (AODE) recorded 71% accuracy. Meanwhile, MLP shows the highest accuracy, which is 78%. Next is the logical analysis tree (LAT) with 70% accuracy; meanwhile, the multiclass classifier is at 58% accuracy. Another machine learning technique called radial basis function network (RBFN) records an accuracy of 57%. K-star and functional tree (FT) obtained the same accuracy score of 42% in this experiment. In recent research conducted by Tate et al., the authors applied machine learning algorithms to predict mental health problems among children [50]. The data consist of a total of 7638 twins from the Child and Adolescent Twin Study in Sweden.
They used 474 predictors extracted from register data and parental data. Then, the Strengths and Difficulties Questionnaire was applied to determine the outcome. Based on the result from the test set, random forest showed the highest AUC of 0.739, followed by the support vector machine with an AUC of 0.736. The neural network recorded an AUC of 0.705. Then, logistic regression scored an AUC of 0.700, and XGBoost performed on the test set with an AUC of 0.692. Summary. The articles utilizing machine learning approaches in predicting mental health problems are listed in Table 5. Critical Analysis and Discussion In this paper, a total of 30 research papers have been reviewed and evaluated in which the use of machine learning techniques or approaches in predicting mental health problems is highlighted. The research papers and articles have been divided and categorized by type of mental health problem, such as schizophrenia, depression, anxiety, bipolar disorder, and PTSD. Besides that, the performance of the machine learning mechanisms being applied has been highlighted because it could provide benefits to the medical field in the data mining and big data fields. Based on the summary provided in Table 5, there are 6 articles that applied various machine learning approaches to identify and predict schizophrenia patients with different data sets [20][21][22][23][24][25]. Several research projects have been conducted to analyze and classify depression and anxiety. Seven research papers have been reviewed in this paper to evaluate the performance of machine learning techniques in determining depression and anxiety among people [27][28][29][30][31][32][33]. Besides that, there are 7 studies on the mental health problem bipolar disorder. These studies were conducted to predict bipolar disorder among patients by using machine learning approaches [34][35][36][37][38][39][40]. In addition, research on the application of machine learning in predicting PTSD has been gaining popularity, and thus 8 research articles highlighting this problem are covered in this paper [41][42][43][44][45][46][47][48]. There are also 2 articles that predict mental health problems among children with various machine learning approaches [49,50]. In terms of the sample data sets used by the researchers, the data sets used for classification are mostly of small size, below 100 subjects. For example, the authors Jo et al. [21], Yang et al. [22], Rocha-Rego et al. [34], Grotegerd et al. [35], Mourão-Miranda et al. [37], Akinci et al. [39], Wu et al. [40], Vergyri et al. [46], Salminen et al. [47], and Rangaprakash et al. [48] applied small sample sizes for their classifications. Moreover, some studies were conducted using moderately large data sets, above 100 subjects. Research papers using data sets of this size include Greenstein et al. [20], Srinivasagopalan et al. [23], Pläschke et al. [24], Pinaya et al. [25], Roberts et al. [38], Reece et al. [41], and Marmar et al. [45]. Researchers also performed predictions with large data sets. For instance, Chekroud et al. [27], Sau and Bhakta [29], Sau and Bhakta [31], Leightley et al. [42], Papini et al. [43], Conrad et al. [44], and Tate et al. [50] utilized large data sets, above 300 subjects, in the prediction of mental health problems. Not only that, some authors such as Ahmed et al.
[28], Katsis et al. [30], Hilbert et al. [32], Xu et al. [33], Valenza et al. [36], and Sumathi and Poorna [49] conducted their classification experiments with different types of data sets. In these research articles, data sets consisting of interviews, questionnaires, electrocardiogram signals, physiological signals, and text and audio data were used to perform the classifications. According to the research papers reviewed, the classification experiments were conducted with various machine learning models. It is undeniable that machine learning models such as random forest and support vector machine have been the most popular choices in the experiments. This is because random forest and support vector machine are, most of the time, able to provide excellent performance in terms of accuracy. For example, Greenstein et al. [20], Jo et al. [21], Yang et al. [22], Srinivasagopalan et al. [23], Pläschke et al. [24], Pinaya et al. [25], Sau and Bhakta [29], Ahmed et al. [28], Katsis et al. [30], Sau and Bhakta [31], Hilbert et al. [32], Xu et al. [33], Grotegerd et al. [35], Valenza et al. [36], Roberts et al. [38], Akinci et al. [39], Reece et al. [41], Leightley et al. [42], Conrad et al. [44], Marmar et al. [45], Salminen et al. [47], Rangaprakash et al. [48], and Tate et al. [50] applied random forest and support vector machine in the classification of mental health problems. In the results provided by the authors, accuracy is usually presented as the performance measure for the machine learning models in predicting mental health problems. Hence, this paper highlights the performance of the machine learning used in the experiments for each mental health problem stated. First of all, the support vector machine shows an unsatisfying performance in classifying schizophrenia patients, with accuracy lower than 70%, as stated by Jo et al. [21], Pläschke et al. [24], and Pinaya et al. [25]. However, the support vector machine presents an excellent accuracy as stated by Yang et al. [22] and Srinivasagopalan et al. [23]. Moreover, random forest has provided great accuracy in the experiments conducted by Greenstein et al. [20] and Srinivasagopalan et al. [23], but Jo et al. [21] show that random forest obtains a low accuracy, which is 68.9%. A research article published by Srinivasagopalan et al. [23] shows that deep learning can provide an excellent accuracy of 94.44% in classifying schizophrenia. In classifying depression and anxiety cases with machine learning models, the studies conducted show better results in terms of accuracy. Most of the research articles show that machine learning models obtained accuracies above 70%. However, Chekroud et al. [27] report that gradient boosting achieves an accuracy of 64.6%. Meanwhile, the convolutional neural network obtained excellent performance, with an accuracy of 96.0% for anxiety classification and 96.8% for depression classification, as stated in the article by Ahmed et al. [28]. Besides, random forest and support vector machine perform very well in classifying depression and anxiety cases, as stated in the research articles by Sau and Bhakta [29], Katsis et al. [30], Sau and Bhakta [31], and Hilbert et al. [32]. Other research articles show different results obtained from bipolar disorder prediction with machine learning models. Rocha-Rego et al. [34] and Grotegerd et al.
[35] applied a machine learning model known as Gaussian process classification and obtained average performance above 70%. Meanwhile, Mourão-Miranda et al. [37] obtained an accuracy of 67% by using Gaussian process classification. Besides that, the support vector machine provides an unsatisfying performance, with an accuracy of 64.3% in Roberts et al. [38] and an accuracy of 69% in Valenza et al. [36]. However, this machine learning model can reach an accuracy score of 96.36% when predicting bipolar disorder, as stated by Akinci et al. [39]. When predicting PTSD, the machine learning models commonly used are random forest and support vector machine. In the reviewed research articles, random forest has shown an excellent performance in predicting PTSD individuals. For example, Leightley et al. [42] achieved 97% accuracy with random forest. In addition, Marmar et al. [45] and Reece et al. [41] applied random forest in their studies to predict PTSD individuals. From the results, Marmar et al. [45] obtained an accuracy score of 89.1% with random forest; meanwhile, Reece et al. [41] managed to reach an AUC of 0.89 with random forest. In a research article by Leightley et al. [42], the authors utilized the support vector machine and obtained a satisfying accuracy of 91%. Besides that, Rangaprakash et al. [48] have shown that the support vector machine can achieve an accuracy of 83.59% when classifying PTSD among male soldiers. However, the support vector machine shows some drawbacks when predicting PTSD among war veterans in Salminen et al. [47], where it only obtained an accuracy of 69%. In the research articles published by Sumathi and Poorna [49] and Tate et al. [50], the authors used machine learning models to predict mental health problems among children. From the obtained results, Sumathi and Poorna showed that multilayer perceptrons can achieve an accuracy of 78%, which is the highest accuracy among the machine learning models tested [49]. Moreover, Tate et al. applied machine learning models to predict mental health problems with a data set of twin children. They obtained the highest AUC, 0.739, by using random forest, followed by the support vector machine at 0.736 [50]. Gaps in the Literature. In this section, it is important to describe the challenges and limitations encountered by the researchers in order to identify the gaps in the literature on machine learning approaches in this field. Small Sample Size. It is notable that most of the reviewed research lacks a sufficient sample size, or applies a small sample size, in its experiments. Even though machine learning exhibits robustness when analyzing large samples, certain approaches can perform with a small sample without compromising accuracy, depending on the settings of the model applied in the experiments. Vabalas et al. mentioned that use of small samples is common in the field of mental health because of the cost of data collection involving human participants and because experimental protocols for different conditions are still under development [51]. Insufficient Validation. Due to small sample sizes and insufficient acceptable validation from external sources, much of the research is still at a proof-of-concept stage.
For example, structural neuroimaging research projects are usually carried out in subjects who already have a mental health illness. This makes it difficult to decide whether structural brain alterations are risk factors, results, or sources of the illness. The researchers should cooperate with clinical professionals, who can provide important information such as validation, ground truth, and biases, which could guide the analysis of data, improve accuracy, and manage deployment risks [52]. [53]. Such exploration is very crucial for the researchers to convince the medical professionals to apply the predictive mental health system. Lack of Real-Life Testing. Although machine learning can show researchers how mental health problems might be predicted, there is still a lack of testing applied in real life, for several reasons. Many medical professionals still doubt the accuracy of automated methods such as machine learning, and there are issues of consistency and difficulty when applying machine learning predictive systems to real-world medical practices. Dang et al. stated that there is no standard way to collect high-quality data, that there is difficulty in obtaining labels, which causes supervised learning approaches to be inconsistent, and that there is also a lack of acknowledged best practices in handling machine learning models [54]. Such challenges and reasons could reduce the real-life application of machine learning models in the mental health field. Avenues for Future Research. Next, this paper will highlight the specific approaches that could help develop research toward the effective application of machine learning in this field. Exploration in Deep Learning. The success of applying machine learning approaches in mental health prediction can be expanded to include deep learning approaches. Such approaches could even predict mental health problems together with the diagnosis of other chronic diseases such as cancer, diabetes, and others. Deep learning architectures for image processing can be useful to identify and predict mental health problems from facial expressions. In this context, deep learning architectures hopefully could be combined with memory [55] and attention mechanisms [56] to build clinical architectures with greater accuracy. High-Quality Data. In order to develop more accurate predictive tools, data such as sociodemographics, speech, medical report profiles, and facial expressions of the patients can be recorded or captured via photography, combined with magnetic resonance imaging of the brain. In this approach, the data have a larger volume where deep learning algorithms can be useful and applied. Obtaining such a detailed and large data set poses a challenge for the mental health field and requires immediate collaboration among institutes and organizations [57]. The application of new models to predict clinical results should be given research attention. Besides, web-based predictors and medical analytics tools should be developed to transform effective predictive models into useful clinical decision systems, for example for identifying the different types of mental disorders, medication plans, as well as preventive plans. For instance, Psycho Web is being developed as an application that allows users to collect and predict data from mental health patients using machine learning [58]. However, this application is still in its infancy and undergoing continual improvements. Explainable Model.
Both the performance of machine learning models and their explainability are necessary for mental health problems. Medical professionals need to understand the underlying system of prediction and classification very well before practising it in the real world and with patients. Making the results obtained by these models understandable should be the main priority toward establishing reliable systems. In a paper by Holzinger et al., the authors encouraged an innovative and interactive explainable approach called counterfactual graphs for the beneficial future interaction between humans and artificial intelligence [59]. Transfer Learning and Flexible Algorithms. Transfer learning is a technique developed for adaptability to different purposes, which could help improve the generalization performance of machine learning models. Transfer learning has been widely applied in fields that require image analysis, which could be useful to incorporate in clinical settings [60]. Meanwhile, flexible algorithms will be a main challenge for mental health applications because of heterogeneity in the input data. Machine learning models need to have a life-long learning framework, as this can help prevent catastrophic forgetting [61]. Achieving the best results with these future opportunities will need cooperative efforts between data researchers, computer scientists, and medical professionals. Conclusion Many different techniques and algorithms have been introduced and proposed to test and solve mental health problems. There are still many solutions that can be refined. In addition, there are still many problems to be discovered and tested using a wide variety of settings in machine learning for the mental health domain. As classifying mental health data is generally a very challenging problem, the features used in the machine learning algorithms will significantly affect the performance of the classification. The existing studies and research show that machine learning can be a useful tool in helping understand psychiatric disorders. Besides that, it may also help distinguish and classify mental health problems among patients for further treatment. Newer approaches that use data arising from the integration of various sensor modalities present in technologically advanced devices have proven to be a convenient resource to recognize the mood states and responses of patients, among others. It is noticeable that most of the research and studies are still struggling to validate their results because of an insufficiency of acceptable validated evidence, especially from external sources. Besides that, most machine learning models might not have the same performance across all problems. The performance of the machine learning models will vary depending on the data samples obtained and the features of the data. Moreover, machine learning models can also be affected by preprocessing activities such as data cleaning and parameter tuning in order to achieve optimal results. Hence, it is very important for researchers to investigate and analyze the data with various machine learning algorithms in order to choose the algorithm with the highest accuracy [62]. Not only that, the challenges and limitations faced by researchers need to be managed with proper care to achieve satisfactory results that could improve clinical practice and decision-making. Data Availability The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest The authors declare that they have no conflicts of interest.
2022-01-08T16:21:26.063Z
2022-01-05T00:00:00.000
{ "year": 2022, "sha1": "c68bb7739d63ddb7505836f9154a4078b7e368fc", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2022/9970363", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ae239ece116cd7a78f67af1ad08ad451036beb5e", "s2fieldsofstudy": [ "Psychology", "Computer Science", "Medicine" ], "extfieldsofstudy": [ "Computer Science" ] }
17724768
pes2o/s2orc
v3-fos-license
Gene Expression, Bacteria Viability and Survivability Following Spray Drying of Mycobacterium smegmatis We find that Mycobacterium smegmatis survives spray drying and retains cell viability in accelerated temperature stress (40 °C) conditions with a success rate that increases with increasing thermal, osmotic, and nutrient-restriction stresses applied to the mycobacterium prior to spray drying. M. smegmatis that are spray dried during log growth phase, where they suffer little or no nutrient-reduction stress, survive for less than 7 days in the dry powder state at accelerated temperature stress conditions, whereas M. smegmatis that are spray dried during stationary phase, where cells do suffer nutrient reduction, survive for up to 14 days. M. smegmatis that are spray dried from stationary phase, subjected to accelerated temperature stress conditions, regrown to stationary phase, spray dried again, and resubmitted to this same process four consecutive times, display, on the fourth spray drying iteration, an approximate ten-fold increase in stability during accelerated temperature stress testing, surviving up to 105 days. Microarray tests revealed significant differences in genetic expression of M. smegmatis between log phase and stationary phase conditions, between naïve (non spray-dried) and multiply cycled dried M. smegmatis (in log and stationary phase), and between M. smegmatis in the dry powder state following a single spray drying operation and after four consecutive spray drying operations. These differences, and other phenotypical differences, point to the carotenoid biosynthetic pathway as a probable pathway contributing to bacteria survival in the spray-dried state and suggests strategies for spray drying that may lead to significantly greater room-temperature stability of mycobacteria, including mycobacterium bovis bacille Calmette-Guerin (BCG), the current TB vaccine. Introduction Tuberculosis kills more than three million people annually and is ranked among the top ten causes of global mortality and morbidity [1]. The current Mycobacterium bovis bacille Calmette-Guerin (BCG) TB vaccine, which is given intradermally to 100 million infants annually, is formulated as a dry powder via freeze drying (lyophilization) [2,3]. This process typically results in a live attenuated vaccine with 10-30% viability relative to the pre-dried formulation [4]. When kept at refrigerated conditions the commercial lyophilized BCG loses approximately one log of activity after one year to 18 months on the shelf. This is dramatically reduced when placed at room temperature stability conditions (25 °C) resulting in a month or two of accepted viability [1]. Preserving the viability of BCG in dried powders is thought to be an important factor in the potency of the vaccine [5]. Thermostability is of particular importance due to the rugged conditions typically encountered in the regions of the world affected by infectious disease. Previous work in our lab has shown that we have been able to improve on the typical viability and stability achieved through lyophilization. This is done by spray drying the bacteria in a dilute osmolyte solution. Increasing the osmolyte concentration in spray dried solution leads to less viability ultimately reflecting stresses that lead to cell death [6]. In general mycobacteria have well known cellular responses to environmental crisis and stresses such as heat shock, cold shock, nutrient limitation, and osmotic and oxidative stresses [7]. 
During the formulation process mycobacteria are exposed to stresses, which can cause cell damage and death. It is likely that bacteria that can survive the spray drying process more significantly express protective agents that render these bacteria more resistant to osmotic, heat and nutrient limitation stresses. We have therefore hypothesized that by repeatedly exposing bacteria to stresses involved in the processes of spray drying and dry state containment, we might succeed in selecting for bacteria populations with greater biochemical and biophysical ability to survive. We chose to work with M. smegmatis as an illustrative mycobacterium given relative rapid growth and previous experience in spray drying. We spray dry M. smegmatis in dilute osmolyte conditions, recover the dry powder and expose the dry powder to 40 °C conditions sufficiently long enough to eliminate nearly all viable bacteria. We then re-suspend the highly stressed dry powder in culture media and grow the remaining live bacteria to stationary phase. This process was repeated (cycled) several times after which we examined the bacterial RNA through microarrays to quantify differences in gene expression. By selecting viable bacteria in harsh stability conditions and identifying protective factors that allow them to survive, we hope to identify mechanisms through which highly robust and thermostable bacteria may be formulated so as to persist in the dry powder state. Ideally these results could then be applied to a broad range of live or attenuated whole-cell vaccines against infectious pathogens including M. tuberculosis. Results and Discussion M. smegmatis cultures were formulated into dry powders and placed in accelerated stability conditions at 40 °C and the viability was followed over time. The dry powders were prepared from: (1) bacteria growing in optimal exponential growth phase conditions (2) bacteria that had entered stationary phase and (3) bacteria that were exposed to repeated spray drying and post-drying exposure to 40 °C conditions -for four cycles of spray drying. Viability As illustrated in Figure 1, bacteria dried after growing in log phase conditions exhibit the least resistance to the accelerated stability conditions, resulting in complete loss of viability within 7 days (n = 3). When the bacteria are grown to stationary phase for 24 hours, and then spray dried, they are able to survive longer in the desiccated state at accelerated stability conditions, with no detectable colonies after 14 days (n = 3). Viability over time in the desiccated state continued to increase as the formulations were cycled through the drying and heat-exposure process. "Cycling" consisted of repeated application of the following steps: first culturing bacteria to stationary phase, then processing cultures for spray drying (centrifugation and re-suspension in low osmolyte excipient solutions), then spray drying, then collecting and processing the dry powder (vial filling), then incubating the vials at 40 °C in stability chambers until viable bacteria were mostly eliminated, then culturing surviving bacteria from dry powder to stationary phase. After repeating the cycle four times ("multiply cycled bacteria") the bacteria showed an almost 10-fold increase in stability with the ability to form colonies until 105 days (n = 3). Phenotype The increased viability over time of multiply cycled bacteria was accompanied by some minor changes in growth rate and overall gross morphology differences between the colony forming units. 
In log growth phases, the wild type non-spray dried bacteria exhibited a doubling time of 2.4 ± 0.3 hours (n = 3), whereas multiply cycled bacteria doubled approximately every 3.1 ± 0.1 hours (n = 3) ( Figure 2). Surface topology was identical between colonies with both the non-previously spray dried bacteria and the multiply cycled bacteria exhibiting rough morphology. Strikingly, the color of the multiply cycled bacteria colonies differed from the non-spray dried bacteria. Approximately 30 ± 5% of the colonies on multiply cycled plates were orange pigmented upon removal from the plate incubator whereas only 5 ± 3% of the wild type non-spray dried plates were orange colored upon removal. This pigmented phenotype began to emerge after the second spray drying cycle and became dominant by the fourth cycle. The proportion of multiply cycled colonies exhibiting pigmentation, as well as the intensity of the pigmentation, increased when plates were left on the bench-top and exposed to light and air. The percentage of heavily pigmented colonies grew to greater than 90% ± 5% after 1 day exposure to light and air ( Figure 3). M. smegmatis colony forming units of (a) wild-type non-spray dried bacteria and (b) multiply-cycled bacteria after 1 day exposure to light and air. Bacteria are not exposed to light during incubation. The orange phenotype will emerge in the wild-type strain after exposure to air and light at low frequency. Multiply-cycled bacteria emerge from the incubator with the orange phenotype which becomes more intense upon exposure to light and air. Gene Expression We performed two sets of gene expression experiments to uncover factors important for sustained viability in the dry powder formulation process. In the first experiment we examined gene expression in log phase and stationary phase cultures of bacteria, neither of which had been previously exposed to spray drying. In our second set of experiments we compared gene expression in non-previously spray dried bacteria to that in multiply spray dried bacteria. In this case we made head-to-head comparisons in log phase, stationary phase, and dry powders that had 24 hour exposure to accelerated stress conditions. a b Log versus Stationary Comparison in Non-Spray Dried Cultures We extracted RNA from log phase (O.D. = 1.0) and stationary phase (O.D. > 3.0) bacteria and performed four microarrays -two biological replicates each with a dye swap to minimize dye specific bias. As expected, significant differential gene expression was observed. Out of approximately 7000 genes on the microarray, about 2500 were differentially expressed at a p-value < 0.05 level of significance. Out of these 2500 genes, approximately 1400 were differentially expressed with a pvalue < 0.01. The log 2 median average intensity of the M. smegmatis spots was 9.8 whereas the median average intensity for the A. thaliana control spots was 7.2. This indicated that signal was, on average, 5-fold greater than non-specific cross-hybridization noise. Genes up-regulated in log phase over stationary phase included a nearly complete complement of ribosomal proteins (Appendix Table 1) as well genes that are important for growth including electron transport (e.g. ATP synthase components), energy metabolism (e.g. TCA cycle enzymes), and cell maintenance needs (e.g. lipid metabolism and protein folding) (Appendix Table 2). 
Genes upregulated in stationary phase over log phase included those typically associated with states of stress including catalases, nitrite reductases, alternative sigma factors, and various amino acid permeases and transporters (Appendix Table 3). Two clusters related to the expression and assembly of [NiFe] hydrogenase were up-regulated along with other stress related genes included UsfY (MSMEG_1769 and MSMEG_1791), the starvation-induced DNA protecting protein (MSMEG_6467), the sporulation factor WhiB (MSMEG_1597 and MSMEG_1953), and L-lysine-epsilon aminotransferase (MSMEG_1764). Since differential regulation of gene expression is mainly controlled by the presence of primary and alternative sigma factors we expected to see significant up-regulation of MysA (primary housekeeping factor) in log phase and sigB and sigF in stationary phase (stress related factors) [8]. While we found that these were three of the six most highly expressed transcripts, as measured by average intensity across all channels, there was little evidence of differential expression (Appendix Table 4). Instead we found that two sigma factors related to the sigma-54 factor (nitrogen limitation and alternative carbon utilization [9]) and two sigD factors (alternative stress [10]) were most differentially expressed with respect to stationary phase as well as a large (100kD), uncharacterized sigma factor expressed with respect to log phase. Non-Previously Spray Dried versus Cycled In our second set of experiments, we performed microarray analysis that compared gene expression in bacteria that had never been spray dried to that in bacteria that had been subjected to multiple spray drying cycles. We compared the differently processed bacteria by performing four microarrays in log phase (two biological replicates each with a dye swap), three microarrays in stationary phase (two biological replicates with a single swap), and two microarrays in dry powder form (single biological sample with a dye swap). In the log phase comparison, 79 genes were differentially expressed with a p-value < 0.05 of which 36 were differentially expressed at a p-value < 0.01 level of significance. All but two of these genes, acyl-CoA dehrydrogenase (MSMEG_1821) and malonyl CoA-acyl carrier protein transacylase (MSMEG_4325), were upregulated in the multiply cycled bacteria. In the stationary phase comparison there were no genes differentially expressed at p-value < 0.05 level of significance. However, using the log odds scores calculated by the Limma statistical package we found that there were ten genes that had 50% or greater probability of differential expression (three up-regulated in non-cycled bacteria and seven up-regulated in multiply cycled bacteria -see Appendix Table 6). In addition, there was a significant number that had some (>10%) probability of differential expression. In the dry powder comparison there was a much higher level of differential expression. Approximately 1200 genes were differentially expressed with p-value < 0.05, however, of these only 140 had a p-value < 0.01 and the number of genes that had a 50% or greater chance of being differentially expressed was only 291. The median average intensity for the M. smegmatis spots in this comparison was 8.3, approximately 2-fold below the medians for both the log phase (9.1) and the stationary phase (9.5) comparisons indicating a lower level of signal. Log Phase Comparison Results for the log phase differential expression data are given in Appendix Table 5. 
The differentially expressed genes are dominated by a large gene cluster (22% of the statistically significant genes) that runs from MSMEG_1766 to MSMEG_1802. Two copies of the UsfY gene product (MSMEG_1769; MSMEG_1777) in the cluster are differentially expressed whereas a third copy of UsfY in the cluster (MSMEG_1791), the one that is closest upstream to sigF and most highly expressed in stationary phase, is not differentially expressed. A fourth copy of UsfY (MSMEG_4406) elsewhere in the genome is also not expressed. SigF is likely expressed, based on an intensity 1.2 standard deviations above the median average intensity, but not differentially (intensity ratio = 0.1). S-(hydroxymethyl) glutathione dehydrogenase is differentially expressed at two loci (MSMEG_0671; MSMEG_6616). Also differentially expressed were genes involved in the acquisition or production of osmolytes and carotenoid antioxidants (e.g. MSMEG_2926 and MSMEG_3184; MSMEG_2345 and MSMEG_2346), two catalases (MSMEG_6213; MSMEG_6232), and the starvation-induced DNA protecting protein (MSMEG_6467). Stationary Phase Comparison Stationary phase microarray data did not have any statistically significant differentially expressed genes. However, many transcripts did have positive probability of differential expression (Appendix Table 6) with phytoene synthase (MSMEG_2346) having the highest probability of differential expression (66%). Other differentially expressed transcripts include phytoene dehydrogenase (MSMEG_2347), which participates in the same biosynthetic pathway as phytoene synthase, a manganese containing catalase (MSMEG_6213), maltooligosyl trehalose synthase (MSMEG_3185), S-(hydroxymethyl) glutathione dehydrogenase (MSMEG_0671), and the MSMEG_1769 locus of UsfY. Genes appearing in the stationary phase comparison but not in the log phase comparison include glycerol kinase (MSMEG_6759), glycerol-3-phosphate dehydrogenase 2 (MSMEG_6761), and AmiB (MSMEG_1679). Notably, these three genes were down-regulated relative to the cycled bacteria. SigB (MSMEG_2752) was up-regulated in this comparison where it was not observed to be differentially expressed in the previous non-previously spray dried log versus stationary phase experiments. Dry Powder Comparison The dry powder comparison showed that the non-cycled bacteria increased transcriptional expression of genes associated with growth processes (Appendix Table 7). These transcripts included those for glycolysis (MSMEG_4107), sulfur uptake (MSMEG_5789), fatty acid metabolism (MSMEG_2081; MSMEG_6512), and amino-acid biosynthesis (MSMEG_1843). In addition, there were expressed transcripts related to shut-down or repair including those for amino acid scavenging (MSMEG_5486; MSMEG_6332), oxidative damage (MSMEG_3215), nucleic acid degradation (MSMEG_3902; MSMEG_5226), and the soluble pyridine nucleotide transhydrogenase (MSMEG_2748), which catalyzes the conversion of NADH to NADPH and is important for catabolic processes. Genes expressed at higher levels in cycled bacteria contained a number of genes related to lipid synthesis, a diverse group of transposable elements, the stress related sigD alternative sigma factor (MSMEG_1599), and the error-prone DNA polymerase IV (MSMEG_2748) (Appendix Table 8). Viability Discussion The results of this study show that the processing of bacteria into a dry powder state affects overall fitness and ultimately survivability. 
It is important that fitness, or the ability to respond appropriately to specific stress conditions, not require processing conditions that inhibit the bacteria's ability to flourish in normal growth or other environments. In this light it is important that the bacteria show improved viability over time when grown to stationary phase and exposed multiple times to accelerated stability conditions and the spray drying process. Although the cycled M. smegmatis doubles at a slightly slower rate, 3.12 hours vs. 2.36 hours, both times are well within the literature reported values of the bacteria's doubling time under normal growing conditions [11,12]. Furthermore, we found little evidence in the gene expression data to suggest that the observed variability in growth rate was related to transcriptional differences. There was no differential expression observed in genes central to growth or maintenance and limited differential expression overall. However, the genes that were differentially expressed were heavily skewed in number towards the cycled bacteria. The additional expression in cycled bacteria could represent a small increased energy demand in which case the observed slower metabolism might be a genuine consequence of our formulation process. Gene Expression Differences Our expression data illustrate that the transition to growth phase from stationary phase is a smooth and highly orchestrated switch in metabolic profile. Stationary phase is a natural response to stressful conditions and bacteria have robust systems in place to counter environmental challenges. In stationary phase of both non-cycled and cycled bacteria we observed increased expression of products that are used to fight stress. These products (Appendix Table 3) included those that combat reactive oxygen species [13], compensate for nitrogen limitation [14], facilitate the utilization of alternative carbon sources [15], and provide for metabolic scavenging [16]. The upregulation of these [NiFe] hydrogenase related genes suggests a response to oxygen limitation ([NiFe] hydrogenases have been shown to be strongly upregulated in hypoxic conditions [17]). Intriguingly, L-lysine-epsilon aminotransferase has been shown to be 40-fold up-regulated in models of the persistent/latent infection of M. tuberculosis [18]. It is probable then that the observed increase in dry powder viability of stationary phase cultures over log phase cultures is a consequence of bacteria being better suited to resist harsh conditions. In a similar vein, our data suggest that in repeatedly stressing bacteria we have enriched the capacities by which bacteria can survive new and specific stress conditions. Interestingly, these capacities seem to be manifested such that the cycled bacteria "anticipate" future stress. For example, the over-production of trehalose biosynthetic enzymes (trehalose is an excellent osmoprotectant), catalases (to neutralize reactive oxygen species), and glutathiones (for alternative carbon utilization and antioxidant activity) occurs in both log and stationary phases of cycled bacteria. Glycerol kinase and glycerol-3-phosphate dehydrogenase 2 are both down-regulated in stationary phase in cycled bacteria. Since both of these enzymes are involved in processing of glycerol, the down-regulation of these two enzymes has the likely effect of increasing intracellular glycerol concentrations. 
Given that glycerol is another highly effective osmoprotectant (and water substitute), its accumulation undoubtedly helps protect against the osmotic forces at work in the drying process and in the dry powder state. Likewise, AmiB, which plays a role in maintenance and disassembly of the extra-cellular polysaccharide capsid, is also down-regulated in stationary phase in cycled bacteria. It may make "survival-sense" for bacteria to reduce degradation of an all-important cell barrier if stress is on the horizon. Moreover, a very interesting result was that of the starvation-induced DNA protecting enzyme, which is over-produced beginning in log phase growth. This protein is known to exist in two multimeric forms, with the extended polymeric form conferring the principal protection of DNA [19]. The transition from the limited multimeric form to the extended polymeric form is temperature dependent, occurring at 40 °C. Since our spray drying was carried out at 40 °C and powders were subsequently incubated at 40 °C for extended periods of time, it is possible to speculate that the observed increase in expression is a direct response to our processing conditions. That is, since there is a significant amount of DNA to protect in the event of heat stress, and our processing occurs rapidly, it clearly benefits the organism to accumulate this protein preemptively. Carotenoids One striking observation in our study was the marked orange color and continued rapid orange transformation of the cycled bacteria. It was observed, however, that a fraction of colonies from wild type cultures would also undergo a similar color transformation. It is known that stock cultures of M. smegmatis often contain pigmented colonies (as well as other variants), suggesting multiple subpopulations exist or arise naturally in the mc²155 strain [20]. In our case this phenotype emerged dominantly when large populations were repeatedly spray dried and placed in the stressful environment of a heated dry powder, suggesting the orange phenotype may be related to a selective advantage. Carotenoids are a class of isoprenoid metabolites synthesized de novo in bacteria. The carotenoid pathway ultimately results in pigmented complex polyterpene lipids, including β-carotene and lycopene, whose functions are in part to act as free radical scavengers and protect cells from light-induced oxygen species [21]. The carotenoids are also known to contribute to enhancing the strength of the cell wall due to their lipophilic nature and intercalation into the cell membrane [22]. The presence of gene products that catalyze the formation of these compounds almost certainly explains the pigmentation appearing in the multiply cycled bacteria, including the observed increase in color intensity when exposed to light and dry air on the benchtop. Since carotenoids are robust antioxidants and fortifiers of cellular barriers, they would be beneficial for withstanding the shear and osmotic stress in the dry powder formulation procedure. In fact, the buff-colored mc²155 strain of M. smegmatis is known to be less robust relative to the naturally pigmented wild-type strains, having seen ongoing usage as a model organism, in part, for its high transformation efficiency [23,24]. Thus, we feel the putative over-production of these compounds in cycled bacteria would support our hypothesis that pre-stressed bacteria are more robust.
Analysis of the microarray data showed that the entire carotenoid biosynthetic operon is upregulated in the cycled bacteria in both log and stationary phases (Appendix Table 9). We note that the pathway is not differentially expressed in the dry powder state; however, the high signal intensity over both the cycled and non-cycled samples (all five genes in the operon had expression levels two standard deviations or more above the median expression level) suggests that it is highly expressed in both cases. Importantly, previous work conducted in our lab investigated the effects of adding the commercial adjuvants TiterMax and TiterMax Gold in attempts to increase immunity and antigenicity in spray dried bacteria. It turns out that the major components of the commercial adjuvant formulations are squalene derivatives. These structures have highly similar structural properties to the naturally occurring mycobacterial carotenoids such as zeta-carotene (Figure 4). Remarkably, these adjuvant/bacteria formulations also showed a 1-5 log improvement in viability over time in the dried powder state (unpublished data). This suggests that carotenoid and squalene derivatives may play a critical role in increasing the viability of organisms in formulation processes and in the dry powder state over time. Stress Response Gene Cluster The observation that the gene cluster [MSMEG_1750 to MSMEG_1804] is up-regulated in cycled bacteria is significant. Several genes in this cluster are thought to be related to or regulated by the alternative sigma factors sigF and sigD, including three copies of UsfY (upstream of sigma F protein Y). This cluster of genes is highly similar to a cluster of stress related genes (also containing UsfY) that is implicated in the latency and persistence of M. tuberculosis [25,26]. It has been postulated that UsfY is an anti-anti-sigma factor directed at sigF [25]. Sigma factors act as critical regulators of gene expression in bacteria by recognizing their cognate promoters and controlling the different programs that bacteria employ in response to environmental stimuli. Anti-sigma factors bind to sigma factors, down-regulating specific transcriptional activity. In turn, anti-anti-sigma factors bind to anti-sigma factors and thus dampen their regulatory activity. Thus, up-regulation of UsfY would help explain the increased levels of sigF-dependent transcripts in stressed bacteria. It has been shown that carotenoid biosynthesis genes are regulated by sigF in M. smegmatis [21]. Given the high level of gene expression we observed in the carotenoid biosynthetic pathway, as well as in the cluster of genes related to sigF, we did a simple promoter search in the M. smegmatis genome for the sigF −10 consensus promoter sequence (GGGTTT) [26]; a minimal sketch of such a scan is given below. The results were striking. A large number of genes that were seen to be either differentially expressed in the cycled bacteria (log and/or stationary phase), or highly expressed in the dry powder state, appear to be directly regulated by sigF (Appendix Table 10). In addition, it appears that the MSMEG_1777 locus of UsfY is itself regulated by sigF. Since sigF itself was not seen to be differentially expressed in any of the experiments, including the non-spray dried log versus stationary phase comparison, the higher levels of sigF-controlled products in the cycled bacteria were puzzling.
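For concreteness, a promoter scan of this kind reduces to a plain string search over the regions upstream of annotated gene starts. The snippet below is a minimal sketch under stated assumptions: the genome string, the gene coordinates, and the 100 bp upstream window are placeholders, and the real analysis would of course use the actual M. smegmatis genome annotation rather than the toy example shown.

```python
# Minimal sketch of a -10 promoter motif scan (GGGTTT) in regions upstream of
# annotated gene starts; the genome, coordinates, and window size are hypothetical.
MOTIF = "GGGTTT"
UPSTREAM_WINDOW = 100  # bp upstream of each start codon (assumed window)

def revcomp(seq: str) -> str:
    """Reverse complement of an A/C/G/T string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def scan_upstream(genome: str, genes: dict) -> list:
    """Return locus tags whose upstream window contains the motif.

    `genes` maps locus tag -> (start, end, strand) with 0-based coordinates.
    """
    hits = []
    for locus, (start, end, strand) in genes.items():
        if strand == "+":
            region = genome[max(0, start - UPSTREAM_WINDOW):start]
        else:  # for a minus-strand gene the upstream region lies after `end`
            region = revcomp(genome[end:end + UPSTREAM_WINDOW])
        if MOTIF in region:
            hits.append(locus)
    return hits

# Toy demonstration with a fabricated sequence and a single fabricated locus.
toy_genome = "ATGCGGGTTTACGT" + "A" * 40 + "ATGAAA"
print(scan_upstream(toy_genome, {"MSMEG_toy": (54, 60, "+")}))  # -> ['MSMEG_toy']
```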
One possibility is that higher levels of these products could have arisen from increased UsfY expression, at other loci not under the control of sigF, combined with basal sigF expression. This, by itself, might account for the observed improvement in viability of cycled bacteria. However, the high expression levels and postulated anti-anti-sigF activity of UsfY, along with the positive regulation by sigF (at least at the MSMEG_1777 locus), may provide a mechanism by which the cycled bacteria produce larger quantities of important products in a just-in-time manner, thus conserving resources while simultaneously being prepared to better survive the dry powder formulation. The mechanism may be that UsfY acts like a positive gain in a control circuit. That is, since UsfY is positively regulated by sigF, higher levels of sigF lead to higher levels of UsfY, and because of the anti-anti-sigma factor activity, higher levels of UsfY lead to higher activity of sigF and consequently higher levels of stress related products (e.g. carotenoids). This feedback control, along with the coordinated anti-sigma factor activity, is a well-established regulation mechanism for transcriptional control used in bacteria. However, our results suggest that the multiply cycled bacteria may constitutively express higher levels of UsfY and by doing so they likely introduce positive gain into the system. At higher initial levels, UsfY is positioned to shift the equilibrium away from anti-sigF factors as they are produced. Stress signals that increase sigF levels (such as drying stress) would be rapidly amplified since any concomitantly produced anti-sigF factors would be immediately sequestered. In this way, multiply cycled bacteria can not only respond more robustly to stress stimuli but also respond faster. We feel the latter is an exceedingly important point, as our spray drying procedure imposes an extreme change in environment over a very short timeframe. The lack of differential expression of the UsfY cluster of genes in the dry powder further supports the idea that increased expression is more beneficial prior to the actual drying phase. In other words, strengthening of the cell wall, or accumulating a pool of antioxidants, or preparing for osmotic stresses, is best done proactively because once in the dry powder state energy may be required for other processes (such as repair). This postulate is evidenced by the overall transcriptional responses in dry powder. In the absence of "preparative" gene expression, the non-cycled bacteria appear to have increased expression of genes related to basic metabolic needs. This could reflect a slightly heightened response to the nutrient-limited conditions, a last-ditch effort to produce energy and acquire necessary components for maintenance, or an attempt at repair. In any case, the increased expression of these products appears to be insufficient (based on differences in viability) and too limited given the extreme urgency needed in adaptation to the harsh and resource-poor environment. In contrast, genes upregulated in the cycled bacteria suggest an attempt to cope with extreme stress with extreme measures. The increased expression of error-prone DNA polymerase IV, which provides a mechanism for adaptive mutagenesis, suggests this is the case, while the number of transposases expressed indicates that the dry powder environment is, in fact, catastrophic for the bacteria.
Transposases facilitate the "jumping" of DNA segments randomly across the genome in an effort to form new recombinant proteins to help combat a new stress. In our data we see that the IS1096 transposable element is highly and differentially expressed in cycled bacteria in the dry powder state. In addition, IS1096-related transcripts (Appendix Table 8) include hypothetical proteins that have the IS1096 transposon partially overlapping on the complementary strand. Transposons are known to contain complementarily coded regulatory sequences (i.e. sigma factor binding sites), and the fact that these hypothetical proteins are being expressed in the dry powder state makes it highly likely that transposon-mediated mutagenesis is in fact occurring. Our promoter analysis identified at least one copy of the IS1096 TnpR transcript (MSMEG_4791) as being regulated by sigF, and thus higher expression of IS1096 in cycled bacteria is consistent with the cycled bacteria's UsfY-augmented sigF response. Thus, "preparative" expression in cycled bacteria may be conferring an adaptive advantage in that an organism that can devote more energy and cellular resources to recombination, over one that has to scavenge more resources for maintenance and repair, has a substantially higher probability of surviving extreme duress. In summary, our data suggest that the acquisition of enhanced carotenoid synthesis enhances post spray-drying dry powder viability. This enhanced synthesis could potentially result from a mutation in sigF or possibly from IS1096 transposition into regulatory sequences. Further work will be required to determine if the multiply spray dried phenotype, which we have designated MSDsigf(+) (Table 1), and the high carotenoid phenotypes share a common mutation. In particular, sequencing of the sigF region of the chromosome will be of high priority. Solution Preparation Spray drying solutions were prepared by pelleting cultures, washing them with PBS/0.05% Tween 80, and resuspending them in an equal volume of 0.05% Tyloxapol (Sigma). The final solution was mixed with an equal volume of 8 mg/mL L-leucine (Sigma) for a final concentration of 4 mg/mL L-leucine and 0.025% Tyloxapol. All solutions were used immediately after preparation. Spray Drying Conditions Spray drying was carried out in a Buchi B-290 mini spray dryer using a high-performance cyclone and a 0.7 mm pressure nozzle tip (Buchi, Flawil, Switzerland). Solutions were spray dried at a feed rate of 7 mL/min with a drying air flow rate of 35 liters/hr. The outlet temperature was kept between 42 and 45 °C by varying the inlet temperature from 115 to 125 °C. The day-to-day variation was due to differences in ambient relative humidity. Powder was collected immediately and placed into amber scintillation vials. The vials were then stored in a desiccator placed in either a 40 °C/75% or 25 °C/60% relative humidity chamber. Viability Serial dilution plating followed by CFU determination was used to assess the number of viable M. smegmatis bacteria in cell suspensions before spray drying and in the powders post spray drying. Briefly, powders were resuspended in PBS/0.05% Tween 80 and vortexed to homogeneously disperse the samples. Samples were then serially diluted and plated on Middlebrook 7H10 agar with 10% OADC, 0.5% glycerol and supplemented with 50 µg/mL hygromycin. Plates, once inoculated, were wrapped in foil and incubated at 37 °C for three days.
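For reference, the back-calculation behind such CFU determinations is simple arithmetic; the sketch below shows it with invented colony counts, dilution factors, and plating volume (none of these numbers come from the present study).

```python
# Back-calculating viable titre (CFU/mL of the resuspended sample) from serial
# dilution plate counts; counts, dilutions, and plated volume are invented.
def cfu_per_ml(colonies: int, dilution: float, plated_volume_ml: float) -> float:
    """CFU/mL of the undiluted suspension estimated from one countable plate."""
    return colonies / (dilution * plated_volume_ml)

# Example: 42 colonies on the 1e-5 dilution plate and 395 on the 1e-4 plate,
# with 0.1 mL spread per plate.
plates = [(42, 1e-5, 0.1), (395, 1e-4, 0.1)]
estimates = [cfu_per_ml(c, d, v) for c, d, v in plates]
mean_titre = sum(estimates) / len(estimates)
print([f"{e:.2e}" for e in estimates])   # per-plate estimates
print(f"mean titre ~ {mean_titre:.2e} CFU/mL")
```

Comparing such post-drying titres with the pre-drying titre then gives the relative viability that is tracked over storage time.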
In order to assess the stability of the bacteria over time, powders were placed in storage conditions and plated at regular intervals. RNA Extraction RNA was extracted from either 25 mL of culture or 200 mg of powder. Powder was first resuspended in 25 mL of DEPC water (Ambion, Austin, TX). Both solutions were then pelleted by centrifuging at 10,000 rpm for 1 min. Extraction was then carried out as described in Managan et al. [28]. Briefly, 0.4 mL DEPC H2O and 1 mL of detergent solution (Tween-80, SDS (Sigma), 0.5 M sodium acetate, DEPC H2O) were added to the pellet and gently mixed. The mixture was added to 4 mL of 0.1 mm silica/ceramic beads in 7 mL screw-top bead-beater tubes. Phenol:chloroform:isoamyl alcohol 125:24:1 (Sigma) and chloroform:isoamyl alcohol 24:1 (Sigma) were then added to the tubes. The mixture was bead-beaten on high for 45 sec on a Biospec Mini-Bead Beater™. The broken cells were placed on ice for 10 min. The liquid was transferred to 2 mL screw-capped tubes and centrifuged at 16,000 × g for 10 min. The aqueous phase was removed and transferred to a fresh 2 mL screw-cap tube, and an equal volume of chloroform:isoamyl alcohol was then added. The solution was briefly centrifuged and the aqueous phase was once more removed. An equal volume of isopropanol solution was then added. Tubes were placed in a −80 °C freezer overnight. The tubes were centrifuged at 16,000 × g for 15 min, the supernatant was poured off, and the pellet dried for 45 min on the bench top. RNA cleanup was carried out using a Qiagen RNeasy® Mini Kit with DNase digestion. Total RNA was eluted in 60 µL and the concentration was determined on a NanoDrop ND-1000. RNA content was visually verified by running samples on precast agarose gels (Sigma) in a mini gel electrophoresis unit with ethidium bromide staining. cDNA Synthesis and Aminoallyl-labeling cDNA was synthesized by adding 2 µg of total RNA to 2 µL of random hexamers (Invitrogen, Grand Island, NY) and nuclease-free water (Ambion) to achieve a final volume of 18.5 µL. Samples were incubated at 70 °C for 10 minutes, snap-cooled on ice and then centrifuged at 10,000 rpm. The solution was then added to 6 µL first strand buffer (5X) (Invitrogen), 3 µL 0.1 M DTT, 0.6 µL 25 mM dNTP/aa-dUTP labeling mix, and 2 µL PowerScript RT (Invitrogen). The mixture was then incubated in a 42 °C water bath overnight. RNA was hydrolyzed by adding 10 µL 0.5 M EDTA (Ambion) and 10 µL 1 M NaOH, and then incubating at 65 °C for 15 minutes. Next, 25 µL 1 M TRIS (pH 7.0) (Ambion) was added in order to neutralize the pH. Unincorporated aa-dUTP and free amines were removed with a Qiagen MinElute PCR purification kit. cDNA was eluted in 60 µL and the concentration determined on a NanoDrop ND-1000. The cDNA was then dried in a speed vac. Samples were resuspended in 4.5 µL 0.1 M sodium carbonate buffer pH 9.3 and added to 4.5 µL of either Cy3 or Cy5 dye (Amersham). The solutions were allowed to incubate in the dark at room temperature for 1 hour. After coupling had finished, 35 µL of 100 mM NaOAc pH 5.2 was added and the samples were purified using a Qiagen MinElute PCR purification kit according to the manufacturer's instructions. Dye incorporation was assessed using the NanoDrop ND-1000 microarray analysis settings. Microarray Preparation and Hybridization M. smegmatis microarrays were generously provided by The Institute for Genomic Research (TIGR). Hybridization of labeled cDNA probes was carried out using the TIGR SOP M007/8.
Briefly, microarray slides were incubated in a prehybridization solution at 42 °C in coplin jars for 1 hour. Slides were then transferred to a glass staining dish and washed 10× with 200 mL nuclease-free water. The slides were then rinsed for 2 min in a staining dish filled with isopropyl alcohol and then centrifuged at 1000 rpm for 10 min to dry. A 40% formamide hybridization buffer was then prepared and 50 µL was added to the Cy3/Cy5 probe. The probe mixture was placed on a 95 °C heat block for 5 min, vortexed and then heated for another 5 min. Prehybridized microarray slides were placed in a hybridization chamber with a clean LifterSlip (Erie Scientific, MA) and the probe mixture was added. A small amount of unused hybridization solution was added to each of the small wells located at either end of the microarray slide. The chamber was wrapped in foil and incubated in a 42 °C water bath overnight. After hybridization, slides were sequentially washed in 500 mL low stringency, medium stringency and high stringency buffers. Each wash step was carried out twice in glass staining dishes. Slides were rinsed briefly in 500 mL Millipore water, centrifuged for 2 min at 1000 rpm, and scanned. Image Scanning and Data Analysis Microarrays were scanned using an Axon scanner and data were acquired using GenePix Pro 5.1.0.19 software. Data were analyzed using Bioconductor bioinformatics software with the Limma statistical package [29]. Data were filtered to exclude poor spots (retaining only spots with Flag > −50). Background was corrected using the backgroundCorrect command and data were normalized using the normalizeWithinArrays command. Adjusted data were then fit to linear and Bayesian models using the lmFit and eBayes commands. Intensity Ratios, Average Median Intensity and p-values were taken from the logFC, AveExpr, and the more stringent Adj.P.Val columns in the output file and then averaged over the three gene replicates present on each microarray. Probability of Differential Expression was calculated using the Limma log-odds score (B) and equation (1). Conclusions Our results suggest that relevant stressing of bacteria, such as M. smegmatis, can lead to highly stable dry powder formulations with remarkable room temperature stability characteristics. Repeated spray drying and selective pressures in dry powders may enrich for strains which can persist in harsh conditions. It is likely we have selected a natural population most fit for long-term survival in dry powders, which in theory could make for more stable vaccines. However, it is clear that the dry powder state is exceedingly harsh and may induce recombination events. In applying our methodology to more relevant vaccine strains it will be important to ensure that they retain immunogenicity and remain safe. We have demonstrated a new approach useful in the formulation of live whole-cell vaccines. This approach centers on the biochemistry of the organism rather than the chemical and physical parameters that are often the focus of vaccine formulation efforts. The approach has not only provided insight into mechanisms that influence viability, but has also led us to specific compounds that may prove advantageous in the dry powder formulation of other important organisms.
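Returning to the data analysis described above: equation (1) is referenced but not reproduced in this excerpt. Assuming it is the standard logistic transform of limma's log-odds (B) statistic, the conversion to a probability is a one-liner; the sketch below makes that assumption explicit and is illustrative only.

```python
import math

def prob_diff_expr(b_score: float) -> float:
    """Posterior probability of differential expression from limma's log-odds (B),
    assuming equation (1) is P = exp(B) / (1 + exp(B))."""
    return math.exp(b_score) / (1.0 + math.exp(b_score))

# Illustrative values: B = 0 corresponds to 50:50 odds,
# B = 3 to roughly a 95% probability of differential expression.
for b in (-2.0, 0.0, 1.5, 3.0):
    print(f"B = {b:+.1f} -> P(differential expression) ~ {prob_diff_expr(b):.3f}")
```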
Design of Hybrid Power System for Remote Area This paper describes the results of designing a hybrid power system (HPS) for a remote area. The design of this system combines three sources of renewable energy, namely wind energy, photovoltaic, and biomass. The design uses HOMER software, which analyzes system performance from the optimization and economic aspects. The design recommendations indicate that all electrical energy demand in the study area is met with 80.27% of total production. The existence of batteries in the system is able to minimize the value of excess electricity. Introduction In recent years, the concept and movement of renewable energy as a solution for isolated areas has become a warm agenda for discussion. The main motivation is the growing human awareness of the need to protect the environment by reducing gas emissions, together with rising fuel prices that drive price increases [1]. In addition, current technological advances have shown that the cost of renewable energy devices has been reduced drastically in recent years [2]. Renewable energy therefore becomes the right choice for launching electrification programs in isolated areas of Indonesia, especially with the Smart Grid concept. The smart grid is believed to be a future grid that offers increased efficiency, reliability, and environmental friendliness in power generation, transmission, distribution, consumption and management with the advancement of integration in information and communications technology [3]. Smart grids have many new features and advanced capabilities, including dynamic pricing that depends on demand-side management and a highly developed distribution of renewable energy sources [4]. These renewable energy sources are inherently dynamic, so their full utilization can be achieved by applying hybrid systems. A renewable hybrid system (SEHT) is composed of two or more conventional and/or non-conventional (renewable) energy sources that are interconnected in a grid-connected or standalone configuration [5]. Renewable energy systems such as PV, biomass, and wind, or hybrid systems combining all three, can become a standalone system in an isolated area. In addition to reducing the cost of generation and the cost of electricity consumption, this system can also create an efficient, independent region. Among the wide range of existing software, the Hybrid Optimization Model for Electric Renewable (HOMER) has become widely used software for hybrid system optimization. This software can be used to optimize the operating strategies of complex systems in an easy and economically accurate way [6]. In 2014, this software was used to make an electrification design for an isolated area in Karnataka, India. This design incorporates three renewable sources, PV-Biogas-Biomass, in a hybrid system [7]. Other research using HOMER includes the economic modelling of SEHT for remote areas of Ethiopia by incorporating PV and wind renewable energy sources [8], and SEHT research for electrification of an isolated area in Algeria, North Africa, in terms of power production, life-cycle system costs and greenhouse gas emission reductions, combining wind-diesel energy sources [9]. Thus, this software is the right tool for smart grid system design involving hybrid systems built from various renewable energy sources as well as renewable energy combined with conventional energy. This study describes the design of a smart grid system model involving optimized hybrid generation at a resort in an isolated area.
This research is expected to support the development of tourism facilities on an isolated island and to improve the welfare of the island community. Methods The HOMER software is built around three main principles: simulation, optimization and sensitivity analysis [10]. Hybrid system simulations show the system optimized for different system sensitivity variables [11]. The optimization model in the software allows designers to evaluate offered designs of alternative system configurations based on technical and economic feasibility [12]. The variables that designers can evaluate in the optimization and sensitivity analysis algorithms cover the economic and technical aspects of the system configuration, uncertain cost calculations, the availability of resources, and other variables [13,14]. HOMER also checks emissions, system control variables, economics and constraints during hybrid simulations. Research Design This research begins by identifying the load in the case study area. By identifying the load profile, a hybrid system configuration can be determined that will be designed to meet the load requirements based on the existing potential. The next step is to specify the components of the configuration to be designed, the resources, and the load. In addition, there are also operating system requirements, such as the parameters of economics, optimization, constraints and control systems, that must be met. The next step is to include the sensitivity analysis parameters of the system configuration in the form of capital cost, replacement cost, and operation and maintenance cost. The last step is to run the calculation, which produces the optimization and sensitivity analysis results based on the predetermined parameters. Schematic system The model of this hybrid system consists of several generating components utilizing the existing renewable energy potentials, namely wind turbines, photovoltaic modules and biomass generators, together with converters and batteries as energy storage devices (see figure 1). Load profile This load profile comes from an energy audit for the design of a hotel serving lodging and tourism needs (see figure 2). Fulfilling the need for electrical energy at the hotel will be managed independently. Converter The recommended converter in the design is a two-way or bi-directional converter, which can be operated either as an inverter or as a rectifier. Figure 4.7 shows the input details of the converter. The converter efficiency is assumed to be 95% for the inverter and 85% for the rectifier. The initial cost for the converter is $900/kW with an O/M cost of $10/year. It is based on information from WHOLESALE SOLAR that was updated in May 2016 (see figure 9). Variable optimization and sensitivity A lot of information is needed to obtain the simulation results, in addition to the previously described parameters. For the sensitivity analysis, the economic parameters are so influential that the accuracy of the cost of each component is very important. The research project is designed for 25 years, with a nominal discount rate and inflation rate of 6.75% and 3.33% (Bank Indonesia, 2016), respectively. For the system constraints, the maximum annual capacity shortage is assumed to be 20%, with a minimum renewable energy penetration of about 10%. This assumption is made because the designed hybrid system is not connected to the available grid; in other words, renewable energy becomes the main resource and power supplier at the Hexagon Hotel (see Table 1).
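HOMER's lifecycle economics rest on the real (inflation-adjusted) discount rate derived from the nominal discount rate and inflation rate quoted above, and on the capital recovery factor used to annualize present costs. The sketch below reproduces these two standard relations with the paper's stated inputs; it is an illustration, not code from the study.

```python
def real_discount_rate(nominal_rate: float, inflation_rate: float) -> float:
    """Real (inflation-adjusted) discount rate: i = (i' - f) / (1 + f)."""
    return (nominal_rate - inflation_rate) / (1.0 + inflation_rate)

def capital_recovery_factor(i: float, years: int) -> float:
    """CRF(i, N) = i(1 + i)^N / ((1 + i)^N - 1); converts a present cost to an annuity."""
    growth = (1.0 + i) ** years
    return i * growth / (growth - 1.0)

# Inputs quoted in the text: 6.75% nominal discount rate, 3.33% inflation, 25-year project.
i_real = real_discount_rate(0.0675, 0.0333)
crf = capital_recovery_factor(i_real, 25)
print(f"real discount rate ~ {i_real:.4f}")    # ~0.0331, i.e. about 3.31% per year
print(f"capital recovery factor ~ {crf:.4f}")  # ~0.0594 per year over the project life
```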
Optimization result Based on the calculations made by the HOMER software, six hybrid power plant configurations were obtained, with only one sensitivity analysis result. From the simulation results, the offered configurations are sorted by lowest NPC. The best option from this analysis is the configuration of PV, wind and biomass systems equipped with a converter and batteries (see figure 11). Sensitivity analysis As can be seen from the best configuration in Figure 12, the NPC of the system is $759,478 or Rp 10,121,176,381.65, with a Cost of Electricity (CoE) of $0.0341/kWh and an Operating and Maintenance (O/M) cost of $2,132 per year. Based on the sensitivity analysis, an increase in the electricity requirement will affect the operational cost, the total NPC and also the CoE. The same holds for the resources: the greater the potential of the resources used, the fewer components are needed, which reduces the total NPC of the system. Electrical analysis The amount of electricity generated reaches 1,648,385 kWh per year. PV is the largest supplier in this case, as it produces 1,301,158 kWh per year, or 78.93% of total electricity production. PV is expected to be a substitute for the grid, supplying the load demand of the object of study continuously. The electricity contributed by the biomass generator reaches 250,468 kWh per year, equivalent to 15.19% of the total electricity production of the system. The biomass generator is intended to supply the load demand during peak load times, i.e. from 02.00 to 06.00 WIB and from 19.00 to 21.00 WIB. The wind turbine acts as a complementary supply, producing 96,759 kWh per year, which is equivalent to 5.87% of the total production of the electrical system. In addition, the energy stored in the battery amounts to 708,072 kWh per year, while the total primary load that needs to be met is 1,323,217 kWh per year. Therefore, the overall load can be fulfilled by the system, with an excess of 118,337.6 kWh per year, equivalent to 7.17% of total production, wasted as excess electricity. Economic analysis The best configuration shows that the CoE reaches $0.0341/kWh, or about Rp 454.43/kWh. This figure is far below the current base rate of Indonesian electricity, which is priced at about $0.11/kWh. With the proposed system, the monthly electricity cost that needs to be issued is Rp 62,422,966.25. That is, the system is able to suppress and save an electricity budget of Rp 139,130,528.3/month. This shows that the use of renewable energy as an alternative energy source, with the configuration generated for the case study area, can reduce the electricity cost relative to the grid. Therefore, the best configuration is feasible to be realized in order to meet the electrical load requirements of the object of study. Conclusion The design recommendations indicate that all electrical energy demand in the case study area can be fulfilled with 80.27% of total production, and that the batteries in the system can store energy so as to suppress the excess electricity value. In addition to the optimization results, the proposed system is also based on a sensitivity analysis that is influenced by operational costs, capital cost, and also CoE. The greater the potential of the resources used, the fewer components are necessary, so that the total NPC will decrease significantly. From an economic point of view, this simulation can reduce the CoE to about 1/3, or only 30.9%, of the electricity base rate in Indonesia today.
This shows that the system can be realized, because it offers benefits in both the electrical and the financial aspects.
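As a rough consistency check (not part of the paper), the reported CoE can be reproduced from the reported NPC by annualizing it with the capital recovery factor sketched earlier and dividing by the annual primary load served. This assumes the levelized cost is simply annualized NPC over energy served, which matches the quoted figures closely.

```python
# Back-of-the-envelope check of the quoted CoE, using figures reported in the text.
npc_usd = 759_478                 # total net present cost ($)
load_kwh_per_year = 1_323_217     # annual primary load served (kWh/yr)
years = 25

i_real = (0.0675 - 0.0333) / (1 + 0.0333)                        # real discount rate
crf = i_real * (1 + i_real) ** years / ((1 + i_real) ** years - 1)

annualized_cost = crf * npc_usd                                  # $/yr
coe = annualized_cost / load_kwh_per_year                        # $/kWh
print(f"annualized cost ~ ${annualized_cost:,.0f}/yr")
print(f"implied CoE ~ ${coe:.4f}/kWh (the paper reports $0.0341/kWh)")
```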
Extended Durability of a Cloth-Covered Starr-Edwards Caged Ball Prosthesis in Aortic Position The Starr-Edwards caged ball valve is one of the oldest cardiac valve prostheses and was widely used all around the world in the past decades. Despite the long-term results that have been reported, there are only a few reported cases that exceed 30 years of durability. Herein, we report a 53-year-old patient with a well-functioning 35-year-old aortic Starr-Edwards caged ball prosthesis. Introduction The Starr-Edwards caged ball heart valve marked a new era in the treatment of valvular heart disease. Complications including thrombosis, thromboembolic events, paravalvular leakage, and pannus formation, occurring at variable frequencies, were reported for this artificial valve. Several modifications of the Starr-Edwards valve were implanted in over 200,000 cases all around the world, and long-term results with these valves have been reported, showing satisfactory results with reliable durability and safety [1][2][3]. To eliminate the larger metal surface and the variance of the original Silastic ball, and aiming to reduce thromboembolism, a cloth-covered model of the Starr-Edwards Stellite-ball valve was introduced. We report a 53-year-old patient with a well-functioning cloth-covered Starr-Edwards caged ball prosthesis implanted 35 years ago. Case Report A 53-year-old man was admitted to our institution with exertional dyspnea and angina. He had had aortic valve replacement with a Starr-Edwards model 2320 caged ball mechanical valve prosthesis in 1974 and had suffered a thromboembolic cerebrovascular accident two years earlier. His physical examination revealed left hemiparesis, with a blood pressure of 120/80 mmHg and a regular heart rate of 92 beats/min. Cardiac auscultation revealed a 4/6 systolic murmur, and rales were heard at the base of the lungs. Transesophageal echocardiography showed a functioning Starr-Edwards caged ball mechanical prosthesis with a 30 mmHg gradient, rheumatic mitral valve disease with a peak transmitral gradient of 20 mmHg, a valve area of 1.8 cm2 and grade 3-4 mitral insufficiency, a left ventricular ejection fraction of 30%, and a peak pulmonary artery pressure of 68 mmHg; in addition, there was pannus formation in the surrounding valvular tissue. Coronary arteriography showed a 90% stenosis of the proximal portion of the LAD and a well-functioning caged ball prosthesis (Figure 1). The patient underwent a reoperation of double valve replacement and one-vessel coronary artery bypass grafting through a median sternotomy with the help of cardiopulmonary bypass under mild hypothermia. On macroscopic inspection of the excised Starr-Edwards caged ball valve, significant cloth wear and dislodgement were observed inside the struts, showing bare cage metal (Figure 2). At reoperation the mitral valve was replaced with a bileaflet prosthesis, St. Jude Medical 29 mm, and an aortotomy was performed because of the previous cerebrovascular accident and to examine the pannus formation in the surrounding tissue. After aortotomy we saw that the cloth surrounding the outside was detached from the struts and loosened, and the Starr-Edwards caged-ball prosthesis was replaced with a St. Jude Medical 21 mm bileaflet valve prosthesis to achieve better hemodynamic performance and a lower INR in the follow-up period. The postoperative course was uneventful. An echocardiogram before discharge confirmed that the prosthetic valves were functioning normally.
Discussion The first Starr-Edwards caged-ball valve was used in the mitral position in 1960 and was introduced by Albert Starr in 1961 [1]. Despite complications such as a high pressure gradient, paravalvular leakage, pannus formation, thrombosis and thromboembolic events, which occur at variable frequencies, this artificial valve has been used worldwide in the past decades. With reported long-term durability and good hemodynamic performance, approximately 200,000 Starr-Edwards caged ball valves with several modifications have been implanted [2][3][4][5][6]. Nevertheless, the potential complications of this artificial valve remained a permanent problem. The design of the valve leads to an absence of central flow and allows only a lateral flow, which results in higher transvalvular gradients. Therefore, the combination of the nonphysiological surfaces of the valve and stasis due to higher gradients creates a predisposition for fibrous pannus or thrombus formation. Thrombosis and pannus formation mostly occur at the apex or at the base of the cage, leading either to stenosis or to a stuck valve. Some authors reported evidence of ball variance for the Starr-Edwards caged ball prosthesis [7,8]. In our case the patient still had a normally functioning Starr-Edwards caged ball valve, but at the time of excision the cloth surrounding the outside was detached from the struts and loosened. Structural valve deterioration and thromboembolism rates were reported to be higher in the aortic group by Godje and colleagues, who performed a study of patients who had aortic or mitral valve replacement with a Starr-Edwards caged ball prosthesis [3]. Here we report a patient in whom a Starr-Edwards caged ball prosthesis functioned well for 35 years and was still functioning well, although the cloth cover of the prosthesis was structurally worn. We replaced this functioning valve prophylactically with a bileaflet mechanical valve to achieve better hemodynamic performance and a lower INR in the follow-up period.
A non-invasive tool for detecting cervical cancer odor by trained scent dogs Background Cervical Cancer (CC) has become a public health concern of alarming proportions in many developing countries such as Mexico, particularly in low income sectors and marginalized regions. As such, an early detection is a key medical factor in improving not only their population’s quality of life but also its life expectancy. Interestingly, there has been an increase in the number of reports describing successful attempts at detecting cancer cells in human tissues or fluids using trained (sniffer) dogs. The great odor detection threshold exhibited by dogs is not unheard of. However, this represented a potential opportunity to develop an affordable, accessible, and non-invasive method for detection of CC. Methods Using clicker training, a male beagle was trained to recognize CC odor. During training, fresh CC biopsies were used as a reference point. Other samples used included cervical smears on glass slides and medical surgical bandages used as intimate sanitary pads by CC patients. A double-blind procedure was exercised when testing the beagle’s ability to discriminate CC from control samples. Results The beagle was proven able to detect CC-specific volatile organic compounds (VOC) contained in both fresh cervical smear samples and adsorbent material samples. Beagle’s success rate at detecting and discriminating CC and non-CC odors, as indicated by specificity and sensitivity values recorded during the experiment, stood at an overall high (>90%). CC-related VOC in adsorbent materials were detectable after only eight hours of use by CC patients. Conclusion Present data suggests different applications for VOC from the uterine cervix to be used in the detection and diagnosis of CC. Furthermore, data supports the use of trained dogs as a viable, affordable, non-invasive and, therefore, highly relevant alternative method for detection of CC lesions. Additional benefits of this method include its quick turnaround time and ease of use while remaining highly accurate and robust. Background Cervical cancer (CC) represents a serious public health concern worldwide among the female cancer spectrum. In Mexico, its incidence levels stand at an alarming 15.5% per year with a mortality rate of 12.8% [1]. Widely accepted in the scientific community, infection by Human Papillomavirus (HPV) is the main risk factor for CC development but its presence, however, is not sufficient for malignant transformation. In fact, a broad variety of co-factors and a significant number of molecular events exert influence in such process [2,3]. Furthermore, the reprogramming of energy metabolism is now part of the hallmarks of cancer [4], undoubtedly comprised of important biological capabilities acquired during the multi-step development of human tumors and constituting an organizing principle for rationalizing the complexities of neoplastic disease. Current standards for cancer diagnosis rely heavily on biopsy. In the case of CC, the standard extends to cytological and colposcopy procedures in addition to early detection of precursor lesions. With tests taking up to 1 month to return results, current diagnosis standards for cancer present an area of opportunity, particularly for developing countries and marginalized areas that face more severe issues such as the lack of proper medical and testing facilities. 
The present work had as a goal the introduction and improvement of a non-invasive tool to aid in detection of cervical cancer, to test detection of CC-associated VOC by a sniffer dog, and testing different methods (both invasive and non-invasive) of harboring such compounds. Methods Research met all ethical guidelines and practices, as overseen by the Comisión Nacional de Investigación Científica (Scientific Research National Committee) at the Instituto Mexicano del Seguro Social (Mexican Institute for Social Security, IMSS). A total of 20 fresh biopsies, 50 CC smear samples, and 30 healthy cervical smears samples were employed in this research. All biopsy samples were collected from patients who attended Brachytherapy Service at the Oncology Hospital, CMN-SXXI-IMSS, in Mexico City. Normal cervices without HPV infection or precancerous lesions were also collected for use as control samples from routine gynecological examination patients at the Colposcopy Clinic. Additionally, a total of 70 patients affected by invasive CC used medical adsorbent surgical bandages as intimate sanitary pads to be used as samples. Commercial intimate feminine sanitary pads with nanomaterial that absorbs odor and fluids and with added scent, such as Aloe vera or chamomile were also admitted after approximately 8 h of use. Usage of surgical bandages by healthy women lasted 1 h, 8 h, 12 h, or 24 h. Inclusion criteria for women participating in the study include: use of intimate vaginal scents and/or vaginal douches, being on any diet, alcohol or drug consumption, and age >20 years; women in their early or late phase menstrual period; women using oral contraceptives; one woman with diabetes; women who smoke; women with bacterial vaginosis; and three pregnant women of 3, 5, and 7 months. The majority of women resided in Mexico City, although others were from the state of México (1 h distance by car), Taxco, Guerrero (3 h distance by car), and Tuxtla Gutierrez, Chiapas (1.5 h distance by plane). Female participants, therefore, represent different ethnic groups from different environments with different lifestyles and diets which may have included a variety of spices and meats. All patients participating in this research provided prior approval and signed an informed consent form. Control samples, on the other hand, included, sink water, saline buffer, HPV vaccine (Gardasil [virus-like particles, VLP] 1 μg), plasmid DNA cloning of different fragments of the HPV genome, the CaSki cell line, aerosol tissue fixative, stem cell extracts (commercial products), white blood cells (WBC), red blood cells (RBC), earth/soil, exhaled breath, and sweaty finger. Biopsies were sectioned and paraffin embedded. Hematoxylin and eosin (H&E) stained sections were analyzed to confirm tumor presence of at least 50% per sample, including squamous cervical carcinoma and adenocarcinomas. The first scrape of smear samples was obtained using a cervical brush for routine cytological examination while the remaining material was deposited in 50 ml Falcon tube. First round of H&E smears was always subjected to evaluation by a pathologist to determine and confirm cytological status. Because of the possibility of having mixed cell types in biopsies, expectation was to get fewer amounts of exfoliated cells. All tests were carried out in a double-blind procedure. The two canine handlers participating in the study were experienced and present at all test times. 
They were responsible for recording results and did so wearing disposable polyethylene gloves at all times, exchanging them for a clean pair every two samples to avoid sample contamination. Dog training, positive conditioning clicker method. For the purpose of this research, one three-year-old male beagle was trained to identify and discriminate CC odor. To accomplish this, a 15-min training routine was conducted every morning, 5 days a week. During training, the beagle had his own cell to rest in, was allowed to play with other dogs without restriction and his regular diet composed of typical dog food remained the same. The beagle had no previous scientific experience but had been formerly trained for drug detection, however. Two groups of artifacts were used during training trials: 1) ten 20 cm cubic wood boxes with a 5 cm hole in the upper side, and 2) ten steel cylindrical containers measuring 15 cm of diameter and 30 cm high. The two groups were used interchangeably, but only one group would be used during a given trial. Artifacts were placed on the floor and arranged in a circle and about 50 cm apart from each other. Each artifact contained one sample. Samples were randomly arranged to include 9 healthy ones and 1 CC. Each trial comprised two runs. During the first run, the beagle was directed by the handlers to sniff all samples freely. The second time, the dog was directed to move towards the CC sample to instruct him which one was to be considered his target sample. This detection routine ( Fig. 1) remained the same throughout research. Upon accurate display of desired behavior, handlers would indicate his success to the beagle using traditional clicker training techniques and then rewarded him with food. In this manner, the beagle learned to identify the odor of a fresh CC biopsy as the target sample and sit down in front of it to indicate his findings. The target sample became a referential marker for further experiments and considered as the "CC-scent." Upon completion of the training program, the beagle was presented with a new challenge: discrimination between healthy and cervical cancer smears. With the intention of properly assessing detection probability, CC cells were decreased in amount with different exposure methods, replacing CC biopsies with cytological smears, which contain a significantly lower amount of cells. Finally, the medical surgical bandages (with no added perfume scent) were introduced as samples during trials. Afterwards, a new series of 270 new surgical bandages were analyzed (group 1): 170 from healthy women as controls, and 100 from patients with CC (some patients with endometrial cancer were included in this group). In order to avoid any pitfall molecule, present in the hospital-or clinic-rooms, all surgical bandages were used overnight in-home, and after that, each female subject deposited it into individual seal packaging bag and carried-out to the hospital or clinic to collect them. Under no circumstance were the bags opened in the hospital-or clinic-rooms and were always cleaned before use, to avoid any aromatic contamination. False Alerts (FA) were called in and recorded every time the beagle marked a "healthy" sample as the target in a clear manner. Each of these occurrences was followed by proper, random repositioning of the target sample before running a new trial. Data analysis Sensitivity is the primary parameter for measuring the dog's success in marking a sample (bandage used by the oncologic patient) as target. 
Specificity is used instead to measure the dog's performance in identifying patients without the disease [18]. Sensitivity and specificity values were calculated with 95% confidence intervals (95% CI); positive and negative predictive values were calculated using Epidat v3.1 software. The gold standard was the biopsy (for cervical cancer samples) or cervical cytology (for healthy subjects). Positive and negative predictive values were adjusted to the CC prevalence by using Bayes' theorem. Results Training to smell the fresh CC biopsy A vital challenge to this research was training the beagle to identify CC's volatile compounds. During his training, he worked only with a variety of CC biopsies of epidermoid and adenocarcinoma types. Healthy smears were only introduced later. The beagle's first three trials exhibited various FAs. His training routine, explained earlier, endured for 4 months before he was able to fully and unequivocally identify the odor, showing no signs of hesitation when pointing at target samples. Only then were cervical smears introduced into the sample pool, decreasing the amount of cells, both healthy and transformed, by several orders of magnitude. The smear samples challenge Smear samples, as was to be expected, presented vaginal mucus and cervical cells, as well as blood from oncological patients. Interestingly, when they were introduced into the sample pool, only the first trial presented an FA. Again, only when the beagle succeeded in fully and unequivocally identifying the target sample as such were medical adsorbent materials introduced into the sample pool. The surgical bandages challenge As in the smear samples, vaginal mucus and cervical cells were both present in all medical adsorbent material samples. However, traces of urine and blood were also detected. Surprisingly, neither represented an obstacle for the beagle in identifying target samples. Flying solo, discerning CC scent volatile compounds Research produced 873 test results from nearly 100 trials: 97 from cervical cancer smears and 776 from samples of healthy women. Meanwhile, the surgical bandages yielded 495 test results from nearly 60 trials: 55 from oncological patients and 440 from healthy subjects. Table 1 illustrates the results obtained from trials where smears and adsorbent material samples were used. Sensitivity registered for the two sample types was 92.78% and 96.36%, respectively; the corresponding specificity values stood at 99.10% and 99.55%; the positive predictive values at 92.78% and 96.36%; and the negative predictive values at 99.10% and 99.55%. The false negative rate registered was notably lower in the case of adsorbent material samples (Table 1), suggesting this type of sample might be more efficient for medical applications to identify CC odor. Unexpectedly, the beagle displayed interest in all samples containing endometrial cancer cells (n = 10), risking proper identification of CC odor and suggesting similarities in their volatile compounds. Commercial intimate feminine sanitary pads with added scent also triggered a particular interest in the beagle. Exhaustive analysis indicates the pad's material produced a false positive result. Hence, its inclusion as a capture method has been discarded. On the other hand, the beagle's performance seemed unaltered by inert or chemical materials in control samples such as vaginal douches, glass, cotton, lotions, cloned HPV DNA, the VLP vaccine, or live material such as blood or cells. Subjects' places of origin and lifestyle conditions were not significant indicators of FA results.
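The performance figures above can be reproduced from the underlying confusion-matrix counts, and the prevalence adjustment of the predictive values mentioned in the data-analysis section follows directly from Bayes' theorem. The sketch below uses counts reconstructed from the reported totals and rates for the smear samples (a reconstruction, not Table 1 itself) and an arbitrary, purely illustrative prevalence.

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, and sample-based predictive values."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sens, spec, ppv, npv

def prevalence_adjusted_ppv(sens, spec, prevalence):
    """Positive predictive value re-weighted to a population prevalence (Bayes' theorem)."""
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Counts reconstructed from the reported smear results:
# 97 cancer samples and 776 healthy samples, sensitivity 92.78%, specificity 99.10%.
tp, fn, fp, tn = 90, 7, 7, 769
sens, spec, ppv, npv = diagnostic_metrics(tp, fn, fp, tn)
print(f"sensitivity {sens:.2%}, specificity {spec:.2%}, PPV {ppv:.2%}, NPV {npv:.2%}")

# Illustrative only: PPV if the screened population had a 1% CC prevalence.
print(f"PPV at 1% prevalence ~ {prevalence_adjusted_ppv(sens, spec, 0.01):.2%}")
```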
Regarding the group 1 of samples analyzed, no FAs were registered for control samples while CC samples were marked correctly. Discussion Cervical Cancer is one of the most important health concerns among women in Mexico [19]. This research and, as such, the development of alternative methods for early and prompt detection of CC, represents a huge medical improvement for individuals and health institutions alike, potentially refocusing research efforts to other areas. In addition, it can provide great benefits to developing countries and marginalized regions exhibiting deficits in health services and facilities, which explain partially the high incidence of Cervical Cancer among their populations due to their extremely low income and widespread taboos and prejudices. Because of how damaging CC can be to female populations in these countries, addressing this health problem should be a priority. Inspired to contribute to global medical communities' efforts, our team suggests the use of medical adsorbent surgical bandage as a fast, inexpensive, simple, safe, easy to use, non-invasive tool for capturing CC volatile compounds when used as a sanitary pad by patients. Its ability to collect several of the body's metabolites such as scraped cells and mucus, urine drops, personal odor, sweat and sebum, which can be used to detect molecules related with cancer, in a manner that is acceptable for use by the patient makes it a viable solution (Fig. 2). Different methods for harboring CC odor were also explored in the development of a strategy to discriminate between healthy and cancerous samples. Having as a premise dog's ability to learn to identify cancer odor as specific organic compounds or "fingerprints," our team concluded CC biopsies harboring mostly transformed cells (including other kinds of cells) represent the best starting point for training a dog. The amount of cells per sample supported our original theory that CC biopsies harbored enough information for a dog to identify and learn cancer odor. Furthermore, biopsies could harbor CC's "fingerprint" without contaminant odors. Dog training procedures like the one employed in this research have been used before in successful detection of several cancer odors [5,8,12], supporting our original theory and conclusion. Our team was challenged with reducing the number of cells from samples to measure any correlation between the number of cells contained within a sample and a dog's ability to "pick up" the scent. Research results indicate there is none. Moreover, the beagle employed in this research displayed enormous ability in identifying CC odor regardless of the number of cells contained in the sample. Even more surprising, the beagle identified CC odor specifically rather than the subject or patient, discriminating and detecting specific CC samples. Research data, ultimately, supports the impressive ultra-fine olfactory system attributed to dogs [20,21]. In present day, the scent of a number of diseases, including cancer, can be detected through the use of lab methods such as chromatography [8,[19][20][21][22][23][24][25][26]. There are great differences, however, between using chromatography and a dog, having a detection threshold of parts per billion (ppb) and parts per trillion (ppt) respectively [27]. Our research team thinks that first screenings (presumptive and rapid prescreening) in isolated communities will become far more accessible if carried out by a trained dog rather than by sophisticated equipment. 
Obviously, this is presented as an alternative, and caution is a must, as are additional studies on the subject. In general, after CC detection during the first screening, patients should be clinically evaluated. Our suggested method allows for minimization of the time, resources, and money invested by the patient without sacrificing accuracy or robustness in test results. Based on current studies and the available data, our research team suggests that pad-based detection be used only as a first screening test. Chromatography of surgical bandages validates the presence of several volatile compounds in cervical cancer (Fig. 3). However, cervico-vaginal odor results only partially from cellular decomposition by microorganisms present in this area. Eight hours of use of the adsorbent material produces a complex mixture of molecules from different physiological and/or pathological fluids, sweat, urine, etc. Because of this, our research team considered that the cancerous odor could be masked or even completely imperceptible. However, the beagle's ability to identify CC odor seemed unaffected. In fact, the beagle was able to recognize specific substances related to cancer as memorized odors and even to detect the CC scent in different types of samples. In other words, samples collected by both invasive and noninvasive methods work for presenting "cervical cancerous odor" to a trained sniffer dog. Our findings are partially supported by a study in which vaginal self-sampling at home showed efficacy and cost-effectiveness for HPV detection in CC screening [28]. Additionally, our team considers that medical surgical bandages can become an environment-friendly tool for detecting cancer odor in the near future. Finally, the beagle displayed the ability to identify samples containing endometrial cancer. This could be explained partially by the biological capabilities acquired during the multi-step development of the tumors [4] and by the possibility that these two gynecological tumors share common volatile compounds. The latter is supported by a report by Horvath et al. (2008), which describes how a dog trained to detect ovarian cancer was also able to detect other types of cancer, ultimately resulting in a potential drop in specificity values [9]. Our research team is currently performing additional studies to provide clarity to the subject. Fig. 3: Comparison of gas chromatography-mass spectrometry (volatile organic compounds) of adsorbent bandages used by healthy and Cervical Cancer-affected women. After usage of the adsorbent pads for 8 hours, these were subjected to analysis by gas chromatography-mass spectrometry. The compounds were obtained using the following experimental conditions: hexane at 4 °C with a DB column of 1.25 mm × 60 m × 0.25 μm and helium as the carrier gas, employing Agilent gas chromatography-mass spectrometry equipment. The upper chromatogram represents a healthy woman, and the lower chromatogram a Cervical Cancer-affected patient. The x-axis represents the retention time in minutes, while the y-axis represents the curve area. The graph shows an example of the mass spectra of the following organic compounds: oxirane, 2-methyl-3-propyl-trans; 5H-tetrazol-5-amine; eicosane; and dibutyl phthalate (DP), present in the healthy woman (Healthy chromatogram); and 3-ethyl-3-methylheptane; 3,3-dimethyl-1-[2-carboxyphenyl]triazine; and DP in the cancer patient (Cervical Cancer chromatogram), where a clear difference in DP concentrations between the women was observed (vertical red rectangle).
The chemical structures of the organic compounds are also shown. If our theory is correct, the applications for surgical bandages as a non-invasive tool could broaden to include the detection of endometrial cancer and other tumors such as ovarian and breast cancers. Very recently, we have determined that sanitary pads are also able to collect VOC related to ovarian cancer samples, as well as from breast samples. We are also employing a mask to collect volatile compounds from exhaled breath for several cancers of the upper aero-digestive tract (data not shown). Conclusions Our research team is convinced that the trainer-dog partnership and the use of surgical bandages as the means to collect samples, both described in this research, are viable alternative tools for exploring and detecting cervical cancerous odor, even more so when paired together. The benefits of these tools include inexpensiveness, accuracy, ease of use, non-invasiveness, and high sensitivity and specificity. Applications for these tools extend to providing much needed medical attention for women from cultural backgrounds imposing several prohibitions, deep-rooted cultural taboos, or lack of health coverage. Additionally, the use of a trained dog for screening facilitates prevention campaigns in areas of difficult access, saving money and labor and preventing the loss of lives due to late diagnosis. Finally, surgical bandages could also be used to detect endometrial cancer and potentially other tumors such as ovarian and breast cancer. Acknowledgments We thank all women afflicted with cervical cancer, and those who are not afflicted, who participated in the present work, and we also thank Mr. Edgar Hernández-Rico and Emmanuel Salcedo for their help in writing the manuscript. MS is a recipient of a Fundación IMSS A.C. fellowship. Funding Coordinación de Investigación en Salud-Instituto Mexicano del Seguro Social R-785-003. Availability of data and materials All data generated or analysed during this study are included in this published article. Authors' contributions HG-F, AS-P, DF-V, and IH-G were involved in dog care and training, and AB-C, RG-P, TR-S, OG-V, and TA-G provided biological samples and clinical data. VS-A, OM, PR-M, and RL-R conducted the data analysis, and CB, K-T, and DM-R performed the statistical analysis. VA-C and JB-R performed the cytology and pathology analysis, and MM-R, MR-E, AM-C, VH-P, and J-RG provided support in the management of the samples and maintained the databases, while MS participated in the logistics, design, and writing of the manuscript. All authors read and approved the final manuscript. Competing interests The authors declare no competing interests. All authors have agreed to authorship and the order of authorship for this manuscript, and all authors have the appropriate permissions and rights to the reported data. Consent for publication Not applicable. Ethics approval and consent to participate This study was approved by the ethics committee of the Comisión Nacional de Investigación Científica, Comisión de Ética y Científica, Coordinación de Investigación en Salud at the Instituto Mexicano del Seguro Social (R-785-003). All samples were taken after signed informed consent from patients.
Animal Ethics The Animal Care and Use Committee acted according to the Animal Protection Law (of the Government of the Distrito Federal, Mexico) through the Basic Level Dog Training Law (NTCL CSPV0648.01) and the Behavioral Security Dog Unit and Management and Care of Dogs (NTCL CSPV0403.02) of SEP (Secretaría de Educación Pública, or Ministry of Public Education in English). The trainers are personnel certified by SEP and by The National Council of Normalization and Certification of Work Competencies (EC0058, CONOCER, www.conocer.gob.mx), the government office supporting the National Competencies System. The trainers also train dogs for different detection tasks, including drugs, natural disaster search and rescue, and weapons. To our knowledge, this is the first research on cancer odor in Mexico, and all of the procedures were conducted according to the previously mentioned laws to avoid any potential harm to the animal.
Review of BisoNet Abstraction Techniques . BisoNets represent relations of information items as networks. The goal of BisoNet abstraction is to transform a large BisoNet into a smaller one which is simpler and easier to use, although some information may be lost in the abstraction process. An abstracted BisoNet can help users to see the structure of a large BisoNet, or understand connections between distant nodes, or discover hidden knowledge. In this paper we review different approaches and techniques to abstract a large BisoNet. We classify the approaches into two groups: preference-free methods and preference-dependent methods. Introduction Bisociative information networks (BisoNets) [2] are a representation for many kinds of relational data. The BisoNet model is a labeled and weighted graph G = (V, E). For instance, in a BisoNet describing biological information, elements of the vertex set V are biological entities, such as genes, proteins, articles, or biological processes. Connections between vertexes are represented by edges E, which have types such as "codes for", "interacts with", or "is homologous to", and have weights to show how strong they are. BisoNets are often large. One example is Biomine 1 . It currently consists of about 1 million vertices and 10 million edges, so that it is difficult for users to directly visualize and explore it. One solution is to present to a user an abstract view of a BisoNet. We call this BisoNet abstraction. The goal of BisoNet abstraction is to transform a large BisoNet into one that is simpler and therefore easier to use, even though some information is lost in the abstraction process. An abstracted view can help users see the structure of a large BisoNet, or understand connections between distant nodes, or even discover new knowledge difficult to see in a large BisoNet. This chapter is a literature review of applicable approaches to BisoNet abstraction. An abstracted BisoNet can be obtained through different approaches. For example, a BisoNet can be simplified by removing irrelevant nodes or edges. Another example is that a BisoNet can be divided into several components, or some parts of a BisoNet can be replaced by general structures. Furthermore, user preference can be considered during abstraction. For instance, a user can specify which parts of a BisoNet should retain more details. Structure of the review. Although this chapter reviews potential techniques with the goal to abstract large BisoNets, the techniques present here are also applicable to general networks. In the rest of this chapter, we therefore use the general term "network" instead of "BisoNet". We first review methods which do not take user preference into account in Section 2, and then review methods in which a user can specify preference in Section 3. We conclude in Section 4. Preference-Free Methods In this section, we discuss network abstraction methods where the user has no control over how specific parts of the graph are handled (but there may be numerous other parameters for the user to set). Relative Neighborhood Graph The Relative Neighborhood Graph (RNG) [3,4] only contains edges whose two endpoints are relatively close: by definition, nodes a and b are connected by an edge if and only if there is no third node c which is closer to both endpoints a and b than a and b are to each other. RNG has originally been defined for points, but it can also be used to prune edges between nodes a and b that do have a shared close neighbor c. 
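To make the pruning rule concrete, here is a minimal illustrative sketch in Python that keeps an edge (a, b) only if no third node c is closer to both a and b than they are to each other. The distance matrix is made up, and the code is not taken from any of the cited works.

```python
def relative_neighborhood_edges(dist):
    """Edges of the relative neighborhood graph for a symmetric distance matrix.

    Edge (a, b) is kept unless some third node c satisfies
    max(dist[a][c], dist[b][c]) < dist[a][b].
    """
    n = len(dist)
    edges = []
    for a in range(n):
        for b in range(a + 1, n):
            blocked = any(
                max(dist[a][c], dist[b][c]) < dist[a][b]
                for c in range(n) if c != a and c != b
            )
            if not blocked:
                edges.append((a, b))
    return edges

# Hypothetical 4-node example; intermediate nodes block the longer edges.
d = [[0, 1, 2, 4],
     [1, 0, 1, 3],
     [2, 1, 0, 2],
     [4, 3, 2, 0]]
print(relative_neighborhood_edges(d))  # [(0, 1), (1, 2), (2, 3)]
```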
The relative neighborhood graph then is a superset of the Minimum Spanning Tree (MST) and a subset of Delaunay Triangulation (DT). According to Toussaint [3], RNG can in most cases capture a perceptually more significant subgraph than MST and DT. Node Centrality The field of social network analysis has produced several methods to measure the importance or centrality of nodes [5][6][7][8]. Typical definitions of node importance are the following. 1. Degree centrality simply means that nodes with more edges are more central. 2. Betweenness centrality [9][10][11] measures how influential a node is in connecting pairs of nodes. A node's betweenness is the number of times the node appears on the paths between all other nodes. It can be computed for shortest paths or for all paths [12]. Computation of a node's betweenness involves all paths between all pairs of nodes of a graph. This leads to high computational costs for large networks. 3. Closeness centrality [13] is defined as the sum of graph-theoretic distances from a given node to all others in the network. The distance can be defined as mean geodesic distance, or as the reciprocal of the sum of geodesic distances. Computation of a node's closeness also involves all paths between all pairs of nodes, leading to a high complexity. 4. Feedback centrality of a vertex is defined recursively by the centrality of its adjacent vertices. 5. Eigenvector centrality has also been proposed [14]. Node centrality measures focus on selecting important nodes, not on selecting a subgraph (of a very small number of separate components). Obviously, centrality measures can be used to identify least important nodes to be pruned. For large input networks and small output networks, however, the result of such straightforward pruning would often consist of individual, unconnected nodes, not an abstract network in the intended sense. Methods in the following subsections (2.3 and 2.4) are similar in this sense: they help to rank nodes individually based on their importance, but do not as such produce (connected) subgraphs. PageRank and HITS In Web graph analysis, PageRank algorithm [15,16] is proposed to find the most important web pages according to the web's link structure. The process can be understood as the probability of a random walk on a directed graph; the quality of each page depends on the number and quality of all pages that link to it. It emphasizes highly linked pages and their links. A closely related link analysis method is HITS (Hyperlink-Induced Topic Search) [17,18], which also aims to discover web pages of importance. Unlike PageRank, it has two values for each page, and is processed on a small subset of pages, not the whole web. Haveliwala [19] discusses the relative benefits of PageRank and HITS. In their basic forms, both PageRank and HITS value a node just according to the graph topology. An open question is to add edge weights to them. Birnbaum's Component Importance Birnbaum importance [20] is defined on (Bernoulli) random graphs where edge weights are probabilities of the existence of the edge. The Birnbaum importance of an edge depends directly on the overall effect of the existence of the edge. An edge whose removal has a large effect on the probability of other nodes to be connected, has a high importance. The importance of a node can be defined in terms of the total importance of its edges. This concept has been extended for two edges by Hong and Lei [21]. 
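Birnbaum importance, as defined above for random graphs, can be estimated with straightforward Monte Carlo sampling: for a chosen terminal pair, an edge's importance is the probability that it is "critical", i.e. that the terminals are connected when the edge exists but disconnected when it does not. The sketch below is a naive illustration on a toy graph with made-up edge probabilities, not an implementation from the cited literature.

```python
import random

def connected(n, edges, s, t):
    """Simple graph search: are s and t connected using only the given edges?"""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return t in seen

def birnbaum_importance(n, prob_edges, target_edge, s, t, samples=20000):
    """Monte Carlo estimate of P(s~t | edge present) - P(s~t | edge absent)."""
    others = [(e, p) for e, p in prob_edges if e != target_edge]
    critical = 0
    for _ in range(samples):
        present = [e for e, p in others if random.random() < p]
        if connected(n, present + [target_edge], s, t) and not connected(n, present, s, t):
            critical += 1
    return critical / samples

# Toy graph on 4 nodes; each entry is ((u, v), probability that the edge exists).
g = [((0, 1), 0.9), ((1, 3), 0.8), ((0, 2), 0.5), ((2, 3), 0.5)]
print(birnbaum_importance(4, g, (0, 1), s=0, t=3))  # expected value 0.8 * (1 - 0.25) = 0.6
```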
Graph Partitioning Inside a network, there often are clusters of nodes (called communities in social networks), within which connections are stronger, while connections between clusters are weaker and less frequent. In such a situation, a useful abstraction is to divide the network into clusters and present each one of them separately to the user. A prevalent class of approaches to dividing a network into small parts is based on graph partitioning [22,23]. The basic goal is to divide the nodes into subsets of roughly equal size and minimize the sum of weights of edges crossing different subsets. This problem is NP-complete. However, many algorithms have been proposed to find a reasonably good partition. Popular graph partitioning techniques include spectral bisection methods [24,25] and geometric methods [26,27]. While they are quite elegant, they have some downsides. Spectral bisection in its standard form is computationally expensive for very large networks. The geometric methods in turn require coordinates of vertices of the graph. Another approach is multilevel graph partitioning [28,29]. It first collapses sets of nodes and edges to obtain a smaller graph and partitions the small graph, and then refines the partitioning while projecting the smaller graph back to the original graph. The multilevel method combines a global view with local optimization to reduce cut sizes. An issue with many of these partitioning methods is that they only bisect networks [30]. Good results are not guaranteed by repeating bisections when more than two subgroups are needed. For example, if the graph essentially has three subgroups, there is no guarantee that these three subgroups can be discovered by finding the best division into two and then dividing one of them again. Other methods take a rough partitioning as input. A classical representative is Kernighan-Lin (K-L) algorithm [31]. It iteratively looks for a subset of vertices, from each part of the given graph, so that swapping them will lead to a partition with smaller edge-cut. It does not create partitions but rather improves them. The first (very!) rough partitioning can be obtained by randomly partitioning the set of nodes. A weakness of the The K-L method is that it only has a local view of the problem. Various modifications of K-L algorithm have been proposed [32,33], one of them dealing with an arbitrary number of parts [32]. Hierarchical Clustering Another popular technique to divide networks is hierarchical clustering [34]. It computes similarities (or distances) between nodes, for which typical choices include Euclidean distance and Pearson correlation (of neighborhood vectors), as well as the count of edge-independent or vertex-independent paths between nodes. Hierarchical clustering is well-known for its incremental approach. Algorithms for hierarchical clustering fall into agglomerative or divisive class. In an agglomerative process, each vertex is initially taken as an individual group, then the closest pair of groups is iteratively merged until a single group is constructed or some qualification is met. Newman [35] indicates that agglomerative processes frequently fail to detect correct subgroups, and it has tendency to find only the cores of clusters. The divisive process iteratively removes edges between the least similar vertices, thus it is totally the opposite of an agglomerative method. Obviously, other clustering methods can be applied on nodes (or edges) as well to partition a graph. 
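The swap-based refinement idea behind the Kernighan-Lin algorithm discussed above can be conveyed with a deliberately naive hill-climbing variant: repeatedly apply the single vertex swap (one vertex from each side) that most reduces the weighted edge-cut, and stop when no swap helps. This is a simplification for illustration (true K-L performs whole passes with gain bookkeeping), and the example graph is made up.

```python
def cut_weight(edges, part_a):
    """Total weight of edges crossing between part_a and its complement."""
    return sum(w for u, v, w in edges if (u in part_a) != (v in part_a))

def greedy_swap_refinement(edges, part_a, part_b):
    """Naive hill-climbing simplification of the Kernighan-Lin refinement idea."""
    part_a, part_b = set(part_a), set(part_b)
    improved = True
    while improved:
        improved = False
        base = cut_weight(edges, part_a)
        best = None
        for a in part_a:
            for b in part_b:
                trial = (part_a - {a}) | {b}
                gain = base - cut_weight(edges, trial)
                if gain > 0 and (best is None or gain > best[0]):
                    best = (gain, a, b)
        if best is not None:
            _, a, b = best
            part_a.remove(a); part_a.add(b)
            part_b.remove(b); part_b.add(a)
            improved = True
    return part_a, part_b, cut_weight(edges, part_a)

# Hypothetical weighted graph with two natural clusters {0,1,2} and {3,4,5},
# starting from a poor initial partition.
edges = [(0, 1, 3), (1, 2, 3), (0, 2, 3), (3, 4, 3), (4, 5, 3), (3, 5, 3), (2, 3, 1)]
print(greedy_swap_refinement(edges, {0, 1, 5}, {2, 3, 4}))  # ends with cut weight 1
```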
Edge Betweenness One approach to finding a partitioning is through removing edges. This is similar to divisive hierarchical clustering and is based on the principle that the edges which connect communities usually have high betweenness [36]. Girvan and Newman define edge betweenness as the number of paths that run along a given edge [35]. It can be calculated using shortest-path betweenness, random-walk betweenness, or current-flow betweenness. The authors first use edge centrality indices to find community boundaries. They then remove high-betweenness edges in a divisive process, which eventually leads to a division of the original network into separate parts. This method has a high computational cost: in order to compute each edge's betweenness, one must consider all paths in which it appears. Many authors have proposed different approaches to speed up the algorithm [37,38]. Frequent Subgraphs A frequent subgraph may be considered a general pattern whose instances can be replaced by a label of that pattern (i.e., a single node or edge representing the pattern). The motivation for this is two-fold. Technically, this operation can be seen as compression. On the other hand, frequent patterns possibly reflect some semantic structures of the domain and are therefore useful candidates for replacement. Two early methods for frequent subgraph mining use frequent probabilistic rules [39] and compression of the database [40]. Some early approaches use greedy, incomplete schemes [41,42]. Many of the frequent subgraph mining methods are based on the Apriori algorithm [43], for instance AGM [44] and FSG [45,46]. However, such methods usually suffer from complicated and costly candidate generation and from the high computation time of subgraph isomorphism testing [47]. To circumvent these problems, gSpan [47] explores depth-first search in frequent subgraph mining. CloseGraph [48] in turn mines closed frequent graphs, which reduces the size of the output without losing any information. The Spin method [49] only looks for maximal connected frequent subgraphs. Most of the methods mentioned above consider a database of graphs as input, not a single large graph. More recently, several methods have been proposed to find frequent subgraphs also in a single input graph [50][51][52][53].
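The following is a minimal sketch of the divisive, edge-betweenness-based idea discussed above: repeatedly removing the edge with the highest betweenness until the network falls apart into the desired number of components. The example graph and the target number of parts are illustrative, and networkx is assumed.

```python
# Sketch of divisive clustering by repeated removal of the highest-betweenness
# edge (the Girvan-Newman idea described above); illustrative only.
import networkx as nx

def divisive_split(G, target_parts=2):
    H = G.copy()
    while nx.number_connected_components(H) < target_parts and H.number_of_edges() > 0:
        eb = nx.edge_betweenness_centrality(H)   # expensive: all-pairs shortest paths
        worst = max(eb, key=eb.get)              # edge most "between" communities
        H.remove_edge(*worst)
    return list(nx.connected_components(H))

G = nx.karate_club_graph()
for i, part in enumerate(divisive_split(G, target_parts=2)):
    print(i, sorted(part))

# networkx also ships a ready-made generator:
# next(nx.algorithms.community.girvan_newman(G)) yields the first split.
```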
The authors also proposed a method based on electricity analogies, aiming at maximizing electrical currents in a network of resistors. However, Tong and Faloutsos later pointed out the weaknesses of using the delivered-current criterion as a measure of goodness of connection [57]: it only deals with a pair of query nodes, and it is sensitive to the order of the query nodes. They therefore propose a method to extract a subgraph with strong connections to an arbitrary number of query nodes. For random graphs, work from reliability research suggests network reliability as a suitable measure [58]. This is defined as the probability that the query nodes are connected, given that edges fail randomly according to their probabilities. This approach was later formulated more exactly, and algorithms were proposed, by Hintsanen and Toivonen [59], who restrict the set of terminals to a pair and propose two incremental algorithms for the problem. A logical counterpart of this work, in the field of probabilistic logic learning, is based on ProbLog [60]. In a ProbLog program, each Prolog clause is labeled with a probability, and the program can then be used to compute the success probabilities of queries. In the theory compression setting for ProbLog [61], the goal is to extract a subprogram of limited size that maximizes the success probability of given queries. The authors use subgraph extraction as the application example. Detecting Interesting Nodes or Paths Some techniques aim to detect interesting paths and nodes with respect to given nodes. Lin and Chapulsky [62] focus on determining novel, previously unknown paths and nodes from a labeled graph. Based on computing frequencies of similar paths in the data, they use rarity as a measure to find interesting paths or nodes with respect to the given nodes. An alternative would be to use node centrality to measure the relative importance. White and Smyth [63] define and compute the importance of nodes in a graph relative to one or more given root nodes. They have also pointed out the advantages and disadvantages of such measurements based on shortest paths, k-short paths, and k-short node-disjoint paths. Personalized PageRank On the basis of PageRank, Personalized PageRank (PPR) has been proposed to personalize the ranking of web pages. It assigns importance according to the query or user preferences. Early work in this area includes Jeh and Widom [64] and Haveliwala [19]. Later, Fogaras et al. [65] proposed improved methods for the problem. An issue for network abstraction with these approaches is that they can identify relevant individual nodes, but not a relevant subgraph. Exact Subgraph Search Some substructures may represent obvious or general knowledge, which may moreover occur frequently. Complementary to the approach of Subsection 2.8, where such patterns are identified automatically, here we consider user-input patterns or replacement rules. We first introduce methods that find all exact instances of a specified subgraph. Finding all exact instances of a graph structure reduces to the subgraph isomorphism problem, which is NP-complete. Isomorphisms are mappings of node and edge labels that preserve the connections in the subgraph. Ullmann [66] has proposed a well-known algorithm to enumerate the isomorphisms with a refinement procedure that improves on brute-force tree-search enumeration. Cordella et al. [67] include more selective feasibility rules to prune the state search space in their VF algorithm.
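To illustrate Personalized PageRank as described above, the following sketch biases the random-walk restart toward a small set of query nodes; the graph and the query nodes are illustrative, and networkx's pagerank with a personalization vector is assumed.

```python
# Minimal sketch of Personalized PageRank: the ranking is biased toward the
# query nodes via the personalization (restart) vector.
import networkx as nx

G = nx.karate_club_graph()
query_nodes = [0, 33]

# Restart probability mass concentrated on the query nodes.
personalization = {n: 0.0 for n in G}
for q in query_nodes:
    personalization[q] = 1.0 / len(query_nodes)

ppr = nx.pagerank(G, alpha=0.85, personalization=personalization)
ranked = sorted(ppr, key=ppr.get, reverse=True)
print("nodes most relevant to the query:", ranked[:10])

# As noted above, this ranks individual nodes; turning the top-ranked nodes
# into a connected, relevant subgraph is a separate extraction step.
```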
A faster algorithm, GraphGrep [68], builds an index of a database of graphs and then uses filtering and exact matching to find isomorphisms. The database is indexed with paths, which are easier to manipulate than trees or graphs. As an alternative, GIndex [69] relies on frequent substructures to index a graph database. Similarity Subgraph Search A more flexible search is to find graphs that are similar, but not necessarily identical, to the query. Two kinds of similarity search seem interesting in the context of network abstraction. The first is the K-Nearest-Neighbors (K-NN) query, which reports the K substructures most similar to the user's input; the other is the range query, which returns subgraphs within a specific dissimilarity range of the user's input. These definitions of the problem imply computing a similarity measure between two subgraphs. The edit distance between two graphs has been used for that purpose [70]: it generally refers to the cost of transforming one object into the other. For graphs, the transformations are the insertion and removal of vertices and edges, and the changing of attributes on vertices and edges. As graphs have mappings, the edit distance between graphs is the minimum distance over all mappings. Tian et al. [71] propose a distance model containing three components: one measures the structural differences, a second component is the penalty associated with matching two nodes with different labels, and the third component measures the penalty for the gap nodes, i.e., nodes in the query that cannot be mapped to any node in the target graph. Another family of similarity measures is based on the maximum common subgraph of two graphs [72]. Fernandez and Valiente [73] propose a graph distance metric based on both the maximum common subgraph and the minimum common supergraph. The maximum percentage of edges in common has also been used as a similarity measure [74]. Processing pairwise comparisons is very expensive in terms of computational time. Grafil [74] and PIS [75] are both based on GIndex [69], indexing the database by frequent substructures. The concept of graph closure [70] represents the union of graphs, by recording the union of edge labels and vertex labels given a mapping. The derived algorithm, Closure-tree, organizes graphs in a hierarchy where each node summarizes its descendants by a graph closure; this can improve the efficiency of similarity queries and avoid some disadvantages of path-based and frequent-substructure methods. The authors of SAGA (Substructure Index-based Approximate Graph Alignment) [71] propose the FragmentIndex technique, which indexes small and frequent substructures. It is efficient for small graph queries; processing large graph queries, however, is much more expensive. TALE (Tool for Approximate Subgraph Matching of Large Queries Efficiently) [76] is another approximate subgraph matching system. Its authors propose the NH-Index (Neighborhood Index) to index and capture the local graph structure of each node. An alternative approach uses structured graph decomposition to index a graph database [77].
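The following sketch illustrates the two search modes discussed above with networkx: exact matching via subgraph isomorphism, and similarity via graph edit distance. The query pattern and example graphs are illustrative; this is not the indexing machinery of GraphGrep, GIndex, or SAGA, only the underlying primitives.

```python
# Sketch of exact and similarity subgraph search with networkx.
import networkx as nx
from networkx.algorithms import isomorphism

G = nx.karate_club_graph()
query = nx.cycle_graph(3)          # pattern: a triangle

# Exact search: enumerate (a few) subgraph isomorphisms of the query in G.
matcher = isomorphism.GraphMatcher(G, query)
matches = []
for mapping in matcher.subgraph_isomorphisms_iter():
    matches.append(mapping)
    if len(matches) >= 5:          # NP-complete in general, so cap the output
        break
print("example matches:", matches)

# Similarity search: edit distance between two small graphs
# (cost of node/edge insertions, deletions and relabelings).
g1 = nx.path_graph(4)
g2 = nx.cycle_graph(4)
print("edit distance:", nx.graph_edit_distance(g1, g2))
```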
First, we noticed that different node ranking measures (Sections 2.2-2.4) are useful for picking out important nodes, as evidenced by search engines, but the result is just that: a set of nodes. How to better use those ideas to find a connected, relevant sub-BisoNet is an open question. Second, although there are many methods for partitioning a BisoNet (Sections 2.5-2.7), their computational complexity is usually prohibitive for large BisoNets, such as Biomine, with millions of nodes and edges. Obviously, partitioning would be a valuable tool for BisoNet abstraction there. Third, we observed that some of the more classical graph problems have been researched much more intensively for graph databases consisting of a number of graphs than for a single large graph. This holds especially for frequent subgraphs (Section 2.8) and subgraph search (Section 3.5). Finally, a practical exploration system needs an integrated approach to abstraction, using several of the techniques reviewed here to complement each other in producing a simple and useful abstract BisoNet.
Conditional Rényi Divergences and Horse Betting Motivated by a horse betting problem, a new conditional Rényi divergence is introduced. It is compared with the conditional Rényi divergences that appear in the definitions of the dependence measures by Csiszár and Sibson, and the properties of all three are studied with emphasis on their behavior under data processing. In the same way that Csiszár’s and Sibson’s conditional divergence lead to the respective dependence measures, so does the new conditional divergence lead to the Lapidoth–Pfister mutual information. Moreover, the new conditional divergence is also related to the Arimoto–Rényi conditional entropy and to Arimoto’s measure of dependence. In the second part of the paper, the horse betting problem is analyzed where, instead of Kelly’s expected log-wealth criterion, a more general family of power-mean utility functions is considered. The key role in the analysis is played by the Rényi divergence, and in the setting where the gambler has access to side information, the new conditional Rényi divergence is key. The setting with side information also provides another operational meaning to the Lapidoth–Pfister mutual information. Finally, a universal strategy for independent and identically distributed races is presented that—without knowing the winning probabilities or the parameter of the utility function—asymptotically maximizes the gambler’s utility function. Introduction As shown by Kelly [1,2], many of Shannon's information measures appear naturally in the context of horse gambling when the gambler's utility function is expected log-wealth. Here, we show that under a more general family of utility functions, gambling also provides a context for some of Rényi's information measures. Moreover, the setting where the gambler has side information motivates a new Rényi-like conditional divergence, which we study and compare to other conditional divergences. The proposed family of utility functions in the context of gambling with side information also provides another operational meaning to the Rényi-like mutual information that was recently proposed by Lapidoth and Pfister [3]: it measures the gambler's gain from the side information as measured by the increase in the minimax value of the two-player zero-sum game in which the bookmaker picks the odds and the gambler then places the bets based on these odds and her side information. Deferring the gambling-based motivation to the second part of the paper, we first describe the different conditional divergences and study some of their properties with emphasis on their behavior under data processing. We also show that the new conditional Rényi divergence relates to the Lapidoth-Pfister mutual information in much the same way that Csiszár's and Sibson's conditional divergences relate to their corresponding mutual informations. Before discussing the conditional divergences, we first recall other information measures. The Kullback-Leibler divergence (or relative entropy) is an important concept in information theory and statistics [2,[4][5][6]. It is defined between two probability mass functions (PMFs) P and Q over a finite set X as where log(·) denotes the base-2 logarithm. 
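The displayed definitions did not survive extraction; for reference, the standard formulas the text is describing (with base-2 logarithms, as stated above) are the Kullback-Leibler divergence and, introduced in the next passage, the Rényi divergence of order α:

```latex
% Standard definitions referred to in the surrounding text (a reconstruction).
\[
  D(P \,\|\, Q) \;=\; \sum_{x \in \mathrm{supp}(P)} P(x) \log \frac{P(x)}{Q(x)},
\]
\[
  D_\alpha(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1}
    \log \sum_{x \in \mathcal{X}} P(x)^{\alpha}\, Q(x)^{1-\alpha},
  \qquad \alpha \in (0,1) \cup (1,\infty).
\]
```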
Defining a conditional Kullback-Leibler divergence is straightforward because, as simple algebra shows, the two natural approaches lead to the same result: P(x) D(P Y|X=x Q Y|X=x ) (2) = D(P X P Y|X P X Q Y|X ), where supp(P) {x ∈ X : P(x) > 0} denotes the support of P, and in (3) and throughout P X P Y|X denotes the PMF on X × Y that assigns (x, y) the probability P X (x) P Y|X (y|x). The Rényi divergence of order α [7,8] between two PMFs P and Q is defined for all positive α's other than one as A conditional Rényi divergence can be defined in more than one way. In this paper, we consider the following three definitions, two classic and one new: D s α (P Y|X Q Y|X |P X ) D α (P X P Y|X P X Q Y|X ), where (5) is inspired by Csiszár [9]; (6) is inspired by Sibson [10]; and (7) is motivated by the horse betting problem discussed in Section 9. The first two conditional Rényi divergences were used to define the Rényi measures of dependence of Csiszár I c α (X; Y) [9] and of Sibson I s α (X; Y) [10]: where the minimization is over all PMFs on the set Y. (Gallager's E 0 function [11] and I s α (X; Y) are in one-to-one correspondence; see (65) below.) The analogous minimization of D l α (·) leads to the Lapidoth-Pfister mutual information J α (X; Y) [3]: where (11) is proved in Proposition 5. The first part of the paper is structured as follows: In Section 2, we discuss some preliminaries. In Sections 3-5, we study the properties of the three conditional Rényi divergences and their associated measure of dependence. In Section 6, we express the Arimoto-Rényi conditional entropy H α (X|Y) and the Arimoto measure of dependence I a α (X; Y) [12] in terms of D l α (P X|Y U X |P Y ). In Section 7, we relate the conditional Rényi divergences to each other and discuss the relations between the Rényi dependence measures. The second part of the paper deals with horse gambling under our proposed family of power-mean utility functions. It is in this context that the Rényi divergence (Theorem 9) and the conditional Rényi divergence D l α (·) (Theorem 10) appear naturally. More specifically, consider a horse race with a finite nonempty set of horses X , where a bookmaker offers odds o(x)-for-1 on each horse x ∈ X , where o : X → (0, ∞) [2] (Section 6.1). A gambler spends all her wealth placing bets on the horses. The fraction of her wealth that she bets on Horse x ∈ X is denoted b(x) ≥ 0, which sums up to 1 over x ∈ X , and the PMF b is her "betting strategy." The winning horse, which we denote X, is drawn according to the PMF p, where we assume p(x) > 0 for all x ∈ X . The wealth relative (or end-to-beginning wealth ratio) is the random variable Hence, given an initial wealth γ, the gambler's wealth after the race is γ S. We seek betting strategies that maximize the utility function where β ∈ R is a parameter that accounts for the risk sensitivity. This optimization generalizes the following cases: (a) In the limit as β tends to −∞, we optimize the worst-case return. The optimal strategy is risk-free in the sense that S does not depend on the winning horse (see Proposition 8). If β = 0, then we optimize E[log S], which is known as the doubling rate [2] (Section 6.1). The optimal strategy is proportional betting, i.e., to choose b = p (see Remark 4). (c) If β = 1, then we optimize E[S], the expected return. The optimal strategy is to put all the money on a horse that maximizes p(x) o(x) (see Proposition 9). In general, if β ≥ 1, then it is optimal to put all the money on one horse (see Proposition 9). 
This is risky: if that horse loses, the gambler will go broke. (e) In the limit as β tends to +∞, we optimize the best-case return. The optimal strategy is to put all the money on a horse that maximizes o(x) (see Proposition 10). Note that, for β = 0 and η 1 − β, maximizing U β is equivalent to maximizing which is known in the finance literature as Constant Relative Risk Aversion (CRRA) [13,14]. We refer to our utility function as "power mean" because it can be written as the logarithm of a weighted power mean [15,16]: Because the power mean tends to the geometric mean as β tends to zero [15] (Problem 8.1), U β is continuous at β = 0: Campbell [17,18] used an exponential cost function with a similar structure to (15) to provide an operational meaning to the Rényi entropy in source coding. Other information-theoretic applications of exponential moments were studied in [19]. The second part of the paper is structured as follows: In Section 8, we relate the utility function U β to the Rényi divergence (Theorem 9) and derive its optimal gambling strategy. In Section 9, we consider the situation where the gambler observes side information prior to betting, a situation that leads to the conditional Rényi divergence D l α (·) (Theorem 10) and to a new operational meaning for the measure of dependence J α (X; Y) (Theorem 11). In Section 10, we consider the situation where the gambler invests only part of her money. In Section 11, we present a universal strategy for independent and identically distributed (IID) races that requires neither knowledge of the winning probabilities nor of the parameter β of the utility function and yet asymptotically maximizes the utility function for all PMFs p and all β ∈ R. Preliminaries Throughout the paper, log(·) denotes the base-2 logarithm, X and Y are finite sets, P XY denotes a joint PMF over X × Y, Q X denotes a PMF over X , and Q Y denotes a PMF over Y. An expression of the form P X P Y|X denotes the PMF on X × Y that assigns (x, y) the probability P X (x) P Y|X (y|x). We use P and Q as generic PMFs over a finite set X . We denote by supp (P) {x ∈ X : P(x) > 0} the support of P, and by P (X ) the set of all PMFs over X . When clear from the context, we often omit sets and subscripts: for example, we write ∑ for P X (x), and P(y|x) for P Y|X (y|x). When P(x) is 0, we define the conditional probability P(y|x) as 1/|Y |. The conditional distribution of Y given X = x is denoted by P Y|X=x , thus We denote by 1{condition} the indicator function that is one if the condition is satisfied and zero otherwise. In the definition of the Kullback-Leibler divergence in (1), we use the conventions In the definition of the Rényi divergence in (4), we read P(x) α Q(x) 1−α as P(x) α /Q(x) α−1 for α > 1 and use the conventions For α being zero, one, or infinity, we define by continuous extension of (4) The Rényi divergence for negative α is defined as (We use negative α in the proof of Proposition 1 (e) below and in Remark 6. More about negative orders can be found in [8] (Section V). For other applications of negative orders, see [20] (Proof of Theorem 1 and Example 1).) The Rényi divergence satisfies the following basic properties: Proposition 1. Let P and Q be PMFs. Then, the Rényi divergence D α (P Q) satisfies the following: (h) (Data-processing inequality.) Let A X |X be a conditional PMF, and define the PMFs Then, for all α ∈ [0, ∞], Proof. See Appendix A. 
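As a small numerical companion to the Rényi divergence and to the data-processing inequality of Proposition 1 (h), here is a sketch in Python; the distributions and the channel are randomly generated examples, not quantities from the paper.

```python
# Numerical sketch of the Renyi divergence (in bits) and a check of the
# data-processing inequality of Proposition 1 (h) on a random example.
import numpy as np

def renyi_divergence(p, q, alpha):
    """D_alpha(p||q) in bits, for alpha > 0, alpha != 1, with full supports."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.log2(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0)

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(4))
q = rng.dirichlet(np.ones(4))
A = rng.dirichlet(np.ones(3), size=4)      # a channel A_{X'|X}: rows sum to 1

p_out, q_out = p @ A, q @ A                # processed distributions

for alpha in (0.5, 2.0, 10.0):
    assert renyi_divergence(p_out, q_out, alpha) <= renyi_divergence(p, q, alpha) + 1e-9
print("data-processing inequality holds on this example")
```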
All three conditional Rényi divergences reduce to the unconditional Rényi divergence when both P Y|X and Q Y|X are independent of X: Remark 1. Let P Y , Q Y , and P X be PMFs. Then, for all α ∈ [0, ∞], Proof. This follows from the definitions of D c α (·), D s α (·), and D l α (·) in (5)-(7). Csiszár's Conditional Rényi Divergence For a PMF P X and conditional PMFs P Y|X and Q Y|X , For α ∈ (0, 1) ∪ (1, ∞), which follows from the definition of the Rényi divergence in (4). For α being zero, one, or infinity, we obtain from (21)-(23) and (2) Augustin [21] and later Csiszár [9] defined the measure of dependence Augustin used this measure to study the error exponents for channel coding with input constraints, while Csiszár used it to study generalized cutoff rates for channel coding with composition constraints. Nakiboglu [22] studied more properties of I c α (X; Y). Inter alia, he analyzed the minimax properties of the Augustin capacity where A ⊆ P (X ) is a constraint set. The Augustin capacity is used in [23] to establish the sphere packing bound for memoryless channels with cost constraints. The rest of the section presents some properties of D c α (·). Being an average of Rényi divergences (see (29)), D c α (·) inherits many properties from the Rényi divergence: Let P X be a PMF, and let P Y|X and Q Y|X be conditional PMFs. Then, , then D c α (P Y|X Q Y|X |P X ) = 0 if and only if P Y|X=x = Q Y|X=x for all x ∈ supp(P X ) . Proof. These follow from (29) and the properties of the Rényi divergence (Proposition 1). For Parts (f) and (g), recall that a nonnegative weighted sum of concave functions is concave. We next consider data-processing inequalities for D c α (·). We distinguish between processing Y and processing X. The data-processing inequality for processing Y follows from the data-processing inequality for the (unconditional) Rényi divergence: Theorem 1. Let P X be a PMF, and let P Y|X and Q Y|X be conditional PMFs. For a conditional PMF A Y |XY , define Then, for all α ∈ [0, ∞], Proof. See Appendix B. The following data-processing inequality for processing X holds for α ∈ [0, 1] (as shown in Example 1 below, it does not extend to α ∈ (1, ∞]): Let P X be a PMF, and let P Y|X and Q Y|X be conditional PMFs. For a conditional PMF B X |X , define the PMFs Then, for all α ∈ [0, 1], Note that P X , P Y|X , and Q Y|X in Theorem 2 can be obtained from the following marginalizations: Proof of Theorem 2. See Appendix C. As a special case of Theorem 2, we obtain the following relation between the conditional and the unconditional Rényi divergence: Corollary 1. For a PMF P X and conditional PMFs P Y|X and Q Y|X , define the marginal PMFs Then, for all α ∈ [0, 1], Proof. See Appendix D. Consider next α ∈ (1, ∞]. It turns out that Corollary 1, and hence Theorem 2, cannot be extended to these values of α (not even if Q Y|X is restricted to be independent of X, i.e., if Q Y|X = Q Y ): Then, for every α ∈ (1, ∞], there exists an ∈ (0, 1) such that where the PMF P Y is defined by (46) and, irrespective of , satisfies P Y (0) = P Y (1) = 0.5. Proof. See Appendix E. The concavity and convexity properties of D s α (·) and I s α (X; Y) were studied by Ho-Verdú [24]. More properties of I s α (X; Y) were collected by Verdú [25]. The maximization of I s α (X; Y) with respect to P X and the minimax properties of D s α (·) were studied by Nakiboglu [26] and Cai-Verdú [27]. The conditional Rényi divergence D s α (·) was used by Fong and Tan [28] to establish strong converse theorems for multicast networks. 
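The displayed formula (29) is not reproduced here, but the text describes Csiszár's conditional divergence as a P_X-weighted average of the per-x Rényi divergences D_α(P_{Y|X=x} || Q_{Y|X=x}); under that reading, a minimal numerical sketch with illustrative inputs is the following.

```python
# Sketch of Csiszar's conditional Renyi divergence, read (as the text above
# indicates) as the P_X-weighted average of the per-x Renyi divergences.
import numpy as np

def renyi_div(p, q, alpha):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.log2(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0)

def cond_renyi_csiszar(P_X, P_YgX, Q_YgX, alpha):
    """P_X-weighted average; P_YgX, Q_YgX have shape (|X|, |Y|), rows are PMFs."""
    return sum(px * renyi_div(P_YgX[x], Q_YgX[x], alpha)
               for x, px in enumerate(P_X) if px > 0)

rng = np.random.default_rng(1)
P_X = rng.dirichlet(np.ones(3))
P_YgX = rng.dirichlet(np.ones(4), size=3)   # rows: P_{Y|X=x}
Q_YgX = rng.dirichlet(np.ones(4), size=3)   # rows: Q_{Y|X=x}
print(cond_renyi_csiszar(P_X, P_YgX, Q_YgX, alpha=2.0))
```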
Yu and Tan [29] analyzed channel resolvability, among other measures, in terms of D s α (·). From (61) we see that Gallager's E 0 function [11], which is defined as is in one-to-one correspondence to Sibson's measure of dependence: Gallager's E 0 function is important in channel coding: it appears in the random coding exponent [30] and in the sphere packing exponent [31,32] (see also Gallager [11]). The exponential strong converse theorem proved by Arimoto [33] also uses the E 0 function. Polyanskiy and Verdú [34] extended the exponential strong converse theorem to channels with feedback. Augustin [21] and Nakiboglu [35,36] extended the sphere packing bound to channels with feedback. The rest of the section presents some properties of D s α (·). Because D s α (·) can be written as an (unconditional) Rényi divergence (see (54)), it inherits many properties from the Rényi divergence: Proposition 3. Let P X be a PMF, and let P Y|X and Q Y|X be conditional PMFs. Then, Proof. These follow from (54) and the properties of the Rényi divergence (Proposition 1). We next consider data-processing inequalities for D s α (·). We distinguish between processing Y and processing X. The data-processing inequality for processing Y follows from the data-processing inequality for the (unconditional) Rényi divergence: Theorem 3. Let P X be a PMF, and let P Y|X and Q Y|X be conditional PMFs. For a conditional PMF A Y |XY , define Then, for all α ∈ [0, ∞], The data-processing inequality for processing X similarly follows from the data-processing inequality for the (unconditional) Rényi divergence: Theorem 4. Let P X be a PMF, and let P Y|X and Q Y|X be conditional PMFs. For a conditional PMF B X |X , define the PMFs Then, for all α ∈ [0, ∞], Proof. See Appendix G. As a special case of Theorem 4, we obtain the following relation between the conditional and the unconditional Rényi divergence: Let P X be a PMF, and let P Y|X and Q Y|X be conditional PMFs. Define the marginal PMFs Then, for all α ∈ [0, ∞], Proof. This follows from Theorem 4 in the same way that Corollary 1 followed from Theorem 2. New Conditional Rényi Divergence Let P X be a PMF, and let P Y|X and Q Y|X be conditional PMFs. where (78) follows from the definition of the Rényi divergence in (4). (Except for the sign, the exponential averaging in (77) is very similar to the one of the Arimoto-Rényi conditional entropy; compare with (147) below.) For α being zero, one, or infinity, we define by continuous extension of (77) This conditional Rényi divergence has an operational meaning in horse betting with side information (see Theorem 10 below). Before discussing the measure of dependence associated with D l α (·), we establish the following alternative characterization of D l α (·): Proposition 4. Let P X be a PMF, and let P Y|X and Q Y|X be conditional PMFs. Then, for all α ∈ [0, ∞], Proof. We first treat the case α ∈ (0, 1) ∪ (1, ∞). Some algebra reveals that, for every PMF Q X , where the PMF Q * (α) X is defined as The right-hand side (RHS) of (82) is thus equal to the minimum over Q X of the RHS of (83). Since (Proposition 1 (a)), this minimum is equal to the second term on the RHS of (83), which, by (78), equals D l α (P Y|X Q Y|X |P X ). For α = 1 and α = ∞, (82) follows from the same argument using that, for every PMF Q X , where the PMF Q * (∞) X is defined as For where (88) follows from the definition of D 0 (P Q) in (21), and (91) follows from (79). 
Tomamichel and Hayashi [37] and Lapidoth and Pfister [3] independently introduced and studied the dependence measure (For some measure-theoretic properties of J α (X; Y), see Aishwarya-Madiman [38].) The measure J α (X; Y) can be related to the error exponents in a hypothesis testing problem where the samples are either from a known joint distribution or an unknown product distribution (see [37] (Equation (57)) and [39]). It also appears in horse betting with side information (see Theorem 11 below). Similar to I c α (X; Y) in (34) and I s α (X; Y) in (60), the measure J α (X; Y) can be expressed as a minimization involving the new conditional Rényi divergence: Proposition 5. Let P XY be a joint PMF. Denote its marginal PMFs by P X and P Y and its conditional PMFs by P Y|X and P X|Y , so P XY = P X P Y|X = P Y P X|Y . Then, for all α ∈ [0, ∞], where (95) follows from Proposition 4, and (96) follows from (92). Swapping the roles of X and Y establishes (94): where (97) follows from Proposition 4, and (98) follows from (92). The rest of the section presents some properties of D l α (·). Proposition 6. Let P X be a PMF, and let P Y|X and Q Y|X be conditional PMFs. Then, Proof. We prove these properties as follows: The nonnegativity of D l α (·) now follows from the nonnegativity of the Rényi divergence (Proposition 1 (a)). If P Y|X=x = Q Y|X=x for all x ∈ supp(P X ) , then P X P Y|X = P X Q Y|X . Hence, using Q X = P X on the RHS of (99), D l α (P Y|X Q Y|X |P X ) equals zero. Conversely, if α ∈ (0, ∞] and D l α (·) = 0, then P X P Y|X = Q X Q Y|X for some Q X by Proposition 1 (a), which implies This follows from the definitions in (77) and (79)-(81) and the conventions in (20). is continuous because it is, by its definition in (77), a composition of continuous functions. The continuity at α = 1 follows from a careful application of L'Hôpital's rule. We conclude with the continuity at α = ∞. Observe that where (109) follows from the definition in (77), and (111) follows from the continuity of the Rényi divergence (Proposition 1 (c)) and the definition of and because the pointwise minimum preserves the monotonicity, the mapping By Proposition 4, By the nonnegativity of the Rényi divergence (Proposition 1 (a)), the RHS of (113) is nonnegative for α ∈ (0, 1] and nonpositive for α ∈ (1, ∞). Hence, it suffices to show separately that the mapping α → 1−α α D l α (P Y|X Q Y|X |P X ) is nonincreasing on (0, 1] and on (1, ∞). This is indeed the case: the mapping α → 1−α α D α (P X P Y|X Q X Q Y|X ) on the RHS of (113) is nonincreasing on (0, ∞) (Proposition 1 (e)), and the monotonicity is preserved by the pointwise minimum and maximum, respectively. (Proposition 1 (f)) and because the pointwise minimum preserves the concavity, the mapping This follows from Proposition 1 (g) in the same way that Part (f) followed from Proposition 1 (f). We next consider data-processing inequalities for D l α (·). We distinguish between processing Y and processing X. The data-processing inequality for processing Y follows from the data-processing inequality for the (unconditional) Rényi divergence: Theorem 5. Let P X be a PMF, and let P Y|X and Q Y|X be conditional PMFs. For a conditional PMF A Y |XY , define Then, for all α ∈ [0, ∞], Proof. We prove (117) for α ∈ (0, 1) ∪ (1, ∞); the claim will then extend to α ∈ [0, ∞] by the continuity of D l α (·) in α (Proposition 6 (c)). 
For every x ∈ supp(P X ), we can apply Proposition 1 (h) with the substitution of A Y |Y,X=x for A Y |Y to obtain For α ∈ (0, 1) ∪ (1, ∞), (117) now follows from (77) and (118). Processing X is different. Consider first Q Y|X that does not depend on X. Then, writing Q Y|X = Q Y , we have the following result (which, as shown in Example 2 below, does not extend to general Q Y|X ): Theorem 6. Let P X and Q Y be PMFs, and let P Y|X be a conditional PMF. For a conditional PMF B X |X , define the PMFs Then, for all α ∈ [0, ∞], Once we provide the operational meaning of D l α (·) in horse betting with side information (Theorem 10 below), Theorem 6 will become very intuitive: it expresses the fact that preprocessing the side information cannot increase the gambler's utility; see Remark 8. Note that P X and P Y|X in Theorem 6 can be obtained from the following marginalization: Proof of Theorem 6. We show (122) for α ∈ (0, 1) ∪ (1, ∞); the claim will then extend to α ∈ [0, ∞] by the continuity of D l α (·) in α (Proposition 6 (c)). Consider first α ∈ (1, ∞). Then, (122) holds because where (124) (127) is reversed [16] (III 2.4 Theorem 9). Because now α−1 α < 0, (122) continues to hold for α ∈ (0, 1). As a special case of Theorem 6, we obtain the following relation between the conditional and the unconditional Rényi divergence: Corollary 3. Let P X and Q Y be PMFs, and let P Y|X be a conditional PMF. Define the marginal PMF Then, for all α ∈ [0, ∞], Proof. This follows from Theorem 6 in the same way that Corollary 1 followed from Theorem 2. Consider next Q Y|X that does depend on X. It turns out that Corollary 3, and hence Theorem 6, cannot be extended to this setting: Define the PMFs P X , P Y|X , and Q Y|X as Then, for α = 0.5 and for α = 2, where the PMFs P Y and Q Y are given by Proof. Numerically, D 0.5 (P Y Q Y ) ≈ 1.11 bits, which is larger than D l 0.5 (P Y|X Q Y|X |P X ) ≈ 0.93 bits. Similarly, D 2 (P Y Q Y ) ≈ 2.95 bits, which is larger than D l 2 (P Y|X Q Y|X |P X ) ≈ 2.75 bits. Relation to Arimoto's Measures Before discussing Arimoto's measures, we first recall the definition of the Rényi entropy. The Rényi entropy of order α [7] is defined for all positive α's other than one as For α being zero, one, or infinity, we define by continuous extension of (141) where H(X) denotes Shannon's entropy. The Rényi entropy can be related to the Rényi divergence as follows: where U X denotes the uniform distribution over X . Relations Between the Conditional Rényi Divergences and the Rényi Dependence Measures In this section, we first establish the greater-or-equal-than order between the conditional Rényi divergences, where the order depends on whether α ∈ [0, 1] or α ∈ [1, ∞]. We then show that this implies the same order between the dependence measures derived from the conditional Rényi divergences. Finally, we remark that many of the dependence measures coincide when they are maximized over all PMFs P X . Proposition 7. For all Proof. This holds because where (157) follows from Proposition 4, and (159) follows from the definition of D s α (·) in (54). Despite I c α (X; Y), I s α (X; Y), I a α (X; Y), and J α (X; Y) being different measures, they often coincide when maximized over all PMFs P X : Theorem 8. For every conditional PMF P Y|X and every α ∈ (0, 1) ∪ (1, ∞), In addition, for every conditional PMF P Y|X and every α ∈ [ 1 2 , 1) ∪ (1, ∞), For α ∈ (0, 1 2 ), the situation is different: there exists a conditional PMF P Y|X such that, for every α ∈ (0, 1 2 ), Proof. 
Equation (174) For α ∈ [ 1 2 , 1), (178) holds because The sets of all PMFs over X and over Y are convex and compact; the function f is jointly continuous in the pair (Q Y , P X ) because it is a composition of continuous functions; for every Q Y ∈ P (Y ), the function f is linear and hence convex in P X ; and it only remains to show that the function f is concave in Q Y for every P X ∈ P (X ). Indeed, for every λ, λ ∈ [0, 1] with λ + λ = 1, every Q Y , Q Y ∈ P (Y ), and every P X ∈ P (X ), where (193) follows from the reverse Minkowski inequality [16] (III 2.4 Theorem 9) because α ∈ [ 1 2 , 1); and (195) holds because the function z → z (1−α)/α is concave for α ∈ [ 1 2 , 1). The justification of (185) is very similar to that of (181); here, we apply the minimax theorem to the function g : P (Y ) × P (X ) → R, Compared to the justification of (181), the only essential difference lies in showing that the function g is concave in Q Y for every P X ∈ P (X ): here, this follows easily from the concavity of the function z → z 1−α for α ∈ [ 1 2 , 1). We conclude the proof by establishing (177). Let X = Y = {0, 1}, and let the conditional PMF P Y|X be given by P Y|X (y|x) = 1{y = x}. (This corresponds to a binary noiseless channel.) Then, denoting by U X the uniform distribution over X , where (199) follows from (61). On the other hand, for every α ∈ (0, 1 2 ) and every PMF P X , where (200) follows from [3] (Lemma 11); (201) follows from (144); and (202) holds because α ∈ (0, 1 2 ). Inequality (177) now follows from (199) and (202). Horse Betting In this section, we analyze horse betting with a gambler investing all her money. Recall from the introduction that the winning horse X is distributed according to the PMF p, where we assume p(x) > 0 for all x ∈ X ; that the odds offered by the bookmaker are denoted by o : X → (0, ∞); that the fraction of her wealth that the gambler bets on Horse x ∈ X is denoted b(x) ≥ 0; that the wealth relative is the random variable S b(X) o(X); and that we seek betting strategies that maximize the utility function Because the gambler invests all her money, b is a PMF. As in [47] . Using these definitions, the utility function U β can be decomposed as follows: Theorem 9. Let β ∈ (−∞, 1), and let b be a PMF. Then, where the PMF g (β) is given by (207) Thus, choosing b = g (β) uniquely maximizes U β among all PMFs b. The three terms in (206) can be interpreted as follows: 1. The first term, log c, depends only on the odds and is related to the fairness of the odds. The odds are called subfair if c < 1, fair if c = 1, and superfair if c > 1. 2. The second term, D 1/(1−β) (p r), is related to the bookmaker's estimate of the winning probabilities. It is zero if and only if the odds are inversely proportional to the winning probabilities. 3. The third term, −D 1−β (g (β) b), is related to the gambler's estimate of the winning probabilities. It is zero if and only if b is equal to g (β) . Remark 4. For β = 0, (206) reduces to the following decomposition of the doubling rate E[log S]: (This decomposition appeared previously in [47] (Section 10.3).) Equation (208) implies that the doubling rate is maximized by proportional gambling, i.e., that E[log S] is maximized if and only if b is equal to p. Remark 5. Considering the limits β → −∞ and β ↑ 1, the PMF g (β) satisfies, for every x ∈ X , where the set S is defined as S ] . 
It follows from Proposition 8 below that the RHS of (209) is the unique maximizer of lim β→−∞ U β ; and it follows from the proof of Proposition 9 below that the RHS of (210) is a maximizer (not necessarily unique) of U 1 . ≤ log c. Observe that if b(x) = c/o(x) for all x ∈ X , then S = c with probability one, i.e., S does not depend on the winning horse. Proof of Proposition 8. Equation (219) holds because where (222) holds because, in the limit as β tends to −∞, the power mean tends to the minimum (since p is a PMF with p(x) > 0 for all x ∈ X [15] (Chapter 8)). We show (220) by contradiction. Assume that there exists a PMF b that does not satisfy (220), thus for all x ∈ X . Then, where (224) holds because b is a PMF; (225) follows from (223); and (226) follows from the definition of c in (204). Because 1 > 1 is impossible, such a b cannot exist, which establishes (220). It is not difficult to see that (220) holds with equality if b(x) = c/o(x) for all x ∈ X . We therefore focus on establishing that if (220) holds with equality, then b(x) = c/o(x) for all x ∈ X . Observe first that, if (220) holds with equality, then, for all x ∈ X , We now claim that (227) holds with equality for all x ∈ X . Indeed, if this were not the case, then there would exist an x ∈ X for which b(x ) o(x ) > c, thus (224)-(226) would hold, which would lead to a contradiction. Hence, if (220) holds with equality, then b(x) = c/o(x) for all x ∈ X . Proposition 9. Let β ≥ 1, and let b be a PMF. Then, Equality in (228) can be achieved by choosing b(x) = 1{x = x } for some x ∈ X satisfying Remark 7. Proposition 9 implies that if β ≥ 1, then it is optimal to bet on a single horse. Unless |X | = 1, this is not the case when β < 1: When β < 1, an optimal betting strategy requires placing a bet on every horse. This follows from Theorem 9 and our assumption that p(x) and o(x) are all positive. Proposition 10. Let b be a PMF. Then, Equality in (236) can be achieved by choosing b(x) = 1{x = x } for some x ∈ X satisfying Proof. Equation (235) holds because where (239) holds because in the limit as β tends to +∞, the power mean tends to the maximum (since p is a PMF with p(x) > 0 for all x ∈ X [15] (Chapter 8)). Inequality (236) holds because b(x) ≤ 1 for all x ∈ X . It is not difficult to see that (236) holds with equality if b(x) = 1{x = x } for some x ∈ X satisfying (237). Horse Betting with Side Information In this section, we study the horse betting problem where the gambler observes some side information Y before placing her bets. This setting leads to the conditional Rényi divergence D l α (·) discussed in Section 5 (see Theorem 10). In addition, it provides a new operational meaning to the dependence measure J α (X; Y) (see Theorem 11). We adapt our notation as follows: The joint PMF of X and Y is denoted p XY . (Recall that X denotes the winning horse.) We drop the assumption that the winning probabilities p(x) are positive, but we assume that p(y) > 0 for all y ∈ Y. We continue to assume that the gambler invests all her wealth, so a betting strategy is now a conditional PMF b X|Y , and the wealth relative S is As in Section 8, define the constant The following decomposition of the utility function U β parallels that of Theorem 9: Theorem 10. Let β ∈ (−∞, 1). Then, where the conditional PMF g (β) X|Y and the PMF g X|Y (x|y) Thus, choosing b X|Y = g (β) X|Y uniquely maximizes U β among all conditional PMFs b X|Y . Remark 8. 
It follows from Theorem 10 that, if the gambler gambles optimally, then, for β ∈ (−∞, 1), Operationally, it is clear that preprocessing the side information cannot increase the gambler's utility, i.e., that, for every conditional PMF p Y |Y , where p X|Y and p Y are derived from the joint PMF p XYY given by p XYY (x, y, y ) = p Y (y) p X|Y (x|y) p Y |Y (y |y). This provides the intuition for Theorem 6, where (254) is shown directly. The extreme case is when the preprocessing maps the side information to a constant and hence leads to the case where the side information is absent. In this case, Y is deterministic and p X|Y equals p X . Theorem 9 and Theorem 10 then lead to the following relation between the conditional and unconditional Rényi divergence: where the marginal PMF p X is given by This motivates Corollary 3, where (256) is derived from (254). The last result of this section provides a new operational meaning to the Lapidoth-Pfister mutual information J α (X; Y): assuming that β ∈ (−∞, 1) and that the gambler knows the winning probabilities, J 1/(1−β) (X; Y) measures how much the side information that is available to the gambler but not the bookmaker increases the gambler's smallest guaranteed utility for a fixed level of fairness c. To see this, consider first the setting without side information. By Theorem 9, the gambler chooses b = g (β) to maximize her utility, where g (β) is defined in (207). Then, using the nonnegativity of the Rényi divergence (Proposition 1 (a)), the following lower bound on the gambler's utility follows from (206): We call the RHS of (258) the smallest guaranteed utility for a fixed level of fairness c because (258) holds with equality if the bookmaker chooses the odds inversely proportional to the winning probabilities. Comparing (258) with (259) below, we see that the difference due to the side information is J 1/(1−β) (X; Y). Note that J 1/(1−β) (X; Y) is typically not the difference between the utility with and without side information; this is because the odds for which (258) and (259) hold with equality are typically not the same. Proof. For this choice of b X|Y , (259) holds because where (260) follows from Theorem 10, and (262) follows from Proposition 5. Fix now c > 0, letr * X achieve the minimum on the RHS of (261), and choose the odds Then, (261) holds with equality because r X =r * X by (241) and (242). Horse Betting with Part of the Money In this section, we treat the possibility that the gambler does not invest all her wealth. We restrict ourselves to the setting without side information and to β ∈ (−∞, 0) ∪ (0, 1). (For the case β = 0, see [47] (Section 10.5).) We assume that p(x) > 0 and o(x) > 0 for all x ∈ X . Denote by b(0) the fraction of her wealth that the gambler does not use for betting. (We assume 0 / ∈ X .) Then, b : X ∪ {0} → [0, 1] is a PMF, and the wealth relative S is the random variable As in Section 8, define the constant We treat the cases c < 1 and c ≥ 1 separately, starting with the latter. If c ≥ 1, then it is optimal to invest all the money: Proposition 11. Assume c ≥ 1, let β ∈ R, and let b be a PMF on X ∪ {0} with utility U β . Then, there exists a PMF b on X ∪ {0} with b (0) = 0 and utility U β ≥ U β . On the other hand, if β < 1 and the odds are subfair, i.e., if c < 1, then Claim (c) of the following theorem shows that investing all the money is not optimal: Theorem 12. Assume c < 1, let β ∈ (−∞, 0) ∪ (0, 1), and let b * be a PMF on X ∪ {0} that maximizes U β among all PMFs b. 
Defining the following claims hold: (a) Both the numerator and denominator on the RHS of (270) are positive, so Γ is well-defined and positive. . ., the set S thus has a special structure: it is either empty or equal to {x 1 , x 2 , . . . , x k } for some integer k. To maximize U β , the following procedure can be used: for every S with the above structure, compute the corresponding b according to (270)-(273); and from these b's, take one that maximizes U β . This procedure leads to an optimal solution: an optimal solution b * exists because we are optimizing a continuous function over a compact set, and b * corresponds to a set S that will be considered by the procedure. Having established that, for all β ∈ (−∞, 0) ∪ (0, 1), a strategy b is optimal if and only if (274) and (275) hold, we next continue with the proof. Let β ∈ (−∞, 0) ∪ (0, 1), and let b * be a PMF on X ∪ {0} that maximizes U β . By the above discussion, (274) and (275) are satisfied by b * for some µ ∈ R. The LHS of (274) is positive, so µ > 0. We now show that for all To this end, fix and the RHS of (282) is equal to the RHS of (281) because, being equal to b * (x), it is positive. If b * (x) = 0, then (275) implies so the RHS of (281) is zero and (281) hence holds. Universal Betting for IID Races In this section, we present a universal gambling strategy for IID races that requires neither knowledge of the winning probabilities nor of the parameter β of the utility function and yet asymptotically maximizes the utility function for all PMFs p and all β ∈ R. Consider n consecutive horse races, where the winning horse in the ith race is denoted X i for i ∈ {1, . . . , n}. We assume that X 1 , . . . , X n are IID according to the PMF p, where p(x) > 0 for all x ∈ X . In every race, the bookmaker offers the same odds o : X → (0, ∞), and the gambler spends all her wealth placing bets on the horses. The gambler plays race-after-race, i.e., before placing bets for a race, she is revealed the winning horse of the previous race and receives the money from the bookmaker. Her betting strategy is hence a sequence of conditional PMFs b X 1 , b X 2 |X 1 , b X 3 |X 1 X 2 , . . . , b X n |X 1 X 2 ···X n−1 . The wealth relative is the random variable S n n ∏ i=1 b(X i |X 1 , . . . , X i−1 ) o(X i ). We seek betting strategies that maximize the utility function We first establish that to maximize U β,n for a fixed β ∈ R, it suffices to use the same betting strategy in every race; see Theorem 13. We then show that the individual-sequence-universal strategy by Cover-Ordentlich [48] allows to asymptotically achieve the same normalized utility without knowing p or β (see Theorem 14). For a fixed β ∈ R, let the PMF b * be a betting strategy that maximizes the single-race utility U β discussed in Section 8, and denote by U * β the utility associated with b * . Using the same betting strategy b * over n races leads to the utility U β,n , and it follows from (295) and (296) that U β,n = nU * β . Appendix F. Proof of Theorem 3 Observe that, for all x ∈ X and all y ∈ Y , P X (x ) P Y |X (y |x ) = ∑ x,y P X (x) P Y|X (y|x)1{x = x} A Y |XY (y |x, y), Hence, (68) follows from (54) and D α (P X P Y |X P X Q Y |X ) ≤ D α (P X P Y|X P X Q Y|X ), which follows from the data-processing inequality for the Rényi divergence by substituting 1 X =X A Y |XY for A X Y |XY in Proposition 1 (h). Appendix G. 
Proof of Theorem 4. Observe that, for all $x' \in \mathcal{X}'$ and all $y' \in \mathcal{Y}$,
$$P_{X'}(x')\, P_{Y|X'}(y'|x') \;=\; \sum_{x,y} P_X(x)\, P_{Y|X}(y|x)\, B_{X'|X}(x'|x)\, \mathbf{1}\{y' = y\}. \tag{A32}$$
Hence, (73) follows from (54) and $D_\alpha(P_{X'} P_{Y|X'} \,\|\, P_{X'} Q_{Y|X'}) \le D_\alpha(P_X P_{Y|X} \,\|\, P_X Q_{Y|X})$, which follows from the data-processing inequality for the Rényi divergence by substituting $B_{X'|X} \mathbf{1}\{Y' = Y\}$ for $A_{X'Y'|XY}$ in Proposition 1 (h).
A Conceptual Foundation for Blockchain Development: The Contribution of Ibn Khaldun Blockchain is a game-changing technology that has the ability to solve plenty of real-world issues in the digital age. Blockchain is a subject of huge interest in many industries and academia in terms of discovering technology and classifying challenges and innovative practical application for the industry. This study addresses the challenges that are of main concern in designing a Blockchain platform. In this regard, the problems such as privacy, regulation, security, lack of adequate skill sets, energy consumption, inefficient technology design, the criminal connection, scalability, energy consumption, and public view are discovered to be important. Due to such challenges, the blockchain technologies have emitted a negative impression due to its incapability to be successfully applied while, at the same time, its benefits could not be fully gained by its investors. The objective of this study, hence, is to assess the blockchain advantages and growth in light of the eight foundations for economic development as advocated by Ibn Khaldun. Expending Ibn Khaldun’s philosophy, each challenge is deliberated and investigated to find the answers and solutions for addressing and overcoming the afore-mentioned challenges. INTRODUCTION The Fourth Industrial Revolution or Industry 4.0 is a technological transformation that is changing how people work and interact in general. It will have an impact on the regulation and governance of all technology-based activities and transactions (Mahmood and Mubarik, 2020). The excitement for blockchain is that the potential benefits of blockchain extend beyond economics to the political, humanitarian, social, and scientific domains, and that the technological capacity of Blockchain is already being harnessed by specific groups to solve real-world problems (Sakız and Gencer, 2019). The goal of blockchain technology is to create an open, universally accessible decentralised ledger that can be used to establish confidence in an insecure environment without relying on a third party. The ledger holds the blockchain shared and agreed-upon state, as well as an unchangeable list of all previous transactions. It can also be used in conjunction with other technologies, such as encryption, business rules, and identity management, to make technology more suitable for the difficulties at hand (Subramanian, Chaudhuri et al., 2020). The term "blockchain" belongs to a collection of technologies that includes the blockchain structured data, public-key cryptography, distributed ledgers, and consensus processes. A blockchain is made up of blocks, each of which contains data (something of value), its hash value (a unique cryptographic value combining characters and numbers generated by a sophisticated computer algorithm), and other information (Rabbani, Khan et al., 2020). In this regard, a well-designed blockchain, as a distributed, tamper-proof ledger, not only eliminates intermediaries, reduces costs, and increases speed and reach, but also provides more transparency and traceability for many business operations. By 2030, it is estimated that blockchain would be able to generate more than $3 trillion in yearly corporate value (Chang, Iakovou et al., 2020). Furthermore, Blockchain can also be classified as public, private or hybrid variants, depending on their application (Sultan, Ruhi et al. 2018). 
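As a toy illustration of the block structure just described (data, the previous block's hash, and the block's own hash chained together), here is a minimal Python sketch; it is a simplified teaching example using SHA-256 from the standard library, not the design of any particular blockchain platform.

```python
# Toy illustration of a hash-linked chain of blocks: each block stores data,
# the hash of the previous block, and its own hash, so tampering with an
# earlier block breaks every later link.
import hashlib, json, time

def block_hash(block):
    payload = json.dumps({k: block[k] for k in ("index", "timestamp", "data", "prev_hash")},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def new_block(chain, data):
    block = {"index": len(chain),
             "timestamp": time.time(),
             "data": data,
             "prev_hash": chain[-1]["hash"] if chain else "0" * 64}
    block["hash"] = block_hash(block)
    chain.append(block)
    return block

def is_valid(chain):
    return all(b["hash"] == block_hash(b) and
               (i == 0 or b["prev_hash"] == chain[i - 1]["hash"])
               for i, b in enumerate(chain))

chain = []
new_block(chain, "genesis")
new_block(chain, "A pays B 10 units")
print(is_valid(chain))            # True
chain[1]["data"] = "A pays B 1000 units"
print(is_valid(chain))            # False: the tampered block no longer matches its hash
```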
To explain, public blockchains have no single owner; they are visible to everyone and accessible by anyone; their consensus process is open to all who intend to participate in, and they are fully decentralized. Bitcoin is a first practical example of blockchain implementation. Bitcoin is a digital currency that was first introduced in January of 2009 (Segendorf, 2014), (Böhme, Christin et al., 2015). It is based on ideas presented in a whitepaper by Satoshi Nakamoto, a mysterious and pseudonymous figure. The identity of the individual or people behind the technology is still unknown. Bitcoin promises reduced transaction fees than existing online payment methods, and it is controlled by a decentralised authority, unlike governmentissued currencies. The bitcoin system consists on a network of computers (also known as "nodes" or "miners") that execute bitcoin code and store its blockchain (Bouoiyour and Selmi, 2015). A blockchain can be viewed of as a collection of blocks metaphorically. Each block contains a set of transactions. No one can trick the system because all computers running the blockchain have the same list of blocks and transactions and can watch these fresh blocks being filled with fresh bitcoin transactions transparently (Narayanan, Bonneau et al., 2016). Bitcoin is an example of a public blockchain. Private blockchains (also called permissionless), on the other hand, use privileges to control who could read from and write to the blockchain (Pieters and Smith, 2021). Consensus algorithms and mining usually aren't required as a single entity usually has ownership and controls block creation. Hybrid (also known as a consortium), is the blockchain that is made public only to a privileged group (Benedetti, 2021). The consensus process is controlled by known, privileged servers using a set of rules agreed to by all parties. Copies of the blockchain are only distributed among entitled participants; the network is therefore only partly decentralized (Komalavalli, Saxena et al., 2020). It is also called a federated blockchain, which operates under the control of a specific group of organizations that are allowed to perform the role of full nodes (Pieters and Smith, 2021). There are many challenges in the growth of blockchain technology (Patel, Khatiwala et al., 2020). To explain, Government interferences, underdeveloped ecosystem infrastructure, unclear regulations, privacy and lack of standards, human resources, data privacy, managing storage capacity, standardisation challenges and social challenges, scalability, cost issues, long-term security, and software infrastructure are just a few of the issues raised by the report (Zakaria, Kunhibawa et al. 2018). As a result, these issues have harmed the development of blockchain technology as well as investor confidence. In this regard, a dynamic blockchain development strategy must be formed in order to promote blockchain technology and reap its benefits. Hence, this study recommends implementing Ibn Khaldun's economic expansion ideas into blockchain development methods in order to boost the benefits of the technology. METHODOLOGY In order to achieve these goals, a qualitative research methodology that includes three methodologies was used: exploratory study, observation, and doctrinal analysis (Hutchinson, 2015). 
Mainly using library and online databases, exploratory study and observation analysis were conducted in which primary and secondary data sources, primarily articles, and textbooks, were examined and analysed in blockchain challenges on Ibn khaldun economic thought, elucidating the subject of blockchain and Ibn khaldun economic thought (Mayer, 2015). Furthermore, using a doctrinal approach, content analysis was carried out by carefully studying the data from the specified sources to identify concerns that might arise in comprehending blockchain challenges and solutions Cownie and Bradney, (2013). The study is divided into five sections. Following the introduction, second section two is on understanding blockchain literature. This section reviews blockchain, its nature, possible implementation, and its inherent risks. The section also explains the literature on Ibn Khaldun economic thought. The third section identifies and examines the challenges that blockchain technology will face. The fourth section explains the idea Ibn Khaldun economic idea, whereas the fifth section offers some solutions for blockchain challenges and direction for future research. The sixth section concludes the paper. Literature Review The review of the literature here would be focusing on two points, namely blockchain adoption challenges and Ibn Khaldun economic development model, in which firstly, the blockchain challenges' literature review is presented below: A study by Zhang, Zargham et al., (2020) have assessed Blockchain networks have grown in popularity as a means of creating cryptocurrencies and decentralised economies based on peer-to-peer protocols. On the other hand, the complexity of the dynamics and feedback processes within these economic networks has made reasoning about their growth and evolution difficult. Therefore, adequate mathematical frameworks are required to model and understand the behaviour of blockchain-enabled networks. In this regard, a model of a generic token economy is created to demonstrate our concept, in which miners supply a commodity service to a platform in return for a cryptocurrency, and users consume services from the platform. We simulate and test two different block reward systems to demonstrate the dynamics of token economies. As well, Berg, Davidson et al., (2019) have evaluated the uses of transaction cost economic paradigm developed by Ronald Coase and Oliver Williamson to explain why. Nonhierarchical commercial organisations attempted to avoid contractual complications in the face of potentially opportunistic behaviour. Trust technologies are an institutional instrument for preventing opportunism on the margins of trust. The study claims that blockchain technology may be used to coordinate economic activities and implement the electronic marketplace hypothesis. Another study by Aoyagi and Adachi, (2018) described the smart contract in the blockchain protocol mitigates uncertainty in an economy with asymmetric information. Because the blockchain, as a novel trading platform, causes market segmentation and differentiation of agents on both the sell and purchase sides of the market, it reconfigures asymmetric information and generates asset price and quality spreads between itself and existing platforms. We show that the smart contract's marginal innovation and sophistication have non-monotonic impact on the trading value. 
Whilst, Mazlan, Daud et al., (2020) have highlighted a number of initiatives aimed at maximising the use of blockchain-based technologies in the healthcare system. Not only that, but this research has also identified many workflows for improved data management that have been developed inside the healthcare ecosystem using blockchain technology. In this way, the Ethereum blockchain platform has been used to design and implement a variety of medical workflows, including sophisticated medical processes like surgery and clinical trials. Many parties in the medical system would benefit from this, since it would allow them to better cater to healthcare facilities while also reducing costs. Besides, Kolb, AbdelBaky et al. (2020) have described the extensive examination of Ethereum and its smart contracts, and explained the core blockchain ideas that are driven by Bitcoin as a case study. The capabilities and limitations of blockchains are then contrasted to older distributed architectures. Following that, they began a high-level investigation into four key blockchain challenges: improving consensus effectiveness, making blockchains more scalable, ensuring strong privacy in transactions, and confirming the security of smart contracts without losing the blockchain essential components. Similarly, Schuetz and Venkatesh (2020) have also emphasised financial inclusion, adoption, and blockchain in India, arguing that the four obstacles of geographical access, high cost, inadequate banking products, and financial illiteracy must be overcome in order to resolve financial exclusion. In this regard, they have also said that blockchain technologies have the potential to overcome the majority of these issues. To add, another study by Ningrat (2018) has portrayed poverty elimination as the ultimate aim of the United Nations' Sustainable Development Goal (SDG 2030), which was endorsed by 193 countries. Meanwhile, there is a $2.5 trillion funding shortfall for the Sustainable Development Goals, and Zakat and Waqf are seen as a good solution in Islamic finance to close this gap, despite significant obstacles in their collection, management, and distribution. In this sense, Blockchain technology may be able to shed light on the issue because it can trace where your transactions are going, when they arrive, and where they are being used. Furthermore, (Zakaria, Kunhibawa et al. 2018) have looked into the benefits and drawbacks of using blockchain technology. This study discovered that this technology has the potential to improve political, social, and economic efficiency and transparency. The study, on the other hand, discovered several hurdles in terms of cost and IT issues. Additionally, this study mentions incidents such as criminal trade, extortion, money laundering, terrorist financing, online gambling, and get-rich-quick schemes, all of which are linked to blockchain-based Cryptocurrencies. On the same note, Siyal, Junejo et al. (2019) have also talked about how blockchain technology can be used in a variety of ways. However, some technological issues such as scalability, privacy leakage, and selfish mining continue to exist. As such, one approach by Herian (2018) has advocated that blockchains be viewed as the next step in the ongoing mass socioeconomic digitalization, in which blockchain-based applications might enhance the Internet stack's capabilities while refocusing the energies of existing network technologies and electronic working methods. 
The study also identified several barriers, including blockchain cost, speed, scalability, and long-term security, as well as the environmental and climatic costs of the massive growth in computing power needed for blockchain-based applications to operate at industrial scale (much like many traditional means of production). By contrast, Maghdeed (2019) presented the idea of the Sukuk chain, a framework for running a programmable security token or smart contract within a network pool maintained by blockchain communities such as Ethereum. Smart contracts have improved significantly in the current era of blockchain development, but blockchain still has several technical limits that need to be handled thoroughly, such as transaction speed, the integration of modules with various business cases, security, and consistent standards across countries. Relatedly, Rabbani, Khan et al. (2020) described smart contracts, cloud storage, digital currencies, Zakat collection, greater Waqf utility, efficient Halal supply chains, cryptocurrency remittance, Takaful, Smart Sukuk, and other blockchain-based applications, while noting existing and unavoidable barriers: government involvement, underdeveloped ecosystem infrastructure, unknown rules, security, privacy, a lack of standards, human-resource and cost constraints, immature middleware and tools, and scalability. In particular, asiablockchainreview (2019) emphasised the Halal food market, where blockchain allows users to track the origins, traceability, and quality control of food throughout the whole supply chain; consumers can verify food quality simply by scanning a QR code with a mobile application. The halal industry's market value is anticipated to reach $2.6 trillion by 2023, and the same technology might be used to track legal and tax operations. In this way, rather than focusing solely on cryptocurrency, the study presents a broader view of blockchain technology's uses. Elaborating on these factors, Iredale (2020) highlighted the challenges that are slowing down blockchain implementation: inefficient technological design, the criminal connection, scalability, energy consumption, privacy, regulation, security, lack of adequate skill sets, slow transaction processing, and public perception.

Inefficient Technological Design

While the benefits of blockchain technology are undeniable, the system's shortcomings in many technological respects must be addressed (Pedersen, Risius et al., 2019). Code faults are an important factor here. Bitcoin, as many people are aware, is on the cutting edge in this sense, yet the overall system suffers from inefficient design; and although the Ethereum blockchain is working hard to address Bitcoin's flaws, the effort is still insufficient.

The Criminal Connection

The distinguishing feature of blockchain technology, namely anonymity, has drawn not only experts but also criminals. Criminals use cryptocurrency to buy illegal equipment and, beyond that, demand cryptocurrency as payment. The only way to deal with this negative link is to prohibit illicit connections and build high-quality blockchain implementations (Li, Jiang et al., 2020).
Scalability

Blockchains, as a distributed ledger technology, are thought to work very well for a small number of users, but what happens under mass adoption? Ethereum and Bitcoin currently have the largest numbers of users on their networks, and both are having difficulty handling the load. These weaknesses must be addressed promptly, as they are slowing the entire system down (Mazlan, Daud et al., 2020).

Energy Consumption

Another barrier to blockchain adoption is energy consumption. Nearly all other blockchain technologies have adopted Bitcoin's structure and use Proof of Work as their consensus mechanism. Mining requires using a computer to solve difficult equations, so once mining begins, the machine consumes more and more electricity. Other consensus approaches could be used instead to validate transactions (Sedlmeir, Buhl et al., 2020).

Privacy

Blockchain technology does not handle privacy concerns well: because the system is built on a public ledger, complete privacy is not achievable. In the case of cryptocurrencies this is a vital requirement, and the situation has created potential disputes between governments and businesses, which for a variety of reasons must always protect and restrict access to their records (Mohanta, Jena et al., 2019).

Regulation

Many corporations have begun to use blockchain technology for transactions, and certain commodities have become reliant on it. However, there are currently no explicit regulations governing the use of blockchain technology, so consumers are left in the dark about the applicable rules. To overcome this, governments and key industries should begin enacting legislation expressly for blockchain technology (Pashkov and Soloviov, 2019).

Security

Security is one of the most crucial factors in the adoption of blockchain technology. Each blockchain has its own level of security, but, much like any other technology, blockchain has security weaknesses to contend with. One of them is the 51 percent attack on the network, in which attackers can seize control of the network, alter the transaction process, and prevent others from forming blocks. Additional security is required at the protocol layer to address these concerns (Nabben, 2021).

Lack of Adequate Skill Sets

Qualified individuals are needed to manage blockchain technology, in addition to software and hardware, since it is still a relatively new and rapidly evolving technology of the twenty-first century. Only a small number of people currently have the expertise required to maintain it. Like any other technological innovation, blockchain will undoubtedly continue to grow; disagreements on this subject are natural, but they are not impediments to its advancement (Kaal, 2021).

Blockchains Can Be Slow

Blockchain technology is undeniably complicated, and transactions take longer to complete; the system's encryption makes it slower still. Although blockchains have been claimed to be faster than traditional payment systems, that claim is currently debatable.
In this sense, completing a transaction could take several hours, causing individuals to become anxious, even if they merely wish to pay for a cup of tea. This explains why it is critical for technology providers to make it possible for people to conduct large transactions without being constrained by time constraints. Of course, there is a level of danger involved. Nevertheless, despite increasing equity, it is still vital to eradicate the 'unsecured' types of blockchains. In this context, financial institutions such as banks have gained significant profits through intermediaries, resulting in lower costs than traditional methods. As a result, if blockchain can take over and solve this problem, the expenses will undoubtedly be substantially cheaper (Fitzi, Gaži et al., 2020). Public Perception The general public is still uninformed of the existence of blockchain technology and its potential applications. In this environment, blockchain adoption is only conceivable if it gains public acceptance. Even while technology has a good track record, it is still insufficient to attract additional buyers. The difference between bitcoins, other cryptocurrencies, and blockchain should be understood by all members of the community. This is critical in order to eliminate Bitcoin's negative consequences and keep the technology alive. As a result, the deployment of blockchain technology should be accompanied by a greater willingness on the part of the community (Fridgen, Guggenmoos et al., 2018). Importantly, this study deliberated the top ten major blockchain challenges, because these challengers are most serious issues that hinder the blockchain development and implementation. It is fact that, there is many other challenges which affects the blockchain application and growth (Upadhyay, 2020). (Iredale, 2020) The next literature would be addressing Ibn Khaldun's Economic Thought on Development, as followed: A study by Abdolhamid and Esmaeili (2020), have considered, the ten modules of Sustainable Development-Asabiyyah, which focus on nature, good governance, citizenship rights, rationality, population growth rate, poverty reduction and welfare generation, scientific growth, and justice, were extracted from Ibn Khaldun's perspective. The author ignores Ibn Khaldun's eight smart economic growth models, which are a central purpose of this research. In recent study by Zakaria (2020), it is discovered that Khaldunian economic opinions have paid special attention to observing the essential features of three thoughts of kasb maʻāsh and jibāyah (Kasb-maʻāsh) motivation is a self-evident move of work to earning money, whereas jibāyah concept is expressing about collecting custom and advising how a nation can economically develop). as to how these three objects, if established and refined in an equally unique structure, would create a nation that flourishes and blossoms, while avoiding any flaws, particularly in rigging jibāyah (taxation) rules, which could destroy business production, causing the respective country to grow disastrously and vanish from the face of civilization. In particular, Listiana, Alhabshi et al. (2020) have provided Ibn Khaldun's perspective on the role of the state in socioeconomic development, in which Ibn Khaldun's stance on how to support moral principles that require a sustainable and rational condition in society is analysed and discussed. 
So far, Jarrar (2020) has advocated to link contemporary art market marketing models to Ibn Khaldun's economic viewpoint because of the significant theoretical and revolutionary role he has played in the remarkable advancement of essential economic concepts that are still applicable in understanding recent diverse marketing implementation. Another study by Gueye (2019), looked at Ibn Khaldun's successes while investigating at the building procedures of nations in North Africa during the Middle Ages. In this regard, he has evaluated the compliance of states founded in Morocco and North Africa throughout the Middle Ages to Ibn Khaldun's state concept, with a particular focus on the Marabouts. In this regard, Ibn Khaldun has specified and presented various dynamic and specific terms of the theory; as a result, we are learning terminology like Asabiyyah, irtizaq, Hadarî, bedevi, and their effects within the context of the Marabout state. Furthermore, Süngü (2019) evaluated Ibn Khaldun's perspective on judging men's capacity to bring a nation to the advanced level of humanity. Inspired by the visions of his previous Islamic philosophers on the workings of the mind, Ibn Khaldun has stated a diverse and quite unusual manner of thinking known as the smart mind, empirical, and intelligent mind in the pursuit of creation development. He claims that using Agent Intelligence to gain actual knowledge is impossible in this world. This is because human rationality has a limit and cannot achieve and acquire actual knowledge beyond logical beliefs. Notwithstanding that, Babacan and Yılmaz (2019) have emphasized that the combination of traditional and modern, which supports the validity of Turkey's social growth, can be found in Khaldunian's idea of Asabiyyah, as well as in Fukuyama, Coleman, Putnam, and Bourdieu's well-known concept of social capital. Despite the fact that modern lifestyles have atomized individuals in society, social capital has emerged as a new means of reuniting people in order to restore social harmony and collaboration. On the other hand, Barut and Duran (2019) have also clarified that Ibn Khaldun's insights are linked to a variety of topics, including general education, curriculum, and instruction, as well as teacher certification and child education. In this regard, Samuel's suggestion of a "Clash of Civilizations" has been used to investigate Ibn Khaldun's educational ideas. Evidently, Mohammad (2010) has discovered that Ibn Khaldun, as a historian and scientist, has highlighted the emergence and fall of civilization in a variety of issues. The sovereign or political authority, people, money or stock of resources, development and justice in a circular and interconnected manner, beliefs, and standards of behaviour or the Shari'ah; each influences the others and is influenced by them. In this regard, He has promoted an all-encompassing and fruitful economic concept that includes the socioeconomic environment in this regard. Besides that, Huda (2016) has divided Ibn Khaldun's economic thought into two categories: first, a sociological approach, which is a detailed description of Ibn Khaldun's observations and examinations of various ongoing economic activities in society, and second, a juridical approach, in which legal provisions are used to validate and enable economic activities to function properly. 
On a side note, according to the study Mujahidin, (2018), Ibn Khaldun was the Muslim scholar and father of the world economy, who invented many ideas in the field of economics, including such as the law of supply and demand, tax and public expenditure macroeconomics, trade cycle, agriculture, property rights and prosperity, doctrine of values, division of labour, pricing system, production, money, capital creation, and population increase. These ideas, as proposed by Ibn Khaldun, are extremely important in ensuring a nation's progress, and his social welfare judgments are truly unique. The idea of Ibn Khaldun Economic Development Ibn Khaldun, a well-known Muslim intellectual scholar, has made numerous contributions in subjects such as civilisation, politics, culture, and economics. With this in mind, the purpose of this study is to consider Ibn Khaldun's conceptual underpinning for blockchain development using expressive and evocative analytical methodologies. Ibn Khaldun's ideas in this area are undeniably important to intellectual progress, as his viewpoints had a considerable impact on the future development of blockchain (Irijanto, Shah et al. 2013). Ibn Khaldun is regarded not just as a historian and sociologist, but also as the father of economic sciences, as most of his economic ideas predate and transcend those of Adam Smith and Ricardo (Mohammad, 2010). To describe, Ibn Khaldun made significant contributions of economics, income, spending, demand and supply, the price mechanism, and development studies, necessitating Ibn Khaldun's status as a forerunner to Malthus, Khan, and Keynes (Mohammad 2010). Before, Ronald Reagan, the American President at the time, was wrongly associated with the concept of "fewer taxes and greater revenue," which happens to be the conservative ideology in the United States. Therefore, this study examines Ibn Khaldun's economic thoughts on eight conceptual underpinnings for blockchain development in order to address ten blockchain problems, as follows. Al-Mulk (Collective Entity) Sovereign Ibn Khaldun thought that all successful financial development must meet certain essential characteristics, one of which is Al-Mulk (Collective Entity/Sovereignty) (Khaldun, 2012). In this respect, Ibn Khaldun has identified that the sovereign's (Al-Mulk) strength does not develop unless Sharīʾah is implemented, and Sharīʾah cannot be implemented unless the sovereign is present (Al-Mulk) (Chapra 2008). This issue is described as a collective (state). In this regard, he has stated that the development of a state, society, or economy is dependent on the support of the people, and that people's support is dependent on their feeding (interpreting state to mean the political cooperative plus its laws, organisations, or the civilization that State has as such, or a sort of national economy managed by State and society) (Mohammad 2010). Despite the enormous capability of blockchain technology, it is feasible that one of its issues is inefficient technological design (Chang, Iakovou et al., 2020). Regardless of the many benefits of blockchain technology, it currently needs some technological foundations, such as coding errors or gaps. Although Bitcoin was the first in this regard, the entire structure reeks of wasteful design. While Ethereum has attempted to support and cover all of Bitcoin's problems, the effort has been insufficient (Berg, Davidson et al., 2019). 
For example, in the area of decentralised application development, Ethereum has allowed developers to build DApps on its platform, and many have emerged as a result. However, the majority have been found to contain faulty code and flaws that anyone can exploit to break into the system (Nabben, 2021); sound information handling and safety are consequently missing. Hence, Ibn Khaldun's concepts of the collective entity (Al-Mulk) and of people and experts (Al-Rijal) suggest that people themselves may be the solution to the inefficient-technology-design problem that blockchain development poses: if this adoption problem were solved, things would become much more convenient (Khaldun, 2012).

Sharīʾah (Rules and Regulations)

The Sharīʾah, rules, and regulations are never neglected in Ibn Khaldun's development strategy (Gule 2015). As a philosopher, Ibn Khaldun always considered the function of law and justice and their significance for the evolution of society, and his conceptions of the content of justice and law are broadly Islamic (Gule 2015). The social goal of law and justice is to develop and maintain a stable social order, which is necessary for sedentary civilisations to flourish; as a matter of religion, Islamic law exists to assure not only a stable social order but also the salvation and peaceful afterlife of believers. Both of these aspects of law and justice are present in Ibn Khaldun's thought. The most significant tool for ensuring the stability of global blockchain technology is the adoption of Sharīʾah-style rules and regulations (Laabdi, 2021). Several businesses have begun to adopt blockchain technology as a transactional tool because it offers high transactional efficiency (Ali, Ally et al., 2020). Nevertheless, there is a significant problem that should be considered: the lack of legislation governing the technology (Mohsin, 2021). The specific regulations needed for adopting blockchain simply do not exist, and this has led to many of the problems. While one of the benefits of blockchain is visibility, there is no guarantee of safety (Tatar, Gokce et al., 2020). The 51% attack on the network is one of the system's security weaknesses: in this attack, adversaries who control most of the network's computing power can take it over, alter the transaction process, and prevent others from creating blocks (a simple simulation of this attack is sketched below). This also illustrates that privacy and blockchain do not sit comfortably together (Sayeed and Marco-Gisbert, 2019). Because the system runs on a public ledger, perfect privacy is not a top priority; yet can any organisation function without privacy? The answer is simply no. Businesses that deal with sensitive information constantly need to maintain a strong sense of privacy, because their clients have placed their trust in them (Shrestha and Nam, 2019). Accordingly, if information is stored in a public ledger, it can no longer be considered private. In the case of bitcoin and other cryptocurrencies this is a crucial requirement, and it has caused some concern among governments and businesses (Zhang, Xue et al., 2019), which for a variety of reasons must constantly protect and control access to their data.
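To make the 51 percent attack mentioned above more concrete, the following rough simulation treats block creation as a race: each new block is found by the attacker with probability q (the attacker's share of total hashing power) and by honest miners otherwise, and the attacker tries to replace the last z confirmed blocks with a longer private chain. The model and the numbers are simplifying assumptions for illustration, not a description of any specific network.

```python
# Illustrative random-walk model of a 51% attack: the attacker starts z blocks
# behind and wins as soon as its private chain becomes longer than the honest one.
import random


def attacker_wins(q: float, z: int, max_steps: int = 5_000) -> bool:
    deficit = z                          # how many blocks the attacker is behind
    for _ in range(max_steps):
        if random.random() < q:
            deficit -= 1                 # attacker finds the next block
        else:
            deficit += 1                 # honest miners find the next block
        if deficit < 0:
            return True                  # attacker's chain is now the longest
    return False


def success_rate(q: float, z: int, trials: int = 1_000) -> float:
    return sum(attacker_wins(q, z) for _ in range(trials)) / trials


if __name__ == "__main__":
    for q in (0.30, 0.45, 0.51):
        print(f"attacker hash share {q:.2f}: "
              f"P(rewriting the last 6 blocks) ~ {success_rate(q, z=6):.3f}")
```

The point of the sketch is the threshold behaviour: once q exceeds one half the attacker overtakes the honest chain almost surely, whereas below one half the success probability falls off sharply with the number of confirmations.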
Thus, Ibn Khaldun's Shari'ah and law application would play a significant role in the development and stability of the blockchain market (Khaldun, 2012). Therefore, the government and other key sectors may need to adopt blockchain technology legislation. Al-Rijal (People/Nation) People have a significant part in a nation's growth, according to Ibn Khaldun, because civilization's rise and fall is dependent on their prosperity or misery (Khaldun, 2012). Ibn Khaldun stated that, people are the cause of economic germination, but only if they are motivated to engage in the economy (Chapra 2008). According to Ibn Khaldun's analysis, labour diversification and specialisation are more likely to occur in a well-regulated market where people can change and fulfil their needs; the more glorious the specialisation, the more distinguished the growth of money will be, as production and crafts will continue to thrive (Khaldun, 2012). Henceforth, today's corporate and cooperative enterprises can benefit from Ibn Khaldun's help idea. Ibn Khaldun believes that one cannot meet all of one's requirements on one's own, so one must collaborate with others in society to meet one's own and others' needs. This would also generate residual wealth, which could be used to acquire pleasures such as nice clothing (Khaldun, 2012). Besides, the sovereign would be unable to acquire power unless it came from the people (Al-Rijal), which would be achievable if wealth was accessible to the general public (Khaldun, 2012). As can be seen from the remarks above, standardisation of blockchain technology is still a difficult task. Blockchain technology is still in its early stages of development. At the moment, the number of people with the necessary abilities to support such technology is still limited, despite the high demand for such workers (Zhang and Zhou, 2020). Consequently, if one wishes to hire qualified workers, one must pay a promising and acceptable sum of money. The blockchain, like any other technological innovation, will undoubtedly advance. Despite the fact that challenges are unavoidable, they should not be regarded as roadblocks. In this regard, new law and standards must be implemented (Pashkov and Soloviov, 2019). Thus, the people's concept of Ibn Khaldun (Al-Rijal) is a significant concept in the development of blockchain for enabling individuals to learn skills or become "people of blockchain" and innovating constructing its institutions (Khaldun, 2012). Al-Adl (Justice) Another important pillar of development is fairness, which can distinguish between Islamic and non-Islamic development in a variety of areas (Gule 2015). "Within Ibn Khaldun's new science of civilization, his thoughts on law and justice form a coherent totality, integrating scientific analysis and values or a descriptive method with a normative viewpoint, both based on his Islamic beliefs." This demonstrates the significance of justice in the development of a country (Mohammad 2010). To illuminate, construction, industrial, trade, agriculture, and technology are all examples of development activities. According to a solid interpretation of Ibn Khaldun's concept, all such development operations must be fair and just for all groups of people (Khaldun and Rosenthal 1967). The blockchain technology's potential attracted not only authorities but also criminals. As a result, the network's nature is decentralised, and no one can know your specific identify. 
Because bitcoin is used as money on underground markets and the dark web, it is the primary target. This bad reputation has caused understandable concern, and many people feel they need to think twice before looking into the system at all (Kethineni and Cao, 2020); likewise, people who are aware of the risks commonly avoid any illicit connections. Criminals now use cryptocurrency to purchase illegal equipment and payment instruments, and they also accept cryptocurrency as a form of payment (Brown, 2016). This is why Ibn Khaldun's view matters here: he holds that justice is the standard (Al-Mizan) by which God will judge humanity, and that the ruler is charged with carrying it out (Khaldun, 2012). Accordingly, Ibn Khaldun's rules and regulations (Sharīʾah) together with his theory of justice (Al-Adl) can be regarded as a remedy for illicit linkages in blockchain transactions: stopping illicit or black-market transactions is the only way to solve this problem and guarantee a better blockchain application (Khaldun, 2012).

Al-Imarah (Development)

Ibn Khaldun's importance rests primarily on his account of the formation of the state, within which development is embedded (Abozeid, 2021). This account includes major factors such as: (a) the establishment of acreage (property) rights and freedom of enterprise, because a nation with weak property rights will remain poor indefinitely; (b) the rule of law and the reliability of the judicial system for the establishment of justice, because a lack of justice will result in the extinction of the human species; (c) the safety of trade routes and the keeping of the peace; (d) less bureaucracy and lower taxes to boost employment, manufacturing, and earnings (Mohammad 2010); (e) no government involvement in trade, production, or commercial concerns, no price fixing by the government, and no monopoly by anyone in the market; (f) a stable monetary policy and an independent monetary authority that does not manipulate the currency's value; (g) encouragement of a larger population and market to allow more specialisation; (h) a creative educational system that promotes independent thinking and behaviour; and (i) a shared sense of duty and inner conviction directed at the establishment of a just system that promotes good actions and discourages vice (Hasan, 2020). This was highlighted by Sandner and Schulden (2019), who also mentioned another barrier to blockchain adoption: energy usage. The majority of blockchain technologies are structured similarly to Bitcoin and use Proof of Work as their consensus process. Proof of Work, however, is not as impressive as it appears: keeping the system alive requires computational power, since mining means using a computer to solve complex equations, and once mining begins, the machine consumes more and more electricity (the short sketch below illustrates why). It has been found that 0.2 percent of miners consume the majority of the total electricity, and it was estimated that, if the trend continued, miners would require far more power than the globe could supply by 2020. Energy use has thus become one of the network's primary challenges and has prompted a slew of new initiatives (Nartey, Tchao et al., 2021). Properly designed, a blockchain might instead use alternative consensus systems to validate transactions, such as consensus algorithms that require only a small amount of energy to process. The blockchain is complex, which is why transactions take longer to complete; similarly, the structure's encryption slows it down even further (Labazova 2019).
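The energy problem described above follows directly from how Proof of Work operates: miners repeatedly hash candidate blocks until the result meets a difficulty condition, so the expected number of attempts, and hence the electricity consumed, grows exponentially as the difficulty is raised. The sketch below is deliberately simplified; the header string and difficulty values are invented for the example and do not correspond to any real network's parameters.

```python
# Illustrative Proof-of-Work loop: vary a nonce until the block hash starts with
# `difficulty` zero hex digits. Each extra required digit multiplies the expected
# number of hash attempts (and therefore the energy spent) by 16.
import hashlib


def mine(block_header: str, difficulty: int):
    target_prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_header}|{nonce}".encode()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce, digest
        nonce += 1


if __name__ == "__main__":
    for difficulty in (3, 4, 5):
        nonce, digest = mine("block#42|prev=abc123|txs=...", difficulty)
        print(f"difficulty={difficulty}: {nonce + 1} attempts, hash={digest[:16]}...")
```

Alternative mechanisms such as Proof of Stake replace this brute-force search with a selection rule based on staked value, which is why they are usually cited as the low-energy option.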
In this regard, Zhang, Xie et al. (2020) note that, although the technology is claimed to be faster than existing payment systems, it has not always been able to deliver: completing a payment may take several hours, so that even paying for a cup of coffee would cause anxiety. Carrying out large transactions should therefore not take up too much time. The same concern extends, at least in theory, to blockchain systems that are not used as a store of value, for example logging transactions or communications in an IoT setting; such networks, and even their records, can become inefficient and unusable. However, this is not a permanent state, as the system only slows down when there are too many operators on it, and it becomes slower as usage grows (Iredale, 2020). If not adequately addressed, these implementation concerns may pose a number of problems in the future, so a quick resolution should be sought. Meanwhile, newer technologies claim to be faster than older ones, critics have focused on blockchain's failures, and enthusiasm for the technology could well evaporate faster than we think. Finding a solution to these blockchain implementation issues is therefore critical. This combined problem of scalability, energy consumption, and slow blockchain systems could be addressed using Ibn Khaldun's principle of development (Al-Imarah). The four relevant pillars of Ibn Khaldun on this subject are: (1) the rule of law and the dependability of the judicial system for the development of justice, since the absence of justice brings ruin; (2) the safety of trade routes and the security of the peace; (3) promoting a larger population and market for greater specialisation; and (4) collective responsibility and inner conviction directed at establishing a just system that inspires good deeds and prevents vice. All of these can help to overcome the blockchain challenges of scalability, energy consumption, and slowness, and they are the only way to truly turn blockchain technology into a blessing once more (Khaldun, 2012).

Al-Mal (Wealth and Economic Empowerment)

Another significant pillar of Ibn Khaldun's economics is his emphasis on the workings of the market mechanism, price determination, demand, supply, market structure, a theory of wage differences, and the role of government, here considered in the context of blockchain growth (Hasan, 2020). This study therefore draws on his ideas on demand and supply in order to improve current blockchain technology. Ibn Khaldun stated that supply and demand determine the prices of products and services: when a product is scarce and in high demand, its price rises, and customers buy things when they are "cheap" and plentiful, then "sell them at a high price" when they are rare and in demand (Ali 2006). He also illustrated that market prices are determined by the interaction of supply and demand, discussed how fluctuations in supply affect prices, and considered the impact of demand on prices, the division of labour, and growth. This is what he refers to as compulsion: "People need food, and the money they spend on it is imposed upon them; they had no choice but to spend it, which is a kind of compulsion". Importantly, according to Codementor.io, developers charge two kinds of hourly prices, average and median: average hourly rates range from $81 to $100, while median hourly rates range from $61 to $80 (Codementor.io, 2021).
This indicates that blockchain is capable of assisting almost every sector and of helping us provide and store data in a smarter way. Nevertheless, what precisely can we do with blockchain, and how much will it cost? Unfortunately, there is no simple answer to this question. Because blockchain is a feature-dependent technology, the actual cost varies with the project's needs; a blockchain venture's development costs might range from $5,000 to $200,000, according to some estimates (Codementor.io, 2021). Figure 2 (Codementor.io, 2021) summarises blockchain developers' hourly charges. Blockchain is an open-source electronic ledger that allows users to construct an unchangeable transaction record, and engineers working on blockchains must create protocols, account for adversarial incentives, and test a great many assumptions. Blockchain development was in second place among the top 20 fastest-growing occupational skills in 2017, and there was a 115 percent increase between 2016 and 2017 in job postings for blockchain developers with experience in JavaScript, C++, Python, encryption, and/or machine learning (Codementor.io, 2021). On average, blockchain developers charge $81-100 per hour, and as demand for blockchain coders grows, so may their hourly rate (Codementor.io, 2021). When hiring blockchain engineers, one should consider the differences in hourly pricing for different engagement types, such as temporary, part-time, and freelance work; developers recruited for full-time employment may also charge different prices depending on whether they work on-site or fully remotely (Codementor.io, 2021). Admittedly, the need for blockchain experts remains high (Chandra, 2021). Despite the bear market and recent industry layoffs, the number of blockchain job postings has been rising, and searches for roles involving blockchain, Ethereum, Bitcoin, and cryptocurrency have increased. Startups are offering top pay packages, particularly for blockchain developers, as they compete for talent in an industry where supply is limited. As well-known firms such as IBM, Amazon, and Facebook, along with new players, develop blockchain technology and discover blockchain applications, the need for blockchain capacity has skyrocketed, and salaries in the blockchain field have risen to be among the highest in the technology industry. The data below show blockchain wages by region (Daniel, 2019). The big picture of blockchain developer salaries is as follows:
• The average base salary for a blockchain developer in Asia is $87,500 per year, with a low base salary of $60,000 and a high of $120,000.
• The average base salary for a blockchain developer in Europe is $73,300 per year, with a low base salary of $55,000 and a high of $91,000.
According to the data presented, the highest-paid positions are located in New York and San Francisco, which is also where the bulk of blockchain jobs are based. According to the World Economic Forum, blockchain will store 10% of global GDP in the coming era; by eliminating middlemen, it has the potential to save industry billions of dollars, and it might also be used to improve supply chains and track down the origins of tainted foods (Daniel, 2019). Ibn Khaldun's Al-Mal (wealth and economic empowerment), with its treatment of supply, demand, and high wages, should therefore be implemented and applied in order to support blockchain development internationally (Rusdi and Widiastuti, 2020).
In this regard, the government and businesses should invest more in blockchain development in order to build appropriate blockchain talent and ensure scalability, privacy, and security, while also being able to offer high pay comparable to those offered by IBM, Facebook, Amazon, and Standard Chartered Bank (Daniel, 2019). Consequently, to overcome the existing blockchain issues, it is critical to boost blockchain investment in order to fulfil its economic facilities using Ibn Khaldun's notions. Law enforcement institution In this regard, Ibn Khaldun reinterprets the organization's law enforcement as a model of growth. on the other hand, he does not advocate for the establishment of open law enforcement organisations; instead, he emphasises the importance of looking for social order in five ways: the ruler, the law, and the principles of fairness Chapra 2008). Furthermore, Ibn Khaldun is said to have understood the significance of moral ethics as a fundamental method for achieving justice. These five values can all be thought of as modules of good authority that a state recognises. The aforementioned signs of effective governance can be linked to the three parts of Ibn Khaldun's thesis (administration, law, and justice) (Mohammad 2010). Another three dimensions in this regard are economic and social progress, social welfare, and civilization's rise and collapse. Three of these are the pillars of an active state that protects human rights and promotes economic growth. However, putting the legislation in place for blockchain poses significant hurdles (Mohammad 2010). According to a survey of hundreds of executives and entrepreneurs performed by the Chamber of Digital Commerce Canada and the Blockchain Research Institute, the most significant barrier for blockchain innovators is the lack of capital. Regulators now favour incumbents over innovators (Negara, Hidyanto et al., 2021). In this sense, Blockchain has sparked major debates among regulators over consumer and market protection, but the inflexibility with which regulators in the world's main economies have advanced blockchain has hampered innovation and progress (Yeung, 2019). Despite the study's amazing success, regulatory constraints for blockchain entrepreneurs are a reality that must be recognised throughout large economies. The effect would be a continuous "company drain" from Canada and other countries to more friendly jurisdictions (Kakavand, Kost De Sevres et al., 2017). Thus, the first large jurisdiction to properly integrate this new technology and establish a regulatory framework that encourages innovation while safeguarding customers will reap the benefits in terms of jobs and economic progress (Massarotto, 2019). Besides, the majority of countries do not yet have a blockchain law, and some countries' legal intuition to encourage blockchain technology is very poor. As a result, without law and justice, the state would be unable to adequately address blockchain concerns (Pashkov and Soloviov, 2019). Therefore, Ibn Khaldun's idea of law enforcement intuitions must be incorporated and developed within the blockchain technology to ensure the rights of customers, privacy, and the safety of financial institutions (Laabdi, 2021). Moral legitimacy Ibn Khaldun emphasises the moral importance of any powerful advancement. Ibn Khaldun is certainly aware of the importance of moral value (Khaldun, 2012). In this regard, he has advised the administration against immoral behaviour, as evidenced by economic collapse indications. 
He has described the examples of such activities which are immorality, misconduct, insincerity, and deception, as well as fabrication, gambling, deceiving, fraud, stealing, perjury, and usury (Mohammad 2010). In line with this, Ibn Khaldun has further explored that when civilization progresses and a large, urbanised population emerges, immorality becomes an inextricable feature of the urbanised society. This would then cause them to deteriorate. This incident serves as a reminder to both the government and its population of the importance of defining a moral duty to follow the law (Laabdi, 2021). Henceforth, As a result, money laundering, terrorism financing, arms deals, the dark web, and drug and smuggling transactions are all major issues in blockchain applications (Aitsam and Chantaraskul, 2020). Moral legitimacy should be encouraged among blockchain developers, companies, and consumers in order to break down these obstacles in blockchain. The majority of individuals are unaware of blockchain existence and its use. Blockchain must be widely embraced if it is to be effective. Despite the fact that technology is making history, it isn't enough to entice more consumers. However, the majority of people believe Bitcoin is the only blockchain system that exists. Except for the Bitcoin, no one else is aware of it. As a result, the price of Bitcoin has continued to increase to greater levels. Similarly, Bitcoin has been linked to criminal operations such as black-market trades, money laundering, and other unlawful activity. Members of the public must comprehend the differences between bitcoin, cryptocurrencies, and blockchain before the total implementation can begin (Aitsam and Chantaraskul, 2020). This would help in the removal of recent bad Bitcoin claims, allowing the technology to disappear on its own. In terms of public perception, this would lead to a higher willingness to employ the technology. Accordingly, to overcome the obstacles of the blockchain application, it is important to follow the moral legitimacy and (Al-Mulk) Collective Entity of ibn Khaldun (Abozeid, 2021). PROPOSED SOLUTIONS Previous studies have documented that, blockchain technology has facing many challenges as discussed earlier (Monrat, Schelén et al., 2019). This study provides solutions for overcome blockchain challenges in the light of Ibn Khaldun eight wise economic development model such as , (Khaldun, 2012): (1) "The power of the autonomous (Al-Mulk) does not appear but through the application of the Sharīʾah", and (2) "The Sharīʾah cannot be applied except by the sovereign (Al-Mulk)". Likewise, (3) "The sovereign cannot achieve power except through the people (Al-Rijal)", (4) "The people cannot be continued except by wealth (Al-Mal)", (5) "Wealth cannot be industrialized except through development (Al-Imarah)", and (6) Growth cannot be achieved except through justice (Al-Adl)". Ibn Khaldun universalized that (7) "Justice is the standard (Al-Mizan) by which Allah will assess mankind", for which (8) "The sovereign is charged with the accountability of realizing" (Chapra 2008). Below is the circle of the eight Ibn Khaldun's wise economic development model. 
In the Muqaddimah, which means "introduction," Ibn Khaldun has emphasises the importance of exploring all of these wise foundations, and has made an effort to explain the various events in history through a reasoned and effective connection, and to uphold scientifically the foundations that lie behind the rise and fall of a ruling dynasty or state (Dawlah) or civilization (Umran), (Khaldun, 2012), (Chapra 2008). The entire Muqaddimah is an enrichment of this advice, which consists of "eight smart principles (kalimat hikamiyyah) of political wisdom, each dovetailed with the other for mutual strength, in such a circular manner that the beginning and finish are indistinguishable," in Ibn Khaldun own words . Therefore, this study relooks the following key fundamentals of Ibn Khaldun economic growth model, which can be used to solve current blockchain challenges: 1) Ibn Khaldun theory of (Ar-Riajl) People and (Al-Mulk) Collective Entity can be used to overcome the problem of blockchain's "inefficient technology design." 2) The "criminal connection" difficulty of blockchain can be mitigated by using Ibn Khaldun's notion of (Sharīʾah) Rules and Regulations and (Al-Adl) Justice. 3) Ibn Khaldun's idea of (Al-Emarah) Development and (Ar-Riajl) People can be used to address the blockchain difficulty of "scalability." 4) Through Ibn Khaldun's idea of (Al-Mulk) Collective Entity and (Al-Emarah) Development, the blockchain challenge of "energy expenditure" can be overcome to achieve its highest position. 5) Ibn Khaldun concept of (Sharīʾah) Rules and Regulations and (Al-Adl) Justice can be used to address the blockchain difficulty of "privacy." 6) Ibn Khaldun view of (Sharīʾah) Rules and Regulations and (Al-Mulk) Collective Entity can be used to overcome the blockchain difficulty of "regulation." 7) Ibn Khaldun opinion of (Al-Mulk) Collective Entity and (Al-Adl) Justice can be used to defend blockchain "security." 8) Adopting Ibn Khaldun conception of (Ar-Riajl) People can overcome the blockchain difficulty of "lack of suitable skill sets." 9) Ibn Khaldun idea of (Al-Emarah) Development and (Ar-Riajl) People can be used to tackle the "slow" difficulty of blockchain. 10) Ibn Khaldun thoughts of (Sharīʾah) Rules and Regulations and (Al-Mulk) Combined Entity can help with the blockchain difficulty of "public's negative awareness." Recommendations For Future Research Directions The concept of centralised authorities has been changed by the blockchain. The combination of blockchain and IoT will serve as a springboard for new enterprises and applications. The future research directions of blockchain and IoT are discussed in this section (Atlam, Alenezi et al., 2018). Authorities and local administrative agencies produce regulatory laws to outline lawful ways of functioning with a product or technology inside a country or territory. As previously stated, blockchain is a new technology with no established legal or compliance framework. The following is the research question that must be answered in relation to blockchain legal and compliance issues: What are the global regulatory guidelines that assure the proper use of blockchain in IoT? For all new technology, security remains the most difficult problem to which researchers and organisations devote their attention. Integrating blockchain with IoT can increase security by validating transactions with the approval of the majority of parties, preventing spoofing and theft. 
IoT devices, on the other hand, have limited processing and storage capacity and are therefore unable to process large amounts of data (Ren, Wang et al., 2012). The security research questions that need to be addressed include the following: What is the best platform for IoT and blockchain integration? How can the limited capabilities of IoT devices be overcome in order to build a safe IoT system? Finally, we may conclude that integrating blockchain with IoT can provide numerous benefits that ameliorate many IoT concerns, while also posing new challenges that must be handled; more research is needed to investigate the implementation of blockchain with IoT in greater depth.

CONCLUSION

Blockchain is a groundbreaking and innovative technology for a wide variety of fields and services. Its applications include banking, digital identity, power and sustainability, government and the community, healthcare, the sciences, international commerce and merchandise, food safety, notary services, fundraising, marine insurance, law, media and production, real estate, sports, export supply chains, social impact, energy markets, intellectual property, and crypto banking (Mohamed and Al-Jaroodi, 2019). By allowing people to create safe digital relationships, blockchain is making the unthinkable possible; as a result of its introduction, data is now recorded, released, and secured in a different way (Monrat, Schelén et al., 2019). Various smart and advanced firms are therefore using blockchain, thanks to its tremendous potential, to improve their business procedures and eventually become leaders in their respective fields (Gunasekara, Sridarran et al., 2021). On the other side, several problems have hampered blockchain's capabilities, including inefficient technology design, the criminal connection, scalability, energy consumption, privacy, regulation, security, a lack of adequate skill sets, the fact that blockchain is slow, and public perception. These are the obstacles to blockchain advancement, and the technology's full potential cannot be realised until they are addressed (Gupta, Sinha et al., 2020). In order to gain the benefits of blockchain fully, this study has presented Ibn Khaldun's eight wise economic principles (kalimat hikamiyyah): the collective entity (Al-Mulk), rules and regulations (Sharīʾah), people (Al-Rijal), wealth (Al-Mal), development (Al-Imarah), justice (Al-Adl), law enforcement institutions, and moral legitimacy (Khaldun, 2012; Chapra 2008). If the suggestions presented by this study are implemented, blockchain will undoubtedly reach its pinnacle of popularity on a large scale; hence, for the technology to progress, the blockchain challenges must be resolved.
Chemical Recruitment for Foraging in Ants (Formicidae) and Termites (Isoptera): A Revealing Comparison

All termites secrete trail pheromones from their sternal gland, whereas ants use a variety of glands for this purpose. This, together with the diversity of chemical compounds that serve as trail pheromones among ants and the uniformity of the chemicals on termite trails, suggests a different evolutionary history for the development of chemical mass recruitment in the two taxa. Termites in addition show pheromonal parsimony. This suggests a single evolutionary origin of pheromone trails in Isoptera, whereas chemical mass recruitment among Formicidae seems to have evolved many times and in different ways. Despite these very different evolutionary histories, both taxa evolved chemical recruitment systems involving attractants and orientation signals, and at least two divergent decision-making systems for recruitment. This evolutionary analogy suggests that chemical mass recruitment is constrained by fundamental physical dynamic laws. Artificial intelligence, including "mass intelligence" and "ant intelligence", emulates mass recruitment in interacting virtual agents in search of optimal solutions. This approach, however, has copied only the "Democratic" recruitment dynamics with a single-compound pheromone. Ant and termite evolution shows more sophisticated recruitment dynamics which, if understood properly, will improve both our understanding of nature and the applications of artificial "swarm intelligence".

Introduction

One of the great advantages of society is the use of large numbers of individuals to perform tasks that a lone individual is unable to perform [2,3]. One of the most studied group tasks in social insects is recruitment for food retrieval, after an individual discovers a food source that is much larger than what it can handle on its own. Some of the communication signals modulating this recruitment are auditory or visual, but in the great majority of ant and termite species the most important communication signal used in recruitment is chemical. In recruitment to food, these signals are of at least two different kinds, as first detailed for ants [4]: one used to orient workers to the food source, that is, trail pheromones; another to attract workers to the trail and thus to the food source, that is, attractants for food recruitment. Some species use chemicals for only one of these signals and convey the other function by means of tactile or acoustic signals. An illustrative intermediate recruitment system is called "Tandem Running" [5], where the scout physically carries a nestmate to the food source. In tandem calling [6], the recruiting workers lead nestmates to the newly discovered food source by physically guiding them, sometimes using chemical trails to help orientation to the food. Other species lay chemical trails that fulfil both functions, requiring different chemicals for attracting and orienting ants [7,8]. These intermediate stages in the evolution of chemical mass recruitment, starting from individual foraging, allow us to propose phylogenies for recruitment systems that illuminate the possible evolutionary history of chemical mass recruitment. Such comparisons suggest that the evolution of chemical recruitment has happened several times, at least in ants [9].
Termites also seem to use both types of chemicals, attractants and orientation signals, in their foraging trails [10,11], although the details of the chemical communication system used by termites are less well known than in ants. As both ants and termites are terrestrial and arboreal, and both use chemical mass recruitment, we can compare the different chemical recruitment systems known in the two groups. Ants. Reports of the chemical nature of the communication signal used for recruitment in ants revealed an interesting pattern of chemical compounds. The summary of available data for ants is presented in Table 1. This table shows that, in many cases, the various compounds produced by a single species are very similar, as they constitute small variants of a common chemical skeleton, as is the case for Monomorium pharaonis. We suggest that this might be due to the fact that in the biochemical process leading to the synthesis of one or a few active compounds, other chemicals are produced along the way. Indirect evidence for this suggestion comes from other insects, where it was shown that synthesizing pure chemicals in pheromone-secreting glands is very difficult, if not impossible [13]. In other cases, an adaptive, purposeful chemical diversity seems to be present, as chemicals from completely different biochemical pathways are produced as a substrate for the chemical recruitment signal. This is the case for the Atta and Acromyrmex species and Daceton armigerum. In these cases, as shown in Table 1, some compounds have high carbon numbers and low volatility, and others have high volatility, appropriate for the fulfillment of different communication functions such as orientation and attraction. The chemical survey presented in Table 1 reveals that the pattern of chemical compounds related to trail pheromones in ants correlates with what we know about the decision-making behavior used during chemical mass recruitment to food [14]. We know that ants use one of two decision-making systems regulating chemical mass recruitment. The "Democratic" mass recruitment was described in detail for Solenopsis invicta [15] and the "Autocratic" system was first described for Atta cephalotes [16]. The main difference is that in the Democratic system, all workers eventually perform all tasks, as in Solenopsis; in the Autocratic system, workers specialize either in scouting or in food retrieval [17], as in Atta. The Democratic recruitment system is adapted for fast recruitment towards ephemeral food sources. Here all workers participating in the recruitment process have the same responsibility and add a fixed amount of recruitment pheromone to the trail. The more trail pheromone, the stronger the signal, and the more workers are recruited. This positive feedback increases the workforce, allowing the colony to engage its maximum worker strength in the shortest possible time, so as to collect a scarce resource (a recently discovered dead cockroach, for example) before a competitor does.
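The positive feedback at the heart of the Democratic system can be made concrete with a small simulation. The sketch below is only an illustration of the dynamics described above, not a model taken from this paper: each returning worker deposits a fixed amount of a single pheromone, the pheromone evaporates at a constant rate, and the probability of recruiting further workers grows with the amount of pheromone on the trail. All parameter values and function names are assumptions introduced for the example.

```python
import random

# Minimal sketch of "Democratic" mass recruitment (illustrative parameters only):
# every successful forager deposits the same fixed amount of a single pheromone,
# and recruitment probability rises with the pheromone currently on the trail.
DEPOSIT = 1.0        # fixed pheromone amount added per returning worker (assumption)
EVAPORATION = 0.10   # fraction of pheromone lost per time step (assumption)
SATURATION = 50.0    # pheromone level at which recruitment probability approaches 1

def simulate_democratic_recruitment(colony_size=200, steps=60, seed=1):
    random.seed(seed)
    pheromone, foragers = 0.0, 1          # one scout has found the food
    history = []
    for _ in range(steps):
        # each idle worker is recruited with a probability that grows with pheromone
        p_recruit = pheromone / (pheromone + SATURATION)
        recruits = sum(random.random() < p_recruit
                       for _ in range(colony_size - foragers))
        foragers += recruits
        # every active forager reinforces the trail with the same fixed deposit
        pheromone = (1.0 - EVAPORATION) * pheromone + DEPOSIT * foragers
        history.append((foragers, round(pheromone, 1)))
    return history

if __name__ == "__main__":
    for step, (foragers, pheromone) in enumerate(simulate_democratic_recruitment()):
        if step % 10 == 0:
            print(f"step {step:2d}: foragers={foragers:3d}, pheromone={pheromone}")
```

Because deposition is identical for every worker, the only way such a colony can commit more effort to a source is to send more workers, which is exactly the fast, winner-take-all dynamics described above.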
The Autocratic recruitment system is adapted for the simultaneous exploitation of a diversity of durable food sources. Here workers specialize either in chemical communication or in food retrieval. Communication specialists then visit different food sources and signal the palatability, quality, or quantity of a food source with varying levels of chemical concentration. Thus, a very good food source will trigger trail laying with plenty of an attractive chemical, whereas food sources of low quality will be signaled with low amounts of this chemical laid on the trail. This system allows for the fine tuning of sophisticated recruitment activity such as that described for several Atta species, where one group of workers recruits nestmates to the tree canopy, where they cut large leaves at their base so that they fall whole to the ground. There, another group of workers is recruited to each of the leaves that accumulate on the ground, where the workers cut the leaf into smaller pieces and transport these pieces to intermediate sites, from where another group of workers transports the leaf fragments to the nest [18]. In both cases, the trail needs to be marked with a chemical that will orient workers towards the food source. If the food source is ephemeral, an efficient chemical mark does not need to last long. As soon as the food has been collected, the chemical evaporates and the trail disappears. For the simultaneous exploitation of several food sources, however, longer-lasting chemical signals could be very useful, as a source could be revisited quickly after periods of inactivity due to rain, heat, cold, or other daily rhythmic patterns. Yet a long-lasting chemical signal is not appropriate if it also has to work as an attractant, as any change in the required workforce will take a long time to achieve if the long-lasting chemical needs to evaporate first. Therefore, in this latter case, highly volatile chemicals, together with some of low volatility, are required to modulate recruitment. Species using chemicals only to attract or orient ants need only one or a few chemical compounds to perform this function, whereas species using chemical trails for both attraction and orientation of nestmates have to produce a range of chemicals for these two purposes.
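The contrast with the Democratic dynamics can also be sketched in code. In the hypothetical fragment below, a communication specialist deposits an amount of attractant proportional to the quality of each of several durable food sources, while a second, slowly evaporating compound keeps the trail usable for orientation; worker allocation then tracks the relative attractant levels. The two-compound split, the source names, and every numeric value are illustrative assumptions, not measurements from the paper.

```python
# Sketch of "Autocratic" recruitment over several durable sources (assumed values):
# the attractant deposit is scaled by food quality, while a less volatile compound
# keeps the trail usable for orientation between foraging bouts.
SOURCES = {"tree A": 0.9, "tree B": 0.5, "tree C": 0.2}   # hypothetical qualities in [0, 1]
ATTRACTANT_EVAP = 0.40    # volatile attractant, decays quickly (assumption)
ORIENTATION_EVAP = 0.05   # persistent orientation compound, decays slowly (assumption)

def simulate_autocratic_recruitment(steps=30, workers=300):
    attractant = {s: 0.0 for s in SOURCES}
    orientation = {s: 0.0 for s in SOURCES}
    for _ in range(steps):
        for source, quality in SOURCES.items():
            # the communication specialist scales its deposit by source quality
            attractant[source] = (1 - ATTRACTANT_EVAP) * attractant[source] + quality
            orientation[source] = (1 - ORIENTATION_EVAP) * orientation[source] + 1.0
    total = sum(attractant.values()) or 1.0
    # workers are allocated in proportion to the attractant on each trail
    allocation = {s: round(workers * a / total) for s, a in attractant.items()}
    return allocation, orientation

if __name__ == "__main__":
    allocation, trails = simulate_autocratic_recruitment()
    print("worker allocation:", allocation)                       # tracks food quality
    print("trail persistence:", {s: round(v, 1) for s, v in trails.items()})
```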
As Table 1 shows, most ant species seem to use a few compounds as trail pheromone. Only 14 out of 57 species (25%) seem to use more than 3 chemicals, and only 10% of the species listed use six or more compounds. The use of a few compounds corresponds well to tandem calling or even to a Democratic recruitment system. In contrast, species such as the leaf-cutting and fungus-growing ants Atta, Trachymyrmex, and Acromyrmex secrete over six different chemicals on their trails. Other species using the Democratic system, such as Solenopsis, seem to produce much simpler trail pheromones from the standpoint of chemical diversity of compounds. In the trail pheromone composition of these species listed in Table 1, all the chemicals have the same chemical skeleton. Thus, the Autocratic chemical recruitment system could be associated with a more advanced chemical signaling. The case of the hunting and recruiting foragers of Daceton armigerum [19], which use a multitude of recruitment strategies, is interesting. Table 1 shows that its trail pheromone has many chemical compounds, hinting at a sophisticated, diverse chemical communication system. Many ant species in the subfamily Myrmicinae with large colonies and a sophisticated social structure use carboxylates and pyrazines to lay their pheromone trail. These are semivolatile compounds. The Myrmicinae Atta and Acromyrmex, for example, need to constantly recruit many workers to supply big colonies with a great quantity of leaves, which they use as a substrate to grow their fungus. In contrast, ants with less developed societies living in smaller colonies, such as species of the subfamily Ponerinae, use alcohols and acetate, which are more volatile and thus might serve as chemical attractants to trigger foraging to collect ephemeral food sources. Ponerinae individuals feed opportunistically on dispersed food items. This requires quick recruitment of workers, and, as a consequence, the compounds of the pheromone trail are more volatile and less permanent in time, compared to the carboxylates of the leaf-cutter ants. In some species of Ponerinae, chemical trails also regulate nest moving [20]. The Formicinae ants are mostly predators but differ from Ponerinae by their greater social complexity, larger colonies, and more diverse worker castes or polymorphism. The trail pheromones of Formicinae species use a mix of compounds that is more complex than that of Ponerinae, probably due to a more elaborate recruitment system. Table 1 reflects this, showing among the Formicinae compounds with elevated molecular weights, such as mellein, in addition to compounds of low molecular weight and probably high volatility. Formicinae trail pheromone chemistry seems to be closer to that of the Myrmicinae than to that of the Ponerinae. This suggests trails with both a short-term attractant and a long-term orientation function. In the case of Dolichoderinae species, the information is scarcer. In the Argentine ant, Linepithema humile, a tramp species with supercolonies of hundreds of thousands of workers, the trail pheromone has short-chain volatile aldehydes, suggesting a foraging strategy with fast, short-term bouts of recruitment. The continuous reinforcement of a trail made with short-lasting volatiles can last long if it is reinforced by hundreds of workers. Termites.
Termite species also show diverse ecological life types. We know species that live and feed in the same piece of wood, and species that have their nest separated from their food source [21]. But even the "one-piece" life type species possess trail pheromones, which they use to recruit workers for defense or nest moving. Termites of the "one-piece" life type do not require orientation systems a priori. Secretions of their sternal gland are considered to function in the recruitment of nestmates to sources of disturbance within the nest. These termites might also use trail-following pheromones to colonize new food sources to which they move their nest [22,23]. Most termites forage on relatively durable food sources containing cellulose. In addition, most termite species forage on several food sources simultaneously, suggesting a recruitment system closer to the above-described Autocratic chemical recruitment system, which seems to be the case in the only termite species where this has been explored so far [24]. Table 2 presents what we know about the chemicals used in trail pheromones by termites. The available data show that the pheromone trails of each termite species are constructed with one or a few compounds among a total of 8 chemicals. For the families where chemical trail pheromones have been reported, the Rhinotermitidae, Termitidae, and Kalotermitidae seem to use mainly neocembrene and a dodecatrienol; Nasutitermes corniger uses, in addition to these two compounds, trinervitatriene; whereas Mastotermitidae and Termopsidae use a trimethylundecadienol for trail following. That is, all trail pheromones in Isoptera appear to be drawn from a much smaller pool of compounds than those reported for ants. Discussion This paper is based only on published reports, and many more compounds used as trail pheromones are surely to be discovered in the future. For example, it is very likely that Atta texana uses a larger pool of compounds as trail pheromones than that reported in Table 1, as it is unlikely to differ very much from other Atta species in this regard. Thus, results in the Tables are biased towards species that have drawn more attention from researchers. Another cautionary remark regards the assessment of volatility based on chemical structure alone. In general, compounds of the same kind with lower molecular weight are more volatile than the ones with higher molecular weight or longer carbon chains. Biologically relevant volatility, however, depends not only on the compound but also on the substrate on which the chemical is secreted, on its concentration on the substrate, and on the humidity and temperature of the surrounding air. Thus, simple direct correlations between molecular weight, assumed volatility, and behavioral function of a compound should be avoided. The work behind the literature used for this study, evidently, was not performed with our objectives in mind, but it is unlikely that methodological limitations explain the smaller number of chemical compounds associated with trail pheromones among termites than among ants. Despite many possible limitations of this study, the large extent of the research effort explored and the large number of species covered guarantee a minimum of robustness that makes drawing conclusions from these data reasonable. Despite these and other limitations of this paper, we might suggest two basic trends: (1) the evolutionary histories of ant and termite trails are very different, and (2) the dynamics of interacting individuals achieving a recruitment process mediated by chemicals follow basic rules.
Different Evolutionary Histories between Ants and Termites. The diversity of chemical structures among ant trail pheromones and the uniformity of chemical compounds among termite trails suggest different evolutionary histories for the development of chemical mass recruitment in the two taxa. In termites, trail pheromone compounds are often also synthesized by other exocrine glands and are used as sex pheromones. This pheromonal parsimony seems to be characteristic of termites [12] and is not common among ants. Chemical mass recruitment among ants seems to have evolved at least 8 times [9], whereas chemical mass recruitment among termites seems to be a more conservative phenomenon, where all species seem to share a common ancestor that had already developed chemical recruitment. This also explains the large difference between ants and termites in the glands responsible for the secretion of the trail pheromones. Many different glands are used by different species among ants [27], whereas only the sternal gland is used by termites [12]. Another factor explaining this difference is the ecological diversity of ant species, each exploiting different food sources. Termites, in contrast, exploit more uniform ecological niches in their search for cellulose. Basic Rules Govern the Recruitment Dynamics. The main conclusion from this study is that, despite the fact that the evolutionary histories of the chemical mass recruitment of ants and termites are different, a similar recruitment dynamics has evolved in both groups. This evolutionary analogy suggests that chemical mass recruitment is constrained by basic physical-dynamic laws. This would explain the convergence to chemical mass recruitment in the two evolutionary processes studied. A third convergence towards a similar solution for the modulation of mass recruitment dynamics is nowadays repeated in the development of artificial intelligence, where the "mass intelligence" of ants is copied in the interaction of simple virtual computer agents searching for optimal solutions. Artificial intelligence, however, has copied only the simple recruitment dynamics named here the Democratic system, with a single-compound pheromone. More sophisticated modeling could bear fruit for artificial intelligence, echoing the benefits that chemical mass recruitment has brought to the social insect species that evolved it.
2018-06-08T17:00:18.452Z
2012-01-01T00:00:00.000
{ "year": 2012, "sha1": "b5c531c51a41c549376145734a3863212a372d0f", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/psyche/2012/694910.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "368fbe99d08652f388cfb2113cfacac3f719a4d1", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Chemistry" ], "extfieldsofstudy": [ "Biology" ] }
15450671
pes2o/s2orc
v3-fos-license
Mapping patient safety: a large-scale literature review using bibliometric visualisation techniques Background The amount of scientific literature available is often overwhelming, making it difficult for researchers to have a good overview of the literature and to see relations between different developments. Visualisation techniques based on bibliometric data are helpful in obtaining an overview of the literature on complex research topics, and have been applied here to the topic of patient safety (PS). Methods On the basis of title words and citation relations, publications in the period 2000–2010 related to PS were identified in the Scopus bibliographic database. A visualisation of the most frequently cited PS publications was produced based on direct and indirect citation relations between publications. Terms were extracted from titles and abstracts of the publications, and a visualisation of the most important terms was created. The main PS-related topics studied in the literature were identified using a technique for clustering publications and terms. Results A total of 8480 publications were identified, of which the 1462 most frequently cited ones were included in the visualisation. The publications were clustered into 19 clusters, which were grouped into three categories: (1) magnitude of PS problems (42% of all included publications); (2) PS risk factors (31%) and (3) implementation of solutions (19%). In the visualisation of PS-related terms, five clusters were identified: (1) medication; (2) measuring harm; (3) PS culture; (4) physician; (5) training, education and communication. Both analysis at publication and term level indicate an increasing focus on risk factors. Conclusions A bibliometric visualisation approach makes it possible to analyse large amounts of literature. This approach is very useful for improving one's understanding of a complex research topic such as PS and for suggesting new research directions or alternative research priorities. For PS research, the approach suggests that more research on implementing PS improvement initiatives might be needed. INTRODUCTION The use of internet has made large amounts of information easily available to researchers worldwide, while at the same time maintaining a structured overview of relevant information has become more and more challenging and time consuming. For many researchers in the biomedical field, PubMed is the search engine of preference. Although very useful for identifying individual publications relevant to one's information needs, search engines such as PubMed offer limited support in obtaining an overview of the structure of the literature on a particular research topic. Researchers need to go through large numbers of publications to find out which streams of literature can be distinguished, how different streams of literature relate to each other, and how literature has developed over time. Obtaining such an overview of the structure of the literature can be an extremely time-consuming process, especially in the case of complex research topics with publications appearing in multiple scientific fields. An example of such a complex research topic is patient safety. Patient safety is a multifactorial, multidimensional and cross disciplinary research topic which gained a lot of attention since the publication of the Institute of Medicine (IOM) report "To Err Is Human: Building a Safer Health System" in 1999. 
Strengths and limitations of this study ▪ This study gives insight into the structure of patient safety literature by analysing a large amount of literature using bibliometric data. ▪ This approach can be very useful for improving one's understanding of a complex research topic such as patient safety. ▪ This method of analysing literature may help to suggest new research directions or alternative research priorities. For patient safety research in particular, this method suggests that research on implementing patient safety improvement initiatives receives relatively limited attention. ▪ However, this method does not give detailed insight into the content of specific publications. To describe patient safety in a framework, the WHO needed approximately 600 concepts (International Classification for Patient Safety). 1 The continuously growing publication rate concerning patient safety and its complex character make it difficult to obtain a comprehensive overview of the patient safety literature. Conventional review articles are available on specific patient safety topics, for example, reviews on patient safety in specific specialisms (eg, anaesthesia, paediatrics, etc) and reviews on surgical safety (eg, checklists, communication and teamwork in the operating theatre, etc). However, to the best of our knowledge, there are no review articles that give a high-level view of patient safety. This is due to the multifactorial, multidimensional and cross disciplinary character of the topic. Therefore, insight into the arrangement of patient safety literature is needed to give structure for future (literature) research in this field. Because the conventional approach does not give sufficient insight, an alternative approach is needed. The current study describes an alternative approach of searching, structuring and visualising large amounts of literature based on bibliographic data and uses this approach to analyse the literature on patient safety. METHODS The methods employed in this study originate from the fields of bibliometrics, text mining and information visualisation. From the bibliometrics literature the idea of using citation relations to establish links between publications is borrowed. Text mining literature discusses natural language processing techniques that are used to extract terms from publications. The mapping and visualisation techniques used in this study build on extensive literature in the fields of bibliometrics and information visualisation. 2 First, the data used in this study is discussed. Then the method for delineating patient safety literature as well as the methods for analysing this literature at the level of both publications and terms are discussed. Data The current study uses data from the Scopus database. Scopus is a bibliographic database produced by Elsevier that indexes almost 20 000 journals in all scientific disciplines. All journals indexed by PubMed are also covered by Scopus. Scopus is used instead of PubMed because Scopus provides data on the references publications give to other publications. Reference data, which is not available in PubMed, is a crucial element in our approach. Direct access to the raw Scopus data is used (without the need to use the Scopus web interface at http://www.scopus.com); therefore large quantities of reference data are easily processed. Delineation of the patient safety literature Owing to the complex nature of the topic 'patient safety', delineating the literature on this topic is far from straightforward.
The WHO defines patient safety as "the reduction of risk of unnecessary harm associated with healthcare to an acceptable minimum," and has used around 600 concepts to describe this wide-ranging definition in more detail. 1 Delineating patient safety literature using criteria based on keywords or MeSH terms did not yield satisfactory results, therefore a more refined two-step approach is taken. 3 4 First, all publications with 'patient safety' in their title are selected, as well as all publications from the following journals with patient safety as their main topic: Joint Commission Journal on Quality and Patient Safety, Joint Commission Perspectives on Patient Safety, Journal of Patient Safety and Quality and Safety in Health Care. Many relevant publications are still missing after this step, for instance because they were published in general medical journals and do not have 'patient safety' in their title. Therefore, a second step is needed, in which all publications with at least four citations from or references to publications selected in the first step are identified. Together these two steps yield 8480 publications in the period 2000-2010, which is the period of analysis. In a random sample of 100 of the 8480 publications, four publications were not related to patient safety and eight publications were only weakly related. Analysis at the publication level To obtain an overview of patient safety literature at publication level, we first assess the relatedness of publications. This is performed based on direct and indirect citation relations between publications. Two publications have a direct citation relation if one publication cites the other, and they have an indirect citation relation if they both cite the same publication (bibliographic coupling 5 ) or are both cited by the same publication (co-citation 6 ). Bibliographic coupling relations and co-citation relations have equal weight. For each publication, an artificial citation from the publication to itself is created. In this way, a direct citation relation between two publications counts as both a bibliographic coupling relation and a co-citation relation. After assessing the relatedness of publications, a clustering technique is used to identify clusters of closely related publications, following the methodology documented in an earlier paper. 4 This provides a breakdown of the literature into a number of research areas or topics. It is noted that 693 publications cannot be assigned to a cluster. These are publications that have no or almost no citation relations with other publications. A more fine-grained overview of the literature can be obtained using a publication map. A publication map provides a representation of the literature in a twodimensional (2D) space. Publications are located in the map in such a way that the distance between publications gives an indication of their relatedness. The shorter the distance between publications, the stronger is their relation. A publication map is constructed of the 1462 most frequently cited publications within the delineation of the patient safety literature. Each of these publications has been cited at least 20 times. The locations of the publications in the map are determined using the VOS ('visualisation of similarities') mapping technique, 7 and a computer program called VOSviewer (http:// www.vosviewer.com) 8 is used to visualise the map. This program also offers extensive support for exploring the map in an interactive fashion. 
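The publication-level relatedness measure described above (direct citations, bibliographic coupling and co-citation, with an artificial self-citation so that a direct citation also counts as one coupling link and one co-citation link) can be sketched in a few lines of linear algebra. The snippet below is an illustrative reconstruction of that counting rule, not the authors' code; the toy citation matrix and the variable names are assumptions.

```python
import numpy as np

def relatedness_from_citations(C: np.ndarray) -> np.ndarray:
    """Publication relatedness from a binary citation matrix C, where
    C[i, j] = 1 means publication i cites publication j.

    Bibliographic coupling = shared references (C @ C.T);
    co-citation           = shared citing papers (C.T @ C);
    an artificial self-citation makes every direct citation count as
    one coupling relation and one co-citation relation, as described
    in the text. Coupling and co-citation get equal weight."""
    C = C + np.eye(C.shape[0], dtype=C.dtype)   # add self-citations
    coupling = C @ C.T                           # shared references
    cocitation = C.T @ C                         # shared citing publications
    related = coupling + cocitation              # equal weights
    np.fill_diagonal(related, 0)                 # ignore self-relatedness
    return related

if __name__ == "__main__":
    # toy example: papers 0 and 1 both cite 2; paper 3 cites 0 and 1
    C = np.array([[0, 0, 1, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 0],
                  [1, 1, 0, 0]])
    print(relatedness_from_citations(C))
```

In the study itself this relatedness matrix is then fed into the clustering and the VOS mapping step; the sketch is only meant to make the counting rule explicit.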
In addition to a publication map, a publication cluster map is constructed. Instead of individual publications, this map shows the aforementioned clusters of publications, thereby providing a more high-level overview of the patient safety literature. Analysis at the term level To analyse patient safety literature at term level, we begin by extracting terms from titles and abstracts of publications. This involves three steps. First, a part-of-speech tagger 9 is used to identify nouns and adjectives in the titles and abstracts of publications. Second, nouns and adjectives that belong together are combined into noun phrases. Plural noun phrases are converted into singular ones. In the third step, the 1000 most relevant noun phrases are selected as terms. The relevance of a noun phrase is assessed based on the degree to which the noun phrase clusters together with other noun phrases. 10 Only noun phrases that occur in at least 15 publications are considered. The relatedness of terms is determined by counting the number of times terms occur together in the titles and abstracts of publications. The larger the number of cooccurrences of two terms, the stronger their relation. Based on the relatedness of terms, terms are grouped together into clusters and a term map is constructed. A term map works in a similar way as a publication map. Terms are located in a 2D space, and the distance between terms serves as an indication of their relatedness. A ratio above one indicates a relative increase in publications over time, while a ratio below one indicates a relative decrease in publications. To identify changes over time in the terms that are used in the patient safety literature, the mean publication year is calculated for each term. A term's mean publication year indicates whether a term is used more in earlier years or more in later years within the period of analysis. Publication map In recapitulation, a total of 8480 publications were identified of which the 1462 most frequently cited publications were used to create a publication map ( figure 1A). Interactive versions of all produced maps are available online. The URLs of the interactive maps are provided in the figure captions. Please note that to access the online maps Java needs to be installed on your computer. The publication map illustrates the citation relations between highly cited publications and shows how publications cluster together. The clustering is illustrated in figure 1 by the use of different colours. The figure generally shows a clear separation of the different colours. The 19 clusters identified by our clustering technique were examined manually to assign an appropriate label to each of them. The labels and descriptions of the content of the clusters are given in table 1. Clusters 1 and 2 were given the same label because they seem to represent similar types of publications. In the publication map, these two clusters are more intermingled than the others, which is especially well visible when zooming in on the area of the two clusters ( figure 1B). Publication cluster map: the publication cluster map extracted from the publication map gives a more schematic overview of the 19 clusters of publications (figure 2A). Using our clustering technique, the 19 clusters can be grouped into three main categories, each of which is indicated by a different colour. 
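Before turning to the three categories in detail, the term-level measures described in the Methods (co-occurrence of terms in titles and abstracts, and the mean publication year per term) can be made concrete with a short sketch. The records below are hypothetical input, and the noun-phrase extraction step (the part-of-speech tagging) is assumed to have already produced the term sets; nothing here reproduces the actual Scopus data.

```python
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical records: each publication contributes its year and the terms
# extracted from its title and abstract (the POS-tagging step is assumed done).
records = [
    {"year": 2003, "terms": {"adverse event", "medication error", "reporting system"}},
    {"year": 2008, "terms": {"patient safety culture", "teamwork", "communication"}},
    {"year": 2010, "terms": {"medication error", "patient safety culture", "teamwork"}},
]

cooccurrence = Counter()                 # how often two terms appear together
years = defaultdict(list)                # publication years per term

for rec in records:
    for a, b in combinations(sorted(rec["terms"]), 2):
        cooccurrence[(a, b)] += 1        # relatedness grows with co-occurrence
    for term in rec["terms"]:
        years[term].append(rec["year"])

# a term's mean publication year indicates whether it is used early or late
mean_year = {t: sum(ys) / len(ys) for t, ys in years.items()}

print("strongest co-occurrences:", cooccurrence.most_common(3))
print("mean publication year per term:", mean_year)
```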
Each category represents a field of patient safety research: category 1 represents research that identifies the magnitude of patient safety problems by measuring and reporting the amount of problems, category 2 represents research that focuses on identifying and understanding patient safety risk factors, and category 3 represents research that focuses on the implementation of solutions, mostly on an organisational or national level. Category 1 contains the largest number of publications (N=3569), representing 42% of all publications included in the analysis, followed by category 2 (N=2616), which represents 31% of all publications. Category 3 contains the smallest number of publications (N=1602), representing a mere 19% of the total number of publications. Figure 2B shows an increase in publication rates mostly in category 2, the category dealing with research on patient safety risk factors. In categories 1 and 3, publication rates tend to decline or are stable, with the exception of the cluster on adverse drug events in category 1; the publication rate of this cluster has increased considerably over time. Term map In the term map, five clusters of terms were identified: (1) medication; (2) measuring harm; (3) patient safety culture; (4) physician; (5) training, education and communication. Figure 3B provides a so-called density visualisation (produced by the VOSviewer software) of the term map. The density visualisation indicates that terms grossly cluster together in two groups, dividing the map into a left and a right side. Terms on the left side of the map tend to be related to patient safety risk factors. In the publication cluster map, these terms are mostly used in category 2. Terms on the right side of the map mostly relate to measurable patient safety outcome parameters. In the publication cluster map, these terms can be found mainly in category 1, the category concerned with studying the magnitude of patient safety problems. Category 3 in the publication cluster map, which is the category that deals with the implementation of solutions, cannot be identified as a separate group of terms in the term map. When the term map is searched for terms relating to category 3, these terms are found mostly in the middle bottom part of the map. In the density visualisation (figure 3B), this area slightly lights up. Nevertheless, comparing the publication cluster map and the term map, it seems that research on the implementation of solutions does not have a unique vocabulary of terms that allows it to be distinguished from other types of patient safety research. Figure 4 shows the same term map as figure 3, but this time the colour of a term indicates the term's average publication year. Although more scattered than in the publication cluster map (figure 2B), figure 4 shows a similar increasing trend in publications related to patient safety risk factors, as the corresponding terms are mostly used in recent years. DISCUSSION When conventional literature research using criteria based on keywords or MeSH terms is unsuccessful due to the complexity and massiveness of the researched topic, analysis based on bibliometric data can give insight into the structure of a research field. There is an extensive body of research on information retrieval techniques that aim to simplify literature search in the biomedical sciences. 11 12 Although our work can be considered related to this line of research, our focus is not so much on retrieving individual scientific publications but more on obtaining a broad overview of the structure of the literature on a particular research topic.
[13][14][15][16] Our approach seems especially useful when dealing with complex topics that cannot easily be represented by one or a few keywords or MeSH terms. The present dataset was validated with a random sample of 100 of the 8480 publications, of which only 4% was not related to patient safety, indicating a good representation of the field. In the clustering process, 693 publications could not be assigned to a cluster because they have no or almost no citation relations with other publications. For this reason these publications are presumed to be of less importance to the field of patient safety. Excluding these publications from the rest of the analysis does not influence the results because only the most highly cited publications are used to create the publication map. It should be noted, however, that for some publications in the Scopus database no data on the references given to other publications is available. Publications for which this is the case are also more likely to be among the 693 publications excluded from the analysis. The publication cluster map shows that there are three main categories of patient safety literature. The publication rates of the categories are not equally divided. Research into the magnitude of the problem (category 1) is more highly represented and research into implementing solutions (category 3) is less represented. Research focusing on identifying and understanding patient safety risk factors (category 2) is also less represented than research on the magnitude of the problem, although there is an increase in publication rates in this category and therefore further growth can be expected. It is of concern, though, that a decline in publication rate is observed in the category 'implementing solutions', which is a category that already has a relatively small number of publications. This may be considered especially problematic given the fact that improvement in patient safety can only be established by actual implementation of solutions, not only by identifying and understanding flaws in the system. (Figure 3 caption: Terms cluster together in two groups, dividing the map into a left and a right side. Terms on the left side tend to be related to patient safety risk factors, while terms on the right side mostly relate to measurable patient safety outcome parameters. An interactive version of the map is available at http://www.vosviewer.com/maps/patient_safety/terms1/.) The three main categories can be divided into 19 clusters, each representing an area of patient safety research. The WHO patient safety research cycle describes five areas of patient safety research: (1) measuring harm, (2) understanding causes, (3) identifying solutions, (4) evaluating impact and (5) translating evidence into safer care. 17 These five areas can be matched quite well to the categories found in the publication cluster map, thereby supporting the clinical validity of the map. Category 1 contains research into area 1 (measuring harm), category 2 contains research into areas 2 and 3 (understanding causes and identifying solutions) and category 3 contains research into areas 4 and 5 (evaluating impact and translating evidence into safer care). The term map shows a gross division of terms into two sides, outcome parameters (right) and risk factors (left). This resembles a previously described framework of risk domains explaining patient safety in surgery according to a systems approach. This framework depicts patient safety as a balance between risk factors and measurable outcome parameters.
18 A number of limitations of our analysis need to be mentioned. First, the results of the analysis depend on the approach taken to delineate the patient safety literature. The use of alternative criteria for identifying patient safety publications might have led to a different view on patient safety literature. Various technical limitations need to be kept in mind as well. The publication map relies on citation relations between publications. Citations are given for a multitude of reasons. Some citations reflect a strong topical relatedness between the citing and the cited publication, but this is not the case for all citations, and we have not been able to distinguish between these different types of citations. In the case of the term map, terms may sometimes be ambiguous due to problems with synonyms and homonyms. Furthermore, both the publication and the term map are restricted to a 2D space, which means that they may not always be able to represent the relatedness of publications or terms in the most accurate way. The clusters of publications or terms that were created have the restriction that each publication or term can belong to one cluster only, making it difficult to properly represent publications and terms that relate to multiple topics. (Figure 4 caption: Term map with colours indicating the mean publication year in which a term was used. Terms that are used more towards 2010 are shown in red, while terms that are used more towards 2000 are shown in blue. An increasing trend in publications related to patient safety risk factors can be observed, as the corresponding terms are mostly used in recent years. An interactive version of the map is available at http://www.vosviewer.com/maps/patient_safety/terms2/.) In conclusion, large amounts of literature can be analysed using bibliometric data. Visualising these data using tools such as VOSviewer makes it possible to obtain a broad overview of the structure of the literature on a particular topic of interest. This approach can be very useful for improving one's understanding of a complex research topic such as patient safety. Other complex multidimensional research fields (eg, technology assessment) can be analysed in a similar way. This method of analysing literature may help to suggest new research directions or alternative research priorities. For patient safety research in particular, this method suggests that research on implementing patient safety improvement initiatives receives relatively limited attention. Contributors SPR has contributed to the conception and the design of the study, and to the analysis and interpretation of data. She has drafted the article and has given final approval of the version to be published. NJvE and LW have contributed to the conception and the design of the study, and to the analysis and interpretation of data. They have revised the manuscript critically for important intellectual content and have given final approval of the version to be published. FWJ has contributed to the analysis and interpretation of data. NJvE, LW and FWJ revised the manuscript critically for important intellectual content and have given final approval of the version to be published. Funding This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors. Competing interests None. Provenance and peer review Not commissioned; externally peer reviewed. Data sharing statement No additional data are available.
Open Access This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 3.0) license, which permits others to distribute, remix, adapt, build upon this work noncommercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http:// creativecommons.org/licenses/by-nc/3.0/
2016-05-04T20:20:58.661Z
2014-03-01T00:00:00.000
{ "year": 2014, "sha1": "868ec155c94c3771dd20132795d703992c7595b9", "oa_license": "CCBYNC", "oa_url": "https://bmjopen.bmj.com/content/4/3/e004468.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "868ec155c94c3771dd20132795d703992c7595b9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
236780693
pes2o/s2orc
v3-fos-license
A Closed-Form Solution to Planar Feature-Based Registration of LiDAR Point Clouds: Since pairwise registration is a necessary step for the seamless fusion of point clouds from neighboring stations, a closed-form solution to planar feature-based registration of LiDAR (Light Detection and Ranging) point clouds is proposed in this paper. Based on the Plücker coordinate-based representation of linear features in three-dimensional space, a quad tuple-based representation of planar features is introduced, which makes it possible to directly determine the difference between any two planar features. Dual quaternions are employed to represent spatial transformation, and operations between dual quaternions and the quad tuple-based representation of planar features are given, with which an error norm is constructed. Based on L2-norm-minimization, detailed derivations of the proposed solution are explained step by step. Two experiments were designed in which simulated data and real data were both used to verify the correctness and the feasibility of the proposed solution. With the simulated data, the calculated registration results were consistent with the pre-established parameters, which verifies the correctness of the presented solution. With the real data, the calculated registration results were consistent with the results calculated by iterative methods. Conclusions can be drawn from the two experiments: (1) The proposed solution does not require any initial estimates of the unknown parameters in advance, which assures the stability and robustness of the solution; (2) Using dual quaternions to represent spatial transformation greatly reduces the additional constraints in the estimation process. Introduction With the fast development of Light Detection and Ranging (LiDAR) techniques and their successful application in three-dimensional data acquisition, point cloud registration has attracted significant attention for its role in the fusion of LiDAR point clouds from two neighboring stations. The essence of point cloud registration is to estimate the transformation parameters between the two neighboring stations, which is also known as spatial transformation. As is known, a spatial transformation can be explained as a rotation around the x, y, and z axes, a translation along the three axes, and a scale factor based on the centroid of the coordinate system. Based on the different registration primitives used for the estimation of the unknown transformation parameters, the available methods can be categorized into point feature-based methods [1,2], linear feature-based methods [3-5], planar feature-based methods [6-8], and hybrid feature-based methods [9-11]. Until now, point features have been the most popular and widely used registration primitives because of their simple mathematical expression. However, affected by the characteristics of LiDAR technology, the extraction of point features from point clouds often has low accuracy without pre-established man-made reflectors. Besides point features, linear features are another popular registration primitive. Table 1. Advantages and disadvantages of the iterative methods and the closed-form methods. Iterative methods. Advantages: (1) the most popular and widely used; (2) the derivation of the formulas is simple. Disadvantages: (1) initial approximate estimates of the unknown parameters must be determined in advance; (2) the number of iterations is closely related to the choice of those initial estimates.
Closed-form methods. Advantages: (1) no initial estimates of the unknown transformation parameters are needed in advance; (2) registration results can be obtained in only one step. Disadvantages: (1) the derivation of the formulas is complex. Based on the abovementioned analysis, a closed-form solution to planar feature-based registration of point clouds is proposed. Firstly, a quad tuple-based representation of planar features is given, and dual quaternions are then employed to represent the spatial transformation. After the operations between the dual quaternions and the quad tuple are explained, an error norm (error function) is constructed by supposing that the two conjugate planar features are equivalent after registration. Based on L2-norm-minimization, detailed derivations of the proposed solution are given step by step. Lastly, two experiments are designed to verify the correctness and feasibility of the proposed method, in which simulated and real data are both incorporated. The remainder of the paper is organized as follows. Section 2 reviews some related work. Section 3 gives the operations between dual quaternions and the quad tuple-based representation of planar features in three-dimensional space. Section 4 explains the proposed solution, in which the detailed derivations of all formulas are given step by step. Section 5 shows the experiments and the results. Section 6 discusses the proposed solution and gives suggestions for future work. Section 7 concludes the paper. Quaternion and Its Application in Point Cloud Registration The quaternion was first proposed by W. R. Hamilton and has proved to be a convenient and effective way to describe rotation in three-dimensional space [17]. Due to its compactness and high efficiency, the quaternion has attracted considerable attention from researchers working in different fields. The representative work can be summarized as follows: Horn introduced the unit quaternion to solve the absolute orientation problem in photogrammetry [13]; Shen et al. used unit quaternions in the transformation of two sets of three-dimensional coordinates [12]; Zeng et al. presented a unit quaternion-based, iterative solution to coordinate transformation in geodesy [18]; Joseph et al. introduced unit quaternions in robot arm manipulation and presented an extended Kalman filter-based algorithm for the estimation of human motion [19]; Kim et al. presented a similar algorithm to that of Joseph et al., which employed unit quaternions for the real-time estimation of orientation in robot arm manipulation [20]. As is known, a spatial transformation mainly consists of a rotation and a translation. However, a unit quaternion can only represent the spatial rotation in three-dimensional space, as shown in Figure 1. Later, the dual quaternion was introduced to represent spatial rotation and translation simultaneously, as shown in Figure 2, in which two quaternions are integrated with the aid of a dual number. The first successful application of dual quaternions in estimating the unknown spatial transformation parameters was introduced by Walker et al. [15]. Based on the analysis of correspondences between dual quaternion-based and matrix-based representations, a single cost function was formulated, which enabled the simultaneous calculation of the six parameters in point feature-based registration. Instead of estimating only the six transformation parameters, Wang et al. added the scale parameter to the single cost function, which enabled the simultaneous derivation of rotation, translation, and scale parameters [16].
Prošková et al. also introduced dual quaternions to represent spatial transformation; the presented approach was successfully applied for deriving the seven parameter-based transformation between two sets of three-dimensional coordinates [21,22]. Based on the above analysis, the conclusion can be drawn that a unit quaternion can only represent rotation in three-dimensional space; when it is applied in spatial transformation, such as the seven parameter-based Helmert transform, the rotation parameters must be estimated first, and on that basis the translation parameters and the scale factor are estimated later. If errors are introduced into the estimation of the rotation parameters, the accuracy of the translation parameters and the scale factor will certainly be affected, which is why dual quaternions are increasingly introduced to represent spatial transformation. Planar Feature-Based Registration Methods Compared to point and linear features, planar features exist widely in point clouds acquired from man-made buildings. On the condition that no specific man-made reflectors are used, point and linear features are often extracted by fitting and by the intersection of adjacent planar features. Therefore, if planar features can be used directly in point cloud registration, the amount of calculation will be reduced greatly. In other words, the employment of planar features in point cloud registration provides more conditions for estimating the transformation parameters than using point and linear features only. Of all existing planar feature-based registration methods, some use the distance between a point and its corresponding planar feature to construct error norms [6,23], while others use the parallelism between the two normal vectors and the distance between the two conjugate planar features to construct error norms [7,24-28]. Detailed explanations are as follows: Grant et al. presented an iterative solution to pairwise point cloud registration, which is based on the correspondences between point and planar features [23].
Pavan et al. presented a closed-form solution to planar feature-based registration of terrestrial LiDAR point clouds [27], which was later applied in the global refinement for terrestrial laser scanner (TLS) data registration [7]. Moreover, Wang et al. [24], Zhang et al. [25], and Previtali et al. [26] separately presented planar feature-based registration algorithms based on the correspondences between each pair of planar features, in which the Rodrigues matrix, Euler angles, and quaternions were, respectively, employed to represent spatial rotation. In addition, Khoshelham presented a closed-form solution to planar feature-based pairwise point cloud registration [6] in which the Kronecker product was employed to linearize the nonlinear equations; rigid transformation and similarity transformation are both given in detail, but the parallelism between the normal vectors of the two conjugate planar features was not fully utilized. Förstner and Khoshelham presented an optimal solution and three direct solutions for efficient motion estimation from plane-to-plane correspondences and provided an analysis of the accuracy of the solutions, comparing their performance with the classical iterative closest point (ICP) algorithm [28]. In general, few of the available methods incorporate the scale parameter in planar feature-based registration, which makes it difficult to apply them in the fusion of data from different sources. Moreover, some methods use point and planar features as registration primitives, and the role of normal directions is neglected in the estimation of transformation parameters. To summarize: (1) When a unit quaternion is introduced to represent spatial rotation in point cloud registration, the accuracy of the translation parameters and the scale factor will be affected if errors are introduced in the estimation of the rotation parameters; (2) Planar features are a good alternative in point cloud registration; however, the determination of the difference between corresponding planar features is difficult with traditional mathematical expressions; (3) Few methods incorporate the scale parameter in planar feature-based registration, which makes it difficult to apply them in the fusion of data from different sources; (4) Most available methods estimate the registration parameters by iterative calculation, in which initial approximate estimates of the unknown parameters must be determined in advance to assist the linearization. Based on the above analysis, our goal was to develop a closed-form planar feature-based registration algorithm in which a quad tuple-based representation of planar features in three-dimensional space is given, making it possible to directly determine the difference between each pair of corresponding planar features, and in which dual quaternions are employed to represent the spatial transformation in three-dimensional space, making it possible to estimate all the registration parameters simultaneously. The scale factor is also considered in our registration method, which makes it possible to apply the method in the fusion of data from different sources. Representation of a Plane in Three-Dimensional Space Traditionally, a plane is represented by its normal vector n and any one point p lying on it. Since the selection of the point p is arbitrary, there is more than one representation for a single plane, which makes it difficult to compare and determine the difference between two planes.
To ensure the unique representation of a plane in three-dimensional space, a quad tuple-based representation method was employed, as shown in Figure 3, in which a plane can be represented as a quad tuple Γ̂, as shown in Equation (1): Γ̂ = (l_x, l_y, l_z, m)ᵀ, where l = n/‖n‖ represents the normalized normal vector and m represents the plane moment, that is, the distance between the origin and the plane, which can be written as m = l · p for any point p lying on the plane. Figure 3. The quad tuple-based representation of a plane in three-dimensional space. Definition of a Dual Quaternion A quaternion is a composite of four real numbers, which is generally represented in the form of a quad tuple ṙ, as shown in Equation (2): ṙ = (r_0, r_1, r_2, r_3)ᵀ; when ṙᵀṙ = 1, ṙ is usually called a unit quaternion. A dual quaternion is a composite of two quaternions and a dual number ε (with ε² = 0), as shown in Equation (3): q̂ = ṙ + ε ṡ, where ṙ and ṡ are both quaternions, corresponding to the real part and the dual part of q̂, respectively; when ṙᵀṙ = 1, q̂ is a unit dual quaternion, which is the default definition of a dual quaternion here. Operation Rules for Dual Quaternions Given two dual quaternions q̂_1 = ṙ_1 + ε ṡ_1 and q̂_2 = ṙ_2 + ε ṡ_2, the addition, subtraction, and multiplication between them can be expressed as shown in Equation (4): q̂_1 ± q̂_2 = (ṙ_1 ± ṙ_2) + ε(ṡ_1 ± ṡ_2) and q̂_1 q̂_2 = ṙ_1 ṙ_2 + ε(ṙ_1 ṡ_2 + ṡ_1 ṙ_2). Furthermore, the multiplication between ṙ_1 and ṙ_2 can also be represented in matrix form, as shown in Equation (5). Dual Quaternion-Based Transformation of Planar Features According to the definition of a dual quaternion, the spatial transformation of a plane in three-dimensional space can be expressed as shown in Equation (6), where q̂ is the dual quaternion corresponding to the spatial transformation, q̂* is the conjugate of q̂ in the form q̂* = ṙ* + ε ṡ*, and Γ̂_a and Γ̂_b represent a pair of conjugate planar features; Γ̂_a represents the one before transformation and Γ̂_b represents the one after transformation.
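The quad-tuple plane representation and the dual quaternion arithmetic of Equations (1)-(4) are easy to prototype. The sketch below is an illustrative Python reconstruction of those definitions, not the authors' Matlab code; quaternions are stored as 4-vectors with the scalar part first, which is an assumed convention, and all function names are hypothetical.

```python
import numpy as np

def plane_quad_tuple(n, p):
    """Quad-tuple plane representation: unit normal l and moment m = l . p,
    where p is any point on the plane (the idea behind Equation (1))."""
    l = np.asarray(n, float) / np.linalg.norm(n)
    m = float(l @ np.asarray(p, float))
    return l, m

def quat_mul(r1, r2):
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    w1, v1 = r1[0], np.asarray(r1[1:])
    w2, v2 = r2[0], np.asarray(r2[1:])
    w = w1 * w2 - v1 @ v2
    v = w1 * v2 + w2 * v1 + np.cross(v1, v2)
    return np.concatenate(([w], v))

def dual_quat_mul(q1, q2):
    """Dual quaternion product (r1 + eps*s1)(r2 + eps*s2) with eps^2 = 0:
    real part r1 r2, dual part r1 s2 + s1 r2, as in Equation (4)."""
    r1, s1 = q1
    r2, s2 = q2
    return quat_mul(r1, r2), quat_mul(r1, s2) + quat_mul(s1, r2)

if __name__ == "__main__":
    l, m = plane_quad_tuple(n=[0.0, 0.0, 2.0], p=[1.0, 1.0, 5.0])
    print("plane quad tuple:", np.append(l, m))          # -> [0. 0. 1. 5.]
    identity = (np.array([1.0, 0, 0, 0]), np.zeros(4))   # unit dual quaternion
    q = (np.array([0.0, 1.0, 0, 0]), np.array([0.0, 0, 0.5, 0]))
    r, s = dual_quat_mul(identity, q)
    print("real part:", r, "dual part:", s)
```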
To realize the dual quaternion-based similarity transformation, a plane should be represented as shown in Equation (7). According to Chasles' theorem [29], when the scale parameter is not considered, six independent parameters are needed to represent the transformation between two different coordinate frames. However, when a dual quaternion is used, there are eight parameters in total, so two constraints should be added when estimating the unknown parameters of the spatial similarity transformation, as shown in Equation (8): ṙ is the quaternion that represents the spatial rotation and ṡ is the quaternion that represents the spatial translation. The meaning of Equation (8) is that ṙ is a unit quaternion and that it is orthogonal to ṡ.

Construction of the Objective Functions
In three-dimensional space, the transformation of a plane can be expressed as shown in Equation (9): where l⃗b and mb represent the normal vector and the moment of the plane before transformation, respectively; l⃗a and ma represent the normal vector and the moment after transformation, respectively; and R, t⃗, and µ represent the rotation matrix, the translation vector, and the scale factor between the two coordinate systems, respectively. With a dual quaternion, Equation (9) can be further expressed as Equation (10): where ṙ is the unit quaternion corresponding to the rotation matrix R. Equation (10) can be rewritten as Equation (11): where ṡ* is the conjugate of ṡ. Based on Equation (7), Equation (11) can be further rewritten as Equation (12).

Considering the existence of random errors, the planar feature-based registration approach aims to minimize the differences given by Equations (13) and (14). The transformation parameters are obtained when the expression f = f1² + f2² reaches its minimum. Considering that f1² and f2² are both non-negative, f is minimal when each term reaches its minimum. The optimal value of ṙ is obtained by minimizing Equation (13); ṡ and µ are obtained by minimizing Equation (14). Equation (13) can be decomposed as Equation (15).

Solution of Rotation Quaternions
Using Equation (8) as the restriction, the error function can be expressed as Equation (16): where λ1 is a Lagrange multiplier. Taking the partial derivative of Equation (16) yields Equation (17). Setting A = -(1/2)(Cl + Cl^T), Equation (17) can be further expressed as Equation (18). According to the definition of the eigenvalues and eigenvectors of a matrix, ṙ is one of the four eigenvectors of A, and λ1 is the corresponding eigenvalue. Among the four eigenvectors that satisfy Equation (18), the optimal solution can be determined by referring back to Equation (17): multiplying Equation (17) by ṙ^T yields Equation (19), and substituting Equation (19) into Equation (16) yields Equation (20). Equation (20) is minimized when λ1 reaches its maximum, so the optimal solution of ṙ equals the eigenvector corresponding to the maximum eigenvalue of A.
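The specific matrix Cl used to build A is not reproduced in the extracted text, so the sketch below uses the classical quaternion eigenvalue formulation of absolute orientation (Horn's method), which has the same structure: the optimal rotation quaternion is the eigenvector belonging to the extreme eigenvalue of a symmetric 4x4 matrix assembled from the normal-vector correspondences. The construction shown is therefore illustrative, not the authors' exact A.

import numpy as np

def quat_to_matrix(q):
    """Rotation matrix of a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
                     [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
                     [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def rotation_from_normals(src, dst):
    """Unit quaternion rotating the source normals onto the destination normals,
    obtained as the eigenvector of the largest eigenvalue of a 4x4 symmetric matrix."""
    S = sum(np.outer(b, a) for b, a in zip(src, dst))   # cross-covariance of the normals
    Sxx, Sxy, Sxz = S[0]
    Syx, Syy, Syz = S[1]
    Szx, Szy, Szz = S[2]
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz]])
    eigenvalues, eigenvectors = np.linalg.eigh(N)
    return eigenvectors[:, np.argmax(eigenvalues)]

# quick self-check: recover a 90-degree rotation about the z axis
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
src = [np.array(v, float) / np.linalg.norm(v) for v in ([1, 0, 0], [0, 1, 1], [1, 2, 3])]
dst = [R_true @ v for v in src]
q = rotation_from_normals(src, dst)
print(np.allclose(quat_to_matrix(q), R_true))   # True

As in the paper, no initial estimate of the rotation is needed; the eigen-decomposition replaces any linearization of the objective function.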
Solution of the Translation Quaternions
Decomposing Equation (14), we can obtain Equation (21). Using Equation (8) as the restriction, the best quaternion ṡ to represent the translation can be obtained when Equation (22) is minimized: where λ2 is a Lagrange multiplier. Taking the partial derivatives of Equation (22) with respect to ṡ and µ and setting them equal to zero, we obtain Equations (23) and (24). Based on Equation (24), the scale factor µ can be expressed as Equation (25). Substituting Equation (25) into Equation (23) yields Equation (26), and multiplying Equation (26) by ṙ^T gives Equation (27). With Equation (27), λ2 can be expressed as Equation (28). Since Cm2^T + Cm3^T is a skew-symmetric matrix, the first term of Equation (28) vanishes, that is, ṙ^T(Cm2^T + Cm3^T)ṙ = 0, and Equation (28) can be simplified as shown in Equation (29). The quaternion corresponding to the translation between the two neighboring LiDAR stations is then obtained using Equation (30).

Implementation of the Proposed Solution
As illustrated in Figure 4, given the point clouds from two neighboring LiDAR stations, namely the reference station and the unregistered station, and the extracted pairs of conjugate planar features, the solution proceeds as follows: (1) based on the normal vectors of each pair of conjugate planar features, construct the matrix A and calculate the maximum eigenvalue of A and its corresponding eigenvector ṙ; (2) calculate the quaternion ṡ using Equation (26); (3) calculate the scale factor µ using Equation (25); (4) calculate the quaternion ṫ using Equation (30). Using ṙ, ṡ, and µ, a plane in three-dimensional space can be transformed from one coordinate system to another. More importantly, since each pair of conjugate planar features is extracted from the two neighboring LiDAR stations, the point clouds can be merged using ṙ, ṡ, and µ.

Experiments and Results
The proposed planar feature-based registration algorithm was implemented in Matlab. Two experiments, incorporating simulated data and real data, were designed to verify its correctness and feasibility.

Simulated Data
The first experiment was designed to verify the correctness of the proposed algorithm. Five pairs of simulated planar features were designed as shown in Figure 5; their mathematical descriptions are given in Table 2, and the pre-established spatial similarity transformation parameters are given in Table 3.

Table 3. Pre-established spatial similarity transformation parameters.

The obtained transformation parameters are shown in Table 4, together with the residuals between each pair of conjugate planar features after registration. Based on the results given in Table 4, the calculated rotation matrix and the scale factor were both consistent with the pre-established ones. The small deviation between the calculated translation vector and the pre-established one can be attributed to rounding errors in the calculation, which can largely be ignored in practical applications. The conclusion can be drawn that the proposed solution is correct and the obtained results are as expected.
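The bodies of Equations (9) and (25)-(30) are not reproduced in the extracted text, so the following sketch only spells out, with illustrative names, how an estimated similarity transform acts on points and on planes: points map as x' = µRx + t, a plane's unit normal maps as Rl, and its moment as µm + (Rl)·t. A check of this kind is what the simulated experiment performs, since planes transformed with the pre-established parameters should coincide with the quad tuples recovered after registration.

import numpy as np

def transform_points(points, R, t, mu):
    """Similarity transform of an (N, 3) point array: x' = mu * R @ x + t."""
    return mu * points @ R.T + t

def transform_plane(normal, moment, R, t, mu):
    """Corresponding transform of a plane given as (unit normal, moment):
    if points satisfy n . x = m, the transformed points satisfy n' . x' = m'
    with n' = R n and m' = mu * m + n' . t."""
    new_normal = R @ normal
    return new_normal, mu * moment + new_normal @ t

# toy check with an arbitrary similarity transform
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t, mu = np.array([0.5, -1.0, 2.0]), 1.2
n, m = np.array([0.0, 0.0, 1.0]), 3.0                      # the plane z = 3
pts = np.array([[0.0, 0.0, 3.0], [1.0, 2.0, 3.0]])          # two points on that plane
n2, m2 = transform_plane(n, m, R, t, mu)
print(np.allclose(transform_points(pts, R, t, mu) @ n2, m2))   # True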
Real Data
The second experiment was designed to verify the feasibility of the proposed algorithm. The point clouds were collected with a Riegl LMS-Z420i terrestrial laser scanner. In order to collect complete point cloud data of the building, a total of eight observation stations were set up around it. The average distance between each observation station and the building was about 100 m, and the average sampling interval was set to 4 cm. Seven pairs of conjugate planar features were extracted from the two neighboring point clouds as shown in Figure 6, and the extracted planar features are listed in Table 5.

The obtained transformation parameters and the residuals between each pair of conjugate planar features are both given in Table 6. To further verify the correctness and feasibility of the proposed solution, a unit quaternion-based iterative method [8] was employed to estimate the transformation parameters; its results are also shown in Table 6. The visual effects of the point clouds from the two neighboring LiDAR stations before and after registration are shown in Figure 7. As shown in Table 6 and Figure 7, when the obtained transformation parameters are used to register the two neighboring stations, there are no significant residuals between the pairs of conjugate planar features after registration. More importantly, there are also no significant differences between the results obtained by our method and by the unit quaternion-based iterative method [8]; the root mean square errors (RMSE) of the normal and moment differences between conjugate planar features were exactly the same. The conclusion can be drawn that the newly proposed closed-form solution to pairwise registration of LiDAR point clouds is correct and feasible.
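The RMSE figures quoted from Table 6 measure how well the normals and moments of conjugate planar features agree once one station has been transformed into the frame of the other. A minimal evaluation sketch (illustrative names; not the authors' Matlab code) is:

import numpy as np

def plane_rmse(registered_planes, reference_planes):
    """RMSE of the normal-vector and moment differences between pairs of
    conjugate planar features after registration. Each plane is a tuple
    (unit_normal, moment); the i-th entries of the two lists correspond."""
    normal_err, moment_err = [], []
    for (n_reg, m_reg), (n_ref, m_ref) in zip(registered_planes, reference_planes):
        normal_err.append(np.linalg.norm(np.asarray(n_reg) - np.asarray(n_ref)))
        moment_err.append(m_reg - m_ref)
    rmse = lambda e: float(np.sqrt(np.mean(np.square(e))))
    return rmse(normal_err), rmse(moment_err)

# e.g. the planes of the unregistered station, transformed with the estimated
# parameters, compared against their conjugates in the reference station
print(plane_rmse([(np.array([0.0, 0.0, 1.0]), 3.001)],
                 [(np.array([0.0, 0.0, 1.0]), 3.000)]))   # (0.0, ~0.001)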
Discussion
On the assumption that the normal directions of each pair of conjugate planar features are the same, the proposed quad tuple-based representation method provides a unique mathematical expression of any planar feature in three-dimensional space; the difference between two planar features can then be determined by direct comparison, which makes it convenient to propose and implement a closed-form solution to planar feature-based point cloud registration. The results of the two designed experiments were both correct, which shows that the proposed closed-form solution is capable of dealing with point cloud registration problems on the condition that sufficient corresponding planar features are provided as registration primitives.

It should be noted that any vector in three-dimensional space can also be expressed with the same modulus and the opposite direction; the prerequisite of our solution, however, is that the normal directions of each pair of conjugate planar features should be exactly the same after registration. Ensuring the proper operation of the proposed solution when this prerequisite is not fulfilled will be one of our objectives in future work.
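One common way to satisfy this prerequisite in practice is to adopt a sign convention for every extracted plane before matching; which convention the authors use is not stated, so the sketch below simply orients each normal towards the scanner position (an assumption; names are illustrative). Both (n, m) and (-n, -m) describe the same plane, so flipping the pair changes nothing geometrically.

import numpy as np

def orient_towards_scanner(normal, moment, plane_point, scanner_position):
    """Flip the (normal, moment) pair, if needed, so the normal points from the
    plane towards the scanner; conjugate planes oriented this way in both
    stations can then be compared component by component."""
    normal = np.asarray(normal, float)
    if (np.asarray(scanner_position) - np.asarray(plane_point)) @ normal < 0:
        return -normal, -moment
    return normal, moment

# the same wall (y = -5) seen from a scanner at the origin, fitted with opposite normals
print(orient_towards_scanner(np.array([0.0,  1.0, 0.0]), -5.0,
                             np.array([0.0, -5.0, 0.0]), np.zeros(3)))
print(orient_towards_scanner(np.array([0.0, -1.0, 0.0]),  5.0,
                             np.array([0.0, -5.0, 0.0]), np.zeros(3)))
# both calls return the same oriented pair: (array([0., 1., 0.]), -5.0)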
Conclusions
When point and linear features are not sufficient, planar features are a good alternative for ensuring the proper operation of a point cloud registration algorithm. Compared with point and linear features, planar features are more convenient to extract from point clouds; furthermore, the impact of random errors can be significantly reduced by least squares fitting. Based on the quad tuple-based representation of planar features in three-dimensional space, a closed-form solution to planar feature-based registration of point clouds is proposed. Two experiments were designed, in which both simulated data and real data were used to verify the correctness and effectiveness of the proposed solution. With the simulated data, the calculated registration results were consistent with the pre-established parameters; with the real data, there were no significant differences between the results obtained by our method and by the available iterative method [8], and the root mean square errors (RMSE) of the normal and moment differences between conjugate planar features were the same. The conclusion can be drawn that the newly proposed closed-form solution to pairwise registration of LiDAR point clouds is correct and feasible.

Moreover, the advantages of the proposed solution can be summarized as follows: (1) by using eigenvalue decomposition instead of linearizing the objective function, the presented solution does not require initial estimates of the unknown transformation parameters, which ensures the stability and robustness of the algorithm; (2) provided that no man-made reflectors are used, planar features are extracted directly from the point clouds by least squares fitting, so the impact of random errors is reduced significantly and the reliability of the estimated transformation parameters is higher than that of point- and linear feature-based methods; (3) in contrast to Euler angle-based and rotation matrix-based methods, using dual quaternions to represent the spatial transformation greatly reduces the number of additional constraints needed in the estimation of the similarity transformation parameters.
NEUROD2 function is dispensable for human pancreatic β cell specification Introduction The molecular programs regulating human pancreatic endocrine cell induction and fate allocation are not well deciphered. Here, we investigated the spatiotemporal expression pattern and the function of the neurogenic differentiation factor 2 (NEUROD2) during human endocrinogenesis. Methods Using Crispr-Cas9 gene editing, we generated a reporter knock-in transcription factor (TF) knock-out human inducible pluripotent stem cell (iPSC) line in which the open reading frame of both NEUROD2 alleles are replaced by a nuclear histone 2B-Venus reporter (NEUROD2nVenus/nVenus). Results We identified a transient expression of NEUROD2 mRNA and its nuclear Venus reporter activity at the stage of human endocrine progenitor formation in an iPSC differentiation model. This expression profile is similar to what was previously reported in mice, uncovering an evolutionarily conserved gene expression pattern of NEUROD2 during endocrinogenesis. In vitro differentiation of the generated homozygous NEUROD2nVenus/nVenus iPSC line towards human endocrine lineages uncovered no significant impact upon the loss of NEUROD2 on endocrine cell induction. Moreover, analysis of endocrine cell specification revealed no striking changes in the generation of insulin-producing b cells and glucagon-secreting a cells upon lack of NEUROD2. Discussion Overall, our results suggest that NEUROD2 is expendable for human b cell formation in vitro. Introduction Diabetes mellitus results from the loss or progressive dysfunction of insulin-producing pancreatic b cells that leads to hyperglycemia and severe micro-and macrovascular secondary complications.One potential strategy to treat diabetes is restoring functional b cell mass by triggering endogenous regeneration or replacing the lost b cells by islets from cadaveric donors or differentiated from human pluripotent stem cells (hPSCs) in vitro (1)(2)(3)(4)(5).The establishment of hPSC differentiation protocols into islet cells requires a thorough understanding of the mechanisms regulating endocrine lineage formation during human development (6)(7)(8).In mouse and human, endocrinogenesis is governed by differentiation of pancreatic progenitors into endocrine progenitors, which give rise to different hormoneproducing endocrine cells, including a cells (glucagon + ), b cells (insulin + ), d-cells (somatostatin + ), PP cells (pancreatic polypeptide + ) and ϵ cells (ghrelin + ) (Figure 1A).This multistep lineage segregation is coordinated by variety of signaling pathways and tightly regulated by gene regulatory networks (9-11).However, it is still not well known which genetic programs orchestrate distinct endocrine cell fate decisions in humans.Thus, the functional impact of different transcription factors (TFs) on human endocrine cell induction and specification requires further investigation. 
Neurogenin-3 (NEUROG3 or NGN3) is the master regulator of endocrinogenesis (12) and deletion of this TF results in endocrine cell agenesis (13).Yet, how this TF precisely regulates endocrine cell induction and specification is unclear.One possible scenario is that NGN3 controls the expression of several key endocrine lineage-specific TFs (14, 15).Among them are Neurod1 and Neurod2 that belong to the bHLH Neurod subfamily, which play important roles in tissue differentiation and lineage commitment.NEUROD1 and NEUROD2 share a high level of similarity in their bHLH domain, suggesting potential conserved functions in DNA binding and target gene activation (16-18).NEUROD2 function is important for neuronal fate specification, migration, and axonal navigation during embryonic development (19).Mice lacking Neurod2 exhibit growth arrest, ataxia, and massive granule cell loss in the cerebellar cortex due to reduced expression of neuronal survival factors that eventually lead to early postnatal death of the knock-out animals (20).Additionally, gene polymorphisms of Neurod2 are directly associated with neurocognitive dysfunctions, such as schizophrenia and epilepsy (21). Despite the well-studied function of Neurod2 in neurogenesis and brain development, the function of this protein in regulating tissue differentiation in other organs, including the pancreas, has remained rudimentary.During mouse pancreas development, Neurod2 is transiently expressed from E12.5 to E17.5, which corresponds to the peak of endocrinogenesis, while Neurod2 is not detectable in the adult pancreas (22).Furthermore, single cell RNA-sequencing (scRNA-seq) has identified the transient expression of Neurod2 in a subset of endocrine precursors during mouse endocrinogenesis (23).However, mice lacking Neurod2 have shown a normal islet cell composition and morphology ( 22).Yet, the spatiotemporal expression pattern of NEUROD2 during human endocrine lineage formation and the function of this gene in regulating human endocrinogenesis has remained unexplored.Here, we investigated the expression and function of NEUROD2 during human endocrinogenesis by leveraging an iPSC differentiation model in vitro.We identified the transient expression of NEUROD2 transcripts in human endocrine progenitors.By generating a NEUROD2 nVenus/nVenus reporter iPSC line, we established a transient endocrine lineage reporter and showed that NEUROD2 is not required for human endocrine cell induction and b cell specification. 
Cell sources Episomal reprogrammed HMGUi001 (24) and HMGUi001-A-8 (25) iPSC line were used.All cell lines have been confirmed to be Generation of the NEUROD2 nVenus/nVenus iPSC reporter cell line The NEUROD2 locus was targeted by homologous recombination and CRISPR/Cas9 technology using histone 2B (H2B)-Venus-3xHA-tag targeting vector.We cloned the sequence of the (H2B)-Venus-3xHA-tag into a targeting vector and used the 771 bp upstream of the NEUROD2 start codon and 967 bp downstream of the NEUROD2 stop codon as 5′ and 3′ homology arms (HA).A pair of gRNAs introducing dsDNA breaks 3 bp upstream of the start codon and downstream of the stop codon of the NEUROD2 were cloned into the pu6-sgRNA-CAG-Cas9-Venus-bpA expression vector that allowed FACS sorting (26).This vector harboring gRNAs and Cas9 as well as the targeting vector was transfected into HMGUi001-A-8 iPSCs using the standard Lipofectamine transfection protocols.Transfected cells expressing Cas9-Venus were isolated by FACS and different colonies were picked, expanded, and tested by PCR to choose the desired clones. In vitro differentiation of stem cell-derived islets (SC-islets) iPSCs were cultured on 1:30 diluted Geltrex (Invitrogen, catalog no.A1413302) in StemMACS iPS-Brew medium (Miltenyi Biotec, catalog no.130-104-368).At ~70% confluency, cultures were rinsed with 1× DPBS without Mg 2+ and Ca 2+ (Invitrogen, catalog no.14190) followed by incubation with accutasse (Gibco) for 3 min at 37°C.Single cells were rinsed with iPS-Brew, and spun down at 1200 rpm for 3 min.The resulting cell pellet was suspended in iPS-Brew medium supplemented with Y-27632 (10 mM; Sigma-Aldrich, catalog no.Y0503) and the single-cell suspension was seeded at ~1.5-2×10 5 cells per cm 2 on Geltrex-coated surfaces for maintenance.Cultures used for 3D differentiation were seeded at 4-5 x 10 6 cells per well in ultra-low attachment (ULA) plates, placed in a shaking platform at 60 rpm or 1 x 10 6 cells per ml in a stirring spinner flask (ABLE corporation) stirring at 60 rpm.Cultures were fed every day with iPS-Brew medium.3D differentiations with cells in ULA plates were started 24 h following seeding.For seeded cells in spinner flask, differentiation was started as soon as ~80% or more aggregates achieved a size of ~150 mM to ~200 mM usually 72 h after seeding. RNA isolation, cDNA preparation and qPCR analysis Total RNA was extracted from cells with the miRNeasy mini kit (Qiagen).Isolated RNA was reverse transcribed using the SuperScript Vilo cDNA and cDNA synthesis kit (Life Technologies-Thermofisher Scientific).qPCR was performed using predesigned TaqMan ™ probes (Life Technologies) and 20 ng of cDNA per reaction.Each reaction consisted of 4.5 µL cDNA in nuclease-free water, 5 µL TaqMan ™ Advanced master mix (Life Technologies) and 0.5 µL TaqMan probe ™ (Life Technologies).qPCR was performed using QuantStudio 7 Flex (Thermo Fisher Scientific).Ct-values were normalized among samples, transformed to linear expression values, normalized on reference genes and on control samples.Samples were normalized to the housekeeping genes glyceraldehyde 3-phosphate dehydrogenase (GAPDH).The list of used Taqman probes (Applied Biosystems) are provided in Tables S1. 
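The normalization described above corresponds to the widely used comparative Ct scheme; since the exact pipeline is not spelled out in the text, the following is only a minimal sketch of the common 2^-ddCt calculation, with made-up Ct values for illustration.

def fold_change_ddct(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """Relative expression by the 2^-ddCt method: the target gene is first
    normalized to GAPDH within each sample, then to the reference sample."""
    d_ct_sample = ct_target - ct_gapdh                 # dCt of the sample of interest
    d_ct_reference = ct_target_ref - ct_gapdh_ref      # dCt of the control/reference sample
    return 2.0 ** -(d_ct_sample - d_ct_reference)

# illustrative Ct values only: a transcript measured in one clone versus a control
print(fold_change_ddct(ct_target=26.1, ct_gapdh=18.0,
                       ct_target_ref=24.3, ct_gapdh_ref=18.1))   # ~0.27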
Flow cytometry Cells undergoing endocrine differentiation were dissociated to a single cell suspension with accutase for 10-20 min at 37°C.The accutase was inactivated and discarded by washing with iPSC Brew media to later spin down at 1200 rpm for 3 min.Cells pellet was washed once with PBS and fixed with 4% paraformaldehyde for 10 min.Cells were then permeabilized with donkey blocking solution (0.1% tween-20, 10% FBS, 0.1% BSA, 0.2% Triton-X100 and 3% donkey serum).Utilizing the same permeabilization solution, cells were stained with primary antibodies (Table S1) for 1 hour at room temperature or at 4°C overnight.If no conjugated antibody was utilized, the protocol continued with incubation of appropriate secondary antibodies for 30 min-1 h at room temperature (Table S1).Cells were washed with PBS three times after antibody incubation.Flow cytometry was performed using FACS-Aria III (BD Bioscience).FACS gating was determined utilizing isotype, secondary only antibody and stained hiPSCs.FACS data were analyzed using FlowJo. Confocal microscopy and imaging Cryosections rehydration started by washing 3 times with 1X PBS.For attached cell monolayer, dispersed cells at different stages were first prepared as single cell solution and plated utilizing 8 well m-Slide (ibidi), left overnight allowing attachment to the surface and form a cell monolayer.The Cryosectioned and/or cell monolayer were next permeabilized with 0.1 M glycine and 0.2% Triton X-100 in PBS for 30 min and later blocked in blocking solution (PBS, 0.1% Tween-20, 1% donkey serum, 5% FCS) for 1 h.The sections were later incubated with the primary antibody (Table S1) diluted in the same blocking solution overnight at 4°C.Afterwards, 3 times washing with 1X PBS is undertaken to later incubate with secondary antibodies (Table S1) diluted in blocking solution.Finally, after being incubated during 2-3 h with the 2 nd antibody, sections were stained for DAPI (1:500 in 1X PBS) for 30 min, rinsed and washed 3x with 1X PBS and mounted using our self-made Elvanol.Images were obtained with a Leica microscope of the type DMI 6000 using LAS AF software and were analyzed and quantified using LAS AF and ImageJ software programs. ScRNA-seq data analysis Throughout the analysis of scRNA-seq data, Python 3.9.16 was used, in conjunction with Scanpy (version 1.9.3) (https:// github.com/theislab/scanpy)(28).Quality control and normalized scRNA-seq data were obtained from GEO (reference: GSE132188).This data captures four developmental stages of pancreatic tissue (from E12.5 to E15.5) in mouse embryonic pancreatic cells (23).Two cell types, 'Fev+' and 'Ngn3 high EP', were selected and sequestered into a subset, a Principal Component Analysis (PCA) was executed on the subset.Neighbors in the Neurod2 subset were computed using the 'sc.pp.neighbor' function from Scanpy, UMAP representations for the Neurod2 subset were calculated using the sc.tl.umap function of Scanpy.The Neurod2 high subset was selected based on Neurod2 expression of the 'leiden' attribute.Cells were categorized with a 'Neurod2' gene expression of less than or equal to 1 as 'low', and the rest as 'high'. 
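A minimal Scanpy sketch of the subsetting, re-embedding and grouping steps described above might look as follows; the file name and the .obs column holding the cell-type labels are assumptions (the processed object derived from GSE132188 may use different names), and the neighbour graph is computed with sc.pp.neighbors.

import scanpy as sc

# assumed file and annotation names
adata = sc.read_h5ad("mouse_pancreas_E12.5-E15.5.h5ad")

# keep only the 'Fev+' and 'Ngn3 high EP' populations
subset = adata[adata.obs["cell_type"].isin(["Fev+", "Ngn3 high EP"])].copy()

# re-embed the subset: PCA, neighbourhood graph, UMAP
sc.pp.pca(subset)
sc.pp.neighbors(subset)
sc.tl.umap(subset)

# split cells by Neurod2 expression: <= 1 is 'low', everything else 'high'
expr = subset[:, "Neurod2"].X
expr = expr.toarray().ravel() if hasattr(expr, "toarray") else expr.ravel()
subset.obs["Neurod2_level"] = ["high" if x > 1 else "low" for x in expr]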
For assessing differential gene expression, we utilized the 'wilcoxon' method of the sc.tl.rank_genes_groups function. Differential genes were filtered using the 'sc.tl.filter_rank_genes_groups' function. Considering the importance of pseudo-batch data and pseudo-duplication in single-cell RNA-seq differential expression testing (29), we used the R package DElegate (https://github.com/cancerbits/DElegate) to strictly screen for differential genes. Overlaps between the differentially upregulated and downregulated genes and the top 1045 Neurod2 ChIP-seq target genes (30) were computed. A Venn diagram illustrating the overlaps between these three gene sets was plotted using the venn3 function (a minimal sketch of these steps is given after the first Results paragraph below).

Statistical analysis
Comparison of three or more datasets was performed using ordinary one-way analysis of variance (ANOVA) with Bonferroni's multiple comparison test. All statistics were performed using GraphPad Prism 9 software.

Results
To identify the expression pattern of NEUROD2 during human endocrine lineage formation, we used a 3D spinner flask in vitro system (Figure 1B) (27). qPCR analysis identified no expression of NEUROD2 mRNA at the pluripotency stage in iPSCs. We found a NEUROD2 transcript expression peak at pancreatic progenitor stage 4 (S4) that was reduced at the beginning of endocrine induction at S5.1. However, a second peak of NEUROD2 expression was evident at S5.4, corresponding to the peak of endocrine progenitor formation. The levels of NEUROD2 were further reduced at S5.7 and S6, the stages corresponding to the generation of stem cell-derived hormone+ islet cells (SC-islets). In support of the transient expression during endocrine induction, we found no expression of NEUROD2 mRNA in primary adult human islets and the EndoC-bH1 human b cell line (Figure 1C). In comparison, NEUROD1 was expressed at the endocrine progenitor stage and its expression was even higher in S6 SC-islets, human primary islets and the EndoC-bH1 cells (Figure 1D). Together, these data demonstrate the transient expression of NEUROD2 in a subset of human endocrine progenitors, similar to the expression pattern that was previously identified during mouse endocrinogenesis (23,31).
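Continuing the sketch from the single-cell methods above (same assumed subset object and grouping column; the ChIP-seq target list file and the significance cut-offs are illustrative assumptions), the Scanpy-side differential expression and the three-set Venn diagram could be run as:

import scanpy as sc
import matplotlib.pyplot as plt
from matplotlib_venn import venn3

# Wilcoxon rank-sum test: Neurod2-high cluster versus Neurod2-low/- cells
sc.tl.rank_genes_groups(subset, groupby="Neurod2_level", groups=["high"],
                        reference="low", method="wilcoxon")
sc.tl.filter_rank_genes_groups(subset)

de = sc.get.rank_genes_groups_df(subset, group="high")
signif = de[de["pvals_adj"] < 0.05]                        # assumed cut-off
up = set(signif.loc[signif["logfoldchanges"] > 0, "names"])
down = set(signif.loc[signif["logfoldchanges"] < 0, "names"])

# overlap with a previously published list of Neurod2 ChIP-seq target genes
chip_targets = set(open("neurod2_chipseq_targets.txt").read().split())
venn3([up, down, chip_targets],
      set_labels=("Up in Neurod2-high", "Down in Neurod2-high", "ChIP-seq targets"))
plt.show()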
The NEUROD2 gene is located in the chromosome 17q12, consisting of 2 exons from which exon 1 is described as a retained intron, and exon 2 is the protein coding region.To uncover the functional impact of NEUROD2 on human endocrinogenesis, we deleted the entire open reading frame of NEUROD2 and replaced it by a histone 2B (H2B)-Venus-3xHAtag sequence in iPSCs using Crispr/Cas9 technology (Figure 2A).To generate the fluorescently reporter iPSC cell line lacking NEUROD2, we used the heterozygous C-peptide-mCherry reporter hiPSC line (HMGUi001-A-8), which allowed us to monitor insulin production and stem cell-derived b cell (SC-b) formation throughout in vitro differentiation (25) (Figure S1A).Genomic PCR analysis confirmed the homologous recombination at the NEUROD2 locus resulting in generation of two homozygous NEUROD2 knock-out H2B-Venus reporter gene knock-in iPSC clones (C89 and C37) (Figure S1B).We further confirmed the deletion of NEUROD2 sequence and in-frame insertion of the H2B-Venus-3xHA-tag reporter using Sanger sequencing (Figure S1C).The obtained homozygous NEUROD2-KO, nuclear H2B-Venus reporter (NEUROD2 nVenus/nVenus ) iPSC clones were negative for mycoplasma and exhibited a normal karyotype (46, XX) (Figure S1D).Bright-field live imaging of the NEUROD2 nVenus/nVenus clones at the pluripotent stage revealed no phenotypic signs of differentiation and expression of H2B-Venus (Figure S1E).This was supported by immunostaining analysis indicating the expression of the pluripotency markers, SOX2 and OCT3/4 (Figure S1F).Furthermore, we tested for the multilineage differentiation potential of the NEUROD2 nVenus/nVenus iPSC clones confirming the generation of endodermal, mesodermal and ectodermal cells (Figure 2B).To assess the effects of NEUROD2 TF knock-out on pancreatic endocrine lineage differentiation, we employed a six-stage (S) 3D spinner-flask differentiation protocol (Figure 1B) (27).FACS analysis and immunostaining revealed the successful formation of definitive endoderm (DE) at S1, pancreatic progenitor (PP2) at S4 and different pancreatic endocrine cells including SC-b (C-PEP + ), SC-a (GCG + ) and SC-d cells (SST + ) at S6 (Figures 2C, D).Because NEUROD2 mRNA expression peaks at the endocrine progenitor stage, we harvested samples from the end of the endocrine induction stage at S5.4 to monitor the expression levels of NEUROD2.qPCR analysis detected no expression of NEUROD2 mRNA in the NEUROD2 nVenus/nVenus endocrine progenitors compared to the wild-type control cells (Figure 2E).Yet, all the clones expressed comparable levels of NEUROD1 mRNA (Figure 2F), indicating no NEUROD1 compensation in the absence of NEUROD2.Next, we analyzed the H2B-Venus reporter expression, which monitors NEUROD2 transcriptional activity.We found emergence of Venus + cells from the S4 until S5.7, with the peak of expression at S5.4 (Figure 2G).This result supports the expression pattern of NEUROD2 mRNA identified by qPCR analysis.Taken together, we successfully engineered NEUROD2 nVenus/nVenus reporter knock-in knock-out iPSCs lines, which show no deficit in pancreatic endocrine lineage formation. 
Next, we analyzed the NEUROD2 reporter activity in endocrine progenitors at S5.4.We found cells with bright nuclear Venus signals but expressing low or no NGN3 or NKX2-2 (Figures 3A, B; white arrowheads).In contrast, the major fraction of cells expressing high levels of NGN3 or NKX2-2 showed low or no Venus reporter activity (Figures 3A, B; blue arrowheads).A subset of cells also co-expressed medium levels of Venus with NGN3 or NKX2-2 (Figures 3A, B; yellow arrowheads).This data suggests that NEUROD2 is expressed in a subset of endocrine progenitors and its expression starts after NGN3, similar to what was reported in mouse in vivo (23).The transient expression of NEUROD2 was further confirmed by detecting Venus reporter activity only in a subset of C-PEP + cells at S5.4.A fraction of Venus + cells with bright signals expressed low levels of C-PEP (white arrowheads) and a fraction of those with high C-PEP expression exhibited low Venus signal (yellow arrowheads).However, most of the C-PEP + cells showed no NEUROD2 reporter activity (blue arrowheads) (Figure 3C).This result indicates that at least a fraction of NEUROD2 + cells eventually evolved into b cell fate.Next, we explored the impact of loss of NEUROD2 function on human endocrine lineage induction.Immunostaining and FACS analysis of S4 differentiated cells revealed a comparable number of PP2 cells co-expressing PDX1 and NKX6-1 between the control and NEUROD2 nVenus/nVenus cells (Figures 3D, E).Yet, we found a difference in the rate of PP2 cell formation between C89 and C37 clones, likely due to clone variation (Figure 3E).In support of this, we identified comparable numbers of NGN3 + progenitors between the three clones, indicating no deleterious effect in NGN3-mediated endocrine induction (Figure 3F).The clone C89 but not the C37 showed a slight increase in the numbers of NKX2-2 + cells compared to the control cells.Yet, the rate of NKX2-2 + cells between the two NEUROD2 nVenus/nVenus cells was comparable (Figure 3G).To support this data, we also performed qPCR analysis at the end of endocrine induction at S5.4, which corresponds to the highest number of Venus + reporter positive cells.This analysis revealed comparable levels of key pancreatic and endocrine regulatory TF mRNA expression for PDX1, NKX6-1, NGN3, NKX2-2, PAX4, ARX and FEV between the three clones (Figure 3H).Yet, an increased tendency but non-significant in the expression levels of the d-cell TF HHEX (32) was found in the NEUROD2 nVenus/nVenus cells compared to the control (Figure 3H).These results indicate that NEUROD2 is not required for human endocrine lineage induction. 
To discover the function of NEUROD2 on endocrine lineage differentiation, we analyzed the formation of hormone + cells at S5.7.We used immunofluorescence and stained the SC-islets for the panendocrine marker, CHGA, indicating a comparable number of cells expressing this protein between the three cell lines (Figure 4A).Furthermore, qPCR analysis of S5.7 SC-islets indicated comparable levels of CHGA mRNA between the three clones (Figure 4B).Next, we measured the rate of generation of SC-a and SC-b cells using FACS analysis.We found comparable numbers of C-PEP + , GCG + and C-PEP + -GCG + cells between the control and C89 clones.However, C37 cells exhibited a slightly reduced number of C-PEP + cells and increased number of polyhormonal C-PEP + -GCG + cells but no difference in the number of GCG + cells compared to the control and C89 clones (Figures 4C-E).Immunostaining analysis indicated no apparent difference in the SC-a and SC-b cell composition between the three clones (Figure 4F).To further support these data, we performed qPCR analysis of S5.7 cells for endocrine cell markers.We found comparable mRNA expression levels of INS, GCG, PAX4 and ARX between the three clones (Figures 4G-J).We also extended the qPCR analysis to the S6 SC-islets and found no changes in the expression levels of INS and GCG between the three clones (Figures 4K, L).This analysis demonstrates that NEUROD2 function is not required for SC-a and SC-b cell specification.We next explored the molecular signatures that are likely associated with NEUROD2 function during endocrine lineage formation.Because there are no available human datasets highly enriched for endocrine progenitors expressing NEUROD2, we leverage a scRNA-seq of mouse endocrinogenesis in vivo, which contains a high number of endocrine progenitors (Figure 5A).We detected expression of Neurod2 mainly in endocrine progenitors and in a fraction of Fev + cells (Figure 5B).We then selected all the cells at the time of Neurod2 expression along the pseudotime (circle in Figure 5B) and reclustered them into two populations expressing high levels of Neurod2 (Neurod2 high ) and low or no levels of this TF (Neurod2 low/-) (Figures 5C, D).Differential gene expression analysis revealed 365 upregulated and 532 downregulated genes when comparing the Neurod2 high cluster to the Neurod2 low/-cells (Table S2).We performed pathway enrichment analysis of the upregulated genes using the Metascape annotation and analysis resource (33).Most of the upregulated genes were involved in neuronal cell development and differentiation, cell projection and morphogenesis, hormone secretion, exocytosis and cytoskeleton organization (Figure 5E).Among these, we found upregulation of several genes associated with axonal guidance and growth including Plxna3, Efna3, Fez1, Dpysl5, Emb, Arc, Flrt1, Dab1, Olfm1 and Shroom3.Furthermore, several genes involved in cell migration and cytoskeletal remodeling including Kit, Pgf, Slit1, Sdc3, Map1b, Pak3, Arhgef2, Fermt2, Mapre3, Cyfip2 and Scin were upregulated in Neurod2 high cells (Table S2).This analysis demonstrates the expression of Neurod2 at the stage, in which significant changes in cell dynamics occur, and it likely corresponds to the endocrine cell egression stage during islet cell morphogenesis. 
To investigate the possible interlink between Neurod2 function and the highly expressed genes in Neurod2 high cells we used a previously reported ChIP-Seq (chromatin-immunoprecipitation and sequencing) dataset of Neurod2 potential target genes in lineages of cortical projection neurons during neurogenesis (30).Comparing the upregulated genes in Neurod2 high cells with the top 1043 Neurod2 potential targets resulted in identification of 49 overlapped genes (Figure 5F; Table S2).Interestingly, several of these genes including Kit, Slit1, Sdc3, Map1b, Dab1, Olfm1, Dpysl5 and Flrt1 are involved in morphogenetic processes (Figure 5G).Among them, Slit1 belongs to Slit family of secreted extracellular matrix proteins, which have been recently reported to regulate endocrine cell morphogenesis during mouse endocrinogenesis (34,35).Therefore, this analysis predicts a possible function of Neurod2 in regulating programs that coordinate endocrine cell morphogenesis. Discussion The gene regulatory networks governing human pancreatic endocrine lineage during embryonic development are still not well explored.Here, we analyzed the spatiotemporal expression pattern of NEUROD2 and the functional impact of its deletion during human endocrinogenesis by combining Crispr/Cas9 technology and iPSC differentiation system in vitro.We successfully generated an iPSC reporter line in which the coding sequence of NEUROD2 was replaced by a H2B-Venus sequence that allowed monitoring the NEUROD2 transcriptional activity, while deleting the NEUROD2 gene.The obtained homozygous clones were pluripotent and successfully differentiated toward all three embryonic lineages.Because NEUROD2 is expressed in many other cell types including neurons (20), the generated line is a valuable tool to study the dynamic expression pattern and functional role of NEUROD2 in the development of other human cell types, such as during neurogenesis and intestinal endocrinogenesis.Furthermore, the expression of H2B-Venus reporter in the NEUROD2 nVenus/ nVenus cells enables having a short lineage tracing tool and allows the specific isolation of NEUROD2-expressing cells for further analysis such a gene profiling and proteomics.Our data revealed no significant impact of loss of NEUROD2 on endocrine cells induction.This finding was not surprising as the expression of NEUROD2 initiates after NGN3 induction (23).Additionally, the levels of NEUROD2 has been reduced in mice lacking Nng3 (14), further suggesting the function of NEUROD2 downstream of Ngn3.Our analysis also did not show striking changes in the rate of formation of SC-a and SC-b cells as the two major types of hormone-producing cells in the pancreas.This finding is aligned well with the previous study in which lack of NEUROD2 did not show significant impact on mouse b cell differentiation (31).However, our previous scRNA-seq analysis predicted a potential relationship between endocrine precursors expressing Neurod2 with a b cell fate during mouse endocrinogenesis (23).Therefore, it is possible that the NEUROD2 function is important to regulate only a subset of b cell-specific programs during development and it is not crucial for their overall differentiation.Additionally, the possible functional impact of NEUROD2 loss on b cell formation might have been masked by the function on NEUROD1.Although we did not find increased NEUROD1 expression upon lack of NEUROD2, the high expression levels of NEUROD1 might be sufficient to overcome the possible phenotype resulting from NEUROD2 deletion.In the 
future, deep gene-regulatory network studies including single-cell transcriptomics and epigenomics (36,37) combined with NEUROD2 target gene profiling in human endocrine lineage, as has been reported for NGN3 (15), may reveal more subtle phenotypes and functions of NEUROD2 in flow sorted cells during human b cell development.Moreover, due to the increased expression levels of genes associated with hormone secretion and exocytosis in Neurod2 high cells, the possible impacts of NEUROD2 on b cell maturation and function still needs to be investigated. Previous studies have shown the transient and restricted expression of Neurod2 in a subset of murine endocrine progenitors (23,31).Similarly, our data also indicated an increased expression of NEUROD2 mRNA at the peak of NGN3 + endocrine progenitor formation during human endocrinogenesis.This evolutionarily conserved transient expression pattern suggests that NEUROD2 likely functions only for a short time to regulate molecular programs during endocrinogenesis.Importantly, this time window corresponds to the major morphogenetic events initiated by Ngn3 activity resulting in endocrine cell egression from the pancreatic epithelium to form proto-islets (38, 39).In line with this notion, analysis of mouse scRNA-seq data revealed a positive correlation between Neurod2 expression and expression of several genes involved in endocrine cell dynamics such as those regulating cell polarity, migration and cytoskeletal remodeling.Interestingly, our analysis also predicted possible regulatory function of Neurod2 in the expression of some of these genes.This finding suggests that NEUROD2 might be partially involved in orchestrating endocrine cell morphogenetic programs.Future studies using 3D organoids combined with single cell live imaging (40)(41)(42) should address whether NEUROD2 impacts human endocrine cell egression and islet cell morphogenesis.Interestingly, we also found upregulation of genes associated with axonal guidance and growth in Neurod2 high cells.Considering NEUROD2's role in coordinating synaptic innervation, its impact on neuronal excitability, and synaptic function (43,44), it prompts the question of whether this protein plays a role in the establishment of pancreatic islet innervation.In summary, our data provides the expression pattern analysis of NEUROD2 during human pancreas development and uncovers the dispensable function of this TF in regulating SC-a and SC-b cell differentiation in humans.The current differentiation protocols are designed to generate mainly SC-b cells.Therefore, future analysis of the generated clones using protocols to optimally generate other endocrine cells including SC-d and SC-ϵ cells might precisely reveal whether NEUROD2 function regulates the proper segregation of other endocrine cell types in humans. 1 NEUROD2 FIGURE 1 NEUROD2 mRNA is transiently expressed during human endocrinogenesis.(A) Schematic picture of stepwise pancreatic endocrine lineage formation.(B) Schematic representation of human in-vitro endocrine differentiation protocol.Schemes were created with BioRender.com.(C, D) qPCR analysis of NEUROD2 and NEUROD1 expression at different stages of in-vitro endocrine differentiation as well as in adult primary human islets and EndoC-bH1 human b cell line. 
4 NEUROD2 FIGURE 4 NEUROD2 function is dispensable for SC-a and SC-b cell differentiation.(A) Representative confocal pictures showing the C-PEP + and CHGA + cells at S5.7.Scale bar, 50 µm.(B) qPCR analysis of CHGA at S5.7.(C) FACS quantification of C-PEP + cells relative to total cells (upper graph) and total endocrine cells (lower graph) at S5.7.(D) FACS quantification of GCG + cells relative to total cells (upper graph) and total endocrine cells (lower graph) at S5.7.(E) FACS quantification of C-PEP + -GCG + cells relative to total cells (upper graph) and total endocrine cells (lower graph) at S5.7.(F) Representative confocal pictures displaying the C-PEP + and GCG + cells at S5.7.Scale bar, 50 µm.(G-J) qPCR analysis of INS, GCG, PAX4 and ARX at S5.7.(K, L) qPCR analysis of INS and GCG at S6.All statistics have been done using one-way ANOVA.Data are represented as mean ± SD. 5 FIGURE 5 Profiling of Neurod2-expressing cells during endocrinogenesis in vivo.(A) UMAP plot of single cells from embryonic (E12.5-E15.5)pancreatic cells from Bastidas-Ponce et al., 2019.(B) UMAP plot displaying the distribution of cells considered as Neurod2 + cells.(C) UMAP plot showing the different expression levels of Neurod2.(D) UMAP plot indicating two clusters based on Neurod2 expression levels (Neurod2 high and Neurod2 low/-).(E) Pathway enrichment analysis of the 365 upregulated genes in Neurod2 high cluster compared to Neurod2 low/-cells.Selected terms are presented.The whole list of the pathway enrichment analysis is provided in Table S2.(F) Venn diagram showing the overlapping between the upregulated and downregulated genes in Neurod2 high cluster with the Neurod2 potential target genes from the Bayam et al., 2015.(G) The list of 49 Neurod2 potential target genes upregulated in Neurod2 high cells.
Oriented Insertion of ESR-Containing Hybrid Proteins in Proteoliposomes Microbial rhodopsins comprise a diverse family of retinal-containing membrane proteins that convert absorbed light energy to transmembrane ion transport or sensory signals. Incorporation of these proteins in proteoliposomes allows their properties to be studied in a native-like environment; however, unidirectional protein orientation in the artificial membranes is rarely observed. We aimed to obtain proteoliposomes with unidirectional orientation using a proton-pumping retinal protein from Exiguobacterium sibiricum, ESR, as a model. Three ESR hybrids with soluble protein domains (mCherry or thioredoxin at the C-terminus and Caf1M chaperone at the N-terminus) were obtained and characterized. The photocycle of the hybrid proteins incorporated in proteoliposomes demonstrated a higher pKa of the M state accumulation compared to that of the wild-type ESR. Large negative electrogenic phases and an increase in the relative amplitude of kinetic components in the microsecond time range in the kinetics of membrane potential generation of ESR-Cherry and ESR-Trx indicate a decrease in the efficiency of transmembrane proton transport. On the contrary, Caf-ESR demonstrates a native-like kinetics of membrane potential generation and the corresponding electrogenic stages. Our experiments show that the hybrid with Caf1M promotes the unidirectional orientation of ESR in proteoliposomes. Introduction Microbial rhodopsins belong to an important class of membrane proteins that perform the transformation of absorbed light energy to ion transport and sensory functions [1][2][3]. Bacteriorhodopsin (BR) from Halobacterium salinarum and green-absorbing proteorhodopsin (PR) from uncultured marine bacterium are the most extensively studied representatives of this family [4,5]. The development of metagenomic approaches enabled the discovery of multiple new retinal proteins, which become the objects of extensive functional and structural studies [3,6,7]. To perform profound characterization of a membrane protein, researchers need to transfer it into a membrane-like environment that could be represented by detergent micelles, bicelles, liposomes, or nanodiscs [8][9][10][11]. In contrast to micelles, liposomes provide the closed circular lipid bilayer and, therefore, enable to study the transport properties of retinal proteins. As was demonstrated by numerous studies, the functional characteristics of microbial rhodopsins in proteoliposomes significantly depend on the lipid characteristics (their charge, length, etc.) and protein chain orientation [12,13]. For example, BR and PR possess similar orientation in the cells with their N-terminus oriented towards the medium Recombinant Fusion Protein Construction and Expression The amino acid sequence and three-dimensional structure of ESR are homologous to those of PR; moreover, similar to PR, ESR has a negatively charged N-terminus and a positively charged C-terminus. The direction of proton transport in proteoliposomes is similar for both proteins, at least at neutral pH values. Therefore, we decided to exploit the utility of the fusion protein approach developed for PR [30] to ensure the uniform orientation of ESR in the bilayer. In this work, we have used three highly soluble proteins as fusion partners. As C-terminal fusions, a fluorescent protein mCherry and E. coli thioredoxin (Trx) were used ( Figure 1). 
The molecular weight of mCherry is more than two times larger than that of Trx (Table 1); as a result, the effect of the fusion partner size could also be assessed. To obtain an N-terminal fusion with a membrane protein of N_out-C_in configuration, the fusion partner should be translocated to the periplasmic space. Previously, a hybrid of PR with mCherry was constructed using the Skp signal sequence, which provided Sec-dependent translocation of the N-terminal soluble domain [30]. However, the yield of the obtained protein mCherry-PR was relatively low (about 0.5 mg/g of cell pellet). In our case, expression of the hybrid protein mCherry-ESR was not detected in E. coli cell membranes with the use of protein electrophoresis and Western blot with anti-His antibodies (Figure 1A,B, lane 4). We presumed that fusion with a secreted bacterial protein would be beneficial to increase the synthesis level of the hybrid protein. With this aim, we selected the highly soluble protein Caf1M from Y. pestis, which belongs to the molecular chaperone superfamily and is abundantly expressed in the E. coli periplasm [41]. The gene coding for the Caf1M precursor with its own signal sequence was cloned in frame with the ESR gene, resulting in an N-terminal fusion with the target protein (Figure 1C). In all constructs, the fusion partners were separated by the flexible linker (GSGSGGGGS). Fusion proteins were expressed under the control of the T7lac promoter, as previously established for the wild-type ESR [31].
Similarly to the wild-type ESR, the hybrids were detected in the fraction of the induced C41(DE3) cells obtained after high-speed centrifugation of the lysate, which confirmed their successful incorporation into the membrane (Figure 1A,B). The hybrid proteins were solubilized in DDM micelles and purified by Ni-affinity chromatography owing to the presence of the C-terminal hexahistidine tag. The yield of the purified proteins was 2.4 mg for ESR-Cherry, 3.2 mg for ESR-Trx, and 1.2 mg for Caf-ESR per 1 g of wet biomass.

Characterization of the Fusion Proteins in Micelles
Purified fusion proteins in micelles of the non-ionic detergent DDM were characterized by absorption spectroscopy and by studies of flash-induced kinetics of absorption changes at selected wavelengths (flash photolysis). At pH 7.0, the absorption maxima of the ESR-Trx and Caf-ESR hybrids were identical to that of the wild-type ESR (528 nm, Figure 1D). In the spectrum of the ESR-Cherry hybrid, two characteristic maxima were observed, a major peak at 541 nm and a smaller one at 587 nm, presumably resulting from a convolution of the ESR and mCherry curves. Previously, we demonstrated that ESR undergoes a photocycle that includes several intermediates with different absorption maxima: ESR_532 → K → L_530 → M1_400 ↔ M2_400 ↔ N1_580 ↔ N2/O_540 → ESR [31,32,42]. The kinetics of light-induced absorption changes in the wild-type ESR and the hybrid proteins were examined at four characteristic wavelengths, 410, 510, 550, and 590 nm (Figure 2). In all proteins, the decay of the K intermediate and the formation of the L intermediate occur on the microsecond time scale and are accompanied by a decrease of absorption at 590 nm, 550 nm, and 510 nm. At pH 7.0, almost no M intermediate was observed in the photocycle of any of the studied proteins (data not shown), as was observed earlier for the wild-type ESR [31]. It was shown that the pKa of M accumulation in ESR depends on the environment: it is lower in lipids and lipid-like detergents than in DDM, which results in large M accumulation at neutral pH [34]. At pH 9.0, the formation of the M intermediate is reflected in an increase in absorption at 410 nm and contains three components with τ1 ~4-9 µs, τ2 ~80-180 µs, and τ3 ~2-3 ms (Table 2). Noteworthy, the contribution of the slowest, millisecond component of the M rise increases in the hybrid proteins, up to 56% in ESR-Trx and Caf-ESR and to 75% in ESR-Cherry, in comparison with 33% in the wild-type ESR. The decay of M involves two components, 10-12 ms (~70%) and 293-322 ms (~30%), and results in the production of red-shifted intermediates that decay with a time constant of 60-83 ms.

Overall, the photocycle characteristics of the obtained fusion proteins in DDM micelles were similar to those of the wild-type ESR. To assess the efficiency of incorporation of the hybrid proteins into the lipid bilayer, their orientation, and their functionality, we studied the kinetics of light-induced changes of the transmembrane potential difference (ΔΨ) for proteoliposomes with the hybrid proteins attached to a collodion film, along with the light-induced absorption changes at selected wavelengths, as described below.
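The kinetic components quoted above (and in Tables S1-S4 below) come from decomposing the flash-induced absorbance transients into sums of exponentials. The fitting software actually used is not named in the text, so the following is only a generic sketch with synthetic data; the time constants are merely chosen near the reported ranges.

import numpy as np
from scipy.optimize import curve_fit

def three_exp(t, a1, tau1, a2, tau2, a3, tau3, offset):
    """Sum of three exponential components plus an offset, e.g. for the rise
    of the M intermediate monitored at 410 nm."""
    return (a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)
            + a3 * np.exp(-t / tau3) + offset)

# synthetic transient with time constants near ~5 us, ~100 us and ~2.5 ms
t = np.logspace(-6.5, -1, 400)
true = (-0.010, 5e-6, -0.020, 1e-4, -0.030, 2.5e-3, 0.060)
dA = three_exp(t, *true) + 5e-4 * np.random.default_rng(1).standard_normal(t.size)

p0 = (-0.01, 1e-5, -0.01, 2e-4, -0.01, 1e-3, 0.05)      # rough initial guesses
popt, _ = curve_fit(three_exp, t, dA, p0=p0, maxfev=20000)
amps, taus = popt[0:6:2], popt[1:6:2]
print(taus)                                   # fitted time constants
print(np.abs(amps) / np.abs(amps).sum())      # relative contributions (cf. the 56%/75% figures)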
Photocycle of the Hybrid Proteins in Proteoliposomes
During the first 10 µs of the photocycle, the decay of a red-shifted K-like intermediate is accompanied by a decrease of absorption at 590 nm in the wild-type ESR and the fusion proteins, with τ ~3-5 µs at pH 7.5 (Figure 3). In the proteoliposomes reconstituted with the wild-type ESR, the formation of M was clearly observed as an increase of absorption at 410 nm with time constants of ~3, ~14, and ~82 µs (Table S1). Contrary to that, the amount of M was very small in all hybrid proteins at neutral pH (Figure 3), indicating that the pKa of M accumulation is increased in the hybrids, presumably as the result of a higher pKa of the Schiff base in M. It should be noted that only a small fraction of K converts rapidly to M in the hybrid proteins incorporated in proteoliposomes at neutral pH; the larger part decays much more slowly, with time constants on the order of 10-20 ms (Figure 3). The amplitude of the absorption changes at 410 nm was largest in the ESR-Trx hybrid, in which the formation of the M intermediate proceeds via two kinetic components with time constants of ~5 and ~130 µs (Table S3). Similar to the wild type, a fast phase of M decay with τ ~0.4 ms and a slow phase with τ ~4 ms were observed in this hybrid. In both proteins, a decrease of absorption at 410 nm on this time scale is accompanied by an increase at 510, 550, and 590 nm, indicating accumulation of the N1 (maximal changes at 550 nm) and N2/O (maximal changes at 590 nm) states [32]. The decay of the N2/O intermediates and the return to the initial state in the hybrid proteins proceed in one kinetic component with τ ~20-23 ms (Tables S2-S4). This value is close to that of the wild type (21.6 ms, Table S1).
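Why little M is observed even though the pigment clearly cycles can be rationalized with a toy kinetic scheme: if M decays roughly as fast as it forms, its population never builds up. The sketch below integrates a simple irreversible K→M→N→ground chain; the rate constants are illustrative choices, not fitted ESR parameters, and the real scheme contains more states and back reactions.

```python
# Illustrative only: a linear sequential scheme K -> M -> N -> ground state with
# first-order rate constants, integrated to show how the peak M population depends
# on the ratio of its formation and decay rates.
import numpy as np
from scipy.integrate import solve_ivp

def photocycle(t, y, k_km, k_mn, k_ng):
    K, M, N = y
    return [-k_km * K,
            k_km * K - k_mn * M,
            k_mn * M - k_ng * N]

def peak_M(k_km, k_mn, k_ng=1 / 20e-3):
    sol = solve_ivp(photocycle, (0.0, 0.2), [1.0, 0.0, 0.0],
                    args=(k_km, k_mn, k_ng), max_step=1e-5)
    return sol.y[1].max()

# Wild-type-like: M forms in ~50 us and decays in ~3 ms -> M accumulates strongly.
# Hybrid-like: M forms and decays on a comparable time scale -> little M is seen.
print("M_max, fast rise / slow decay :", round(peak_M(1 / 50e-6, 1 / 3e-3), 2))
print("M_max, rise ~ decay           :", round(peak_M(1 / 1e-3, 1 / 1e-3), 2))
```

With a 50 µs rise and a 3 ms decay the peak M population is close to 1, whereas with comparable rise and decay times it stays below ~0.4, which is the qualitative situation inferred here for the hybrids at neutral pH.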
In the ESR-Cherry hybrid, the corresponding absorption changes at 410 nm are about four times smaller than in ESR-Trx and include kinetic components with time constants of ~4 and ~90 µs for M formation and 3.4 ms for M decay (Table S2). In general, the photocycle of Caf-ESR at pH 7.5 is similar to those of the other hybrid proteins (Table S4). The fast kinetic components with τ ~3 and ~14 µs coupled to the formation of the M state in the wild-type ESR were not detected in Caf-ESR; the M rise proceeds with a single time constant of 54 µs, and the M decay has τ ~3 ms. Remarkably, the photocycle transitions in Caf-ESR demonstrate better consistency with the kinetic components of the electrogenic response (see below). At the same time, it should be noted that direct electrometry provides a much better signal-to-noise ratio; that is, it is more sensitive than spectral measurements. It is worth mentioning that the kinetics of light-induced absorption changes of the hybrid proteins in proteoliposomes at pH 7.5 resemble those of the DDM-solubilized wild-type ESR measured at the same pH value [32]. Under those conditions, the predominance of the long-lived red-shifted K-like species measured at 590 nm and a negligible amount of M were the characteristic features of the photocycle. This was attributed to the influence of the protonated state of His57, which closely interacts with the proton acceptor from the Schiff base, Asp85, on its pKa in M and on the accumulation of the latter [32].

Electrogenic Response of the Fusion Proteins Incorporated into Proteoliposomes
Studies of the kinetics of membrane potential generation are of great value for the qualitative and, in some cases, quantitative assessment of the efficiency of charge translocation by proton pumps and of their orientation in proteoliposomes. For example, a decrease in the total relative amplitude of the millisecond phases points to a reduced efficiency of proton pumping due to reverse reactions [36], while a rapid discharge of the liposomes indicates an increase in their permeability to protons. A decrease in the total amplitude of the photoelectric response and the appearance of negative electrogenic components indicate misorientation of the proteins in the proteoliposomes.
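In the following subsections the photoelectric response is summarized as a set of signed exponential phases, each with a time constant and a relative contribution to the total amplitude, and later the fraction of misoriented protein is estimated from the balance of negative and positive amplitudes. The snippet below shows one way such numbers can be tabulated once the phases have been fitted. The amplitudes and time constants are invented placeholders, and the misorientation formula is one plausible reading of the ratio described in the Discussion, not necessarily the authors' exact expression.

```python
# Sketch: tabulate signed electrogenic phases and their relative contributions,
# then form a crude bound on the misoriented fraction. Values are placeholders.
import numpy as np

phases = [            # (amplitude in arbitrary units, time constant in seconds)
    (+0.02, 3e-6),    # microsecond phases: Schiff-base deprotonation (M formation)
    (+0.06, 8e-5),
    (+0.70, 1.5e-3),  # main millisecond phase: Schiff-base reprotonation (M -> N/O)
    (-0.15, 7e-3),    # a negative phase, e.g. from oppositely oriented molecules
    (+0.25, 1.8e-2),  # proton release to the bulk (N2/O -> ground state)
]

amps = np.array([a for a, _ in phases])
total = np.abs(amps).sum()
for a, tau in phases:
    print(f"tau = {tau*1e3:7.3f} ms  sign = {'+' if a > 0 else '-'}"
          f"  contribution = {100*abs(a)/total:5.1f} %")

# One plausible reading of "ratio of the sums of negative and positive amplitudes":
neg = -amps[amps < 0].sum()
pos = amps[amps > 0].sum()
print(f"estimated misoriented fraction <= {100*neg/(neg + pos):.0f} %")
```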
The kinetics of light-induced changes of the transmembrane potential difference (ΔΨ) for ESR-containing proteoliposomes attached to the phospholipid-impregnated collodion film were examined for the wild type and the hybrid proteins, along with the light-induced absorption changes of the proteoliposomes at selected wavelengths. Figure 5 shows the kinetics of ΔΨ generation in the wild-type ESR and the hybrid proteins at pH 7.5 and 6.5 on piecewise linear and logarithmic time scales. The curves with the positive and negative phases of the response marked are provided in the Supporting Material (Figures S1-S3). The corresponding absorption changes at pH 7.5 are shown in Figure 3. After the flash, all three hybrid proteins produced a photoelectric response corresponding to proton transfer from the inside of the liposomes to the bulk, similar to the wild-type ESR [35]. However, the amplitude of ΔΨ generated by the proteoliposomes with ESR-Cherry and ESR-Trx was on average about two times smaller than that of the proteoliposomes with Caf-ESR under the same conditions (Figure 5). Moreover, the number and sign of the components and the rate constants of the electrogenic response exhibited substantial differences between these proteins, as discussed below.

ESR-Cherry
Due to the instability of the proteoliposomes containing ESR-Cherry at pH 6.5, the photoelectric response of this hybrid was measured only at pH 7.5. The maximum amplitude is more than two-fold lower in this hybrid than in the wild type, and the relative contributions of the phases are strongly altered. In response to the flash, the microsecond components of ΔΨ generation are resolved, which correspond to the electrogenic proton transfer through the proteoliposome membrane (~2.7 and ~79 µs; Table S2 and Figure 5). Similarly to the wild-type ESR, these phases are associated with deprotonation of the Schiff base (M-state formation). The overall contribution of these components comprises 11.3% of the total amplitude, which is ~2.3 times greater than in the wild type (5%). The relative increase of the amplitude of the electrogenic phase associated with proton transfer from the Schiff base to the primary acceptor Asp85 and the formation of the M state could be explained by assuming a decreased efficiency of transmembrane proton transport by the hybrid protein incorporated in proteoliposomes, in comparison with the wild-type ESR, in the stages of the photocycle that involve M decay. In that case, the observed effect is a consequence of a decrease in the amplitudes of the subsequent electrogenic stages associated with reprotonation of the Schiff base in the catalytic cycle. The electrogenic components of M decay in ESR-Cherry with τ ~0.4, 1.55, and ~7.2 ms reflect the electrogenic proton transfer reactions in the M→N(O)→ESR transitions, similarly to the wild-type ESR [35].
The time constants of these components demonstrate significant differences from the corresponding components of the kinetics of light-induced absorption changes (3.4 and 23.3 ms, Table S2). Remarkably, the direction (sign) of membrane potential generation for two of these components differs from that in the wild-type ESR. In contrast to the 0.6 ms electrogenic phase in the wild type, the electrogenic phase with τ ~0.4 ms in ESR-Cherry has a negative sign, reflecting a decrease in membrane potential generation. In contrast to the electrogenic phase with τ ~0.4 ms, the next electrogenic step, with τ ~1.55 ms, has the same direction as the microsecond electrogenic components associated with the formation of the M state and produces the main contribution (>88%) to positive electrogenesis in this hybrid. It is close in time to the electrogenic phases (0.6 ms, 3.4 ms) in the wild type that are associated with the M→N(O) transition and presumably corresponds to Schiff base reprotonation in ESR-Cherry during this transition, in parallel with the 3.4 ms optical phase. Similarly to the ~0.4 ms component, the 7 ms electrogenic phase in ESR-Cherry has a negative sign, reflecting a decrease of the membrane potential; the corresponding absorption changes are presumably too small to be resolved. It should be mentioned that the electrogenic phase with τ ~18 ms, which in the wild type corresponds to proton release at the extracellular surface in the N2/O→ESR transition, is absent in the case of the hybrid protein. The presence of negative phases in the kinetics of ΔΨ generation, associated with a reverse proton transfer to the Schiff base, was revealed previously in the K96A mutant of ESR [36]. In ESR-Cherry, the internal proton donor for the Schiff base, Lys96, is present, which should largely prevent reprotonation of the Schiff base from the extracellular side; however, similar reverse reactions cannot be totally excluded. The more likely explanation for the 0.4 ms negative phase is a superposition of the kinetics from oppositely oriented ESR molecules in the proteoliposomes, that is, misorientation. At first sight, the presence of protein species with opposite orientations should lead to a decrease of the amplitude of the electrogenic phases, but not to a change in the ratio of the amplitudes of the microsecond and millisecond phases. However, the different environments of the oppositely oriented molecules, which could result in altered kinetics, should be taken into account. An additional explanation assumes that the proteoliposomes with this hybrid are intrinsically leaky, causing a fast passive reverse flow of charges with a time constant of about 7 ms that prevents the accumulation of potential on the millisecond time scale. The poor stability of the samples at pH 6.5 could also be associated with such leakiness and indirectly supports this presumption. Previously, it was shown that the appearance of faster membrane discharge components indicates an increase in the permeability of liposomes for protons due to the addition of uncouplers or channel-promoting agents [43].

ESR-Trx
Similarly to ESR-Cherry, studies of the electrogenic response of the hybrid protein ESR-Trx at pH 7.5 revealed an increase in the relative amplitude of the electrogenic phases that correspond to M-state formation in comparison with the wild-type ESR. However, their contribution was lower than that in ESR-Cherry (6.7 vs. 11.3%, Table S3).
The main contribution to the positive electrogenesis is provided by the electrogenic phase with τ~1.4 ms, which corresponds to the M→N/O transition in this hybrid. The presence of the negative electrogenic components in the millisecond timescale that correspond to M decay and recovery of the initial state was also revealed in ESR-Trx with time constants that were 2-4 times slower than in ESR-Cherry (0.9 vs. 0.4 and 29 vs. 7 ms). Similarly to the ESR-Cherry, the positive electrogenic phase corresponding to the recovery of the initial state of the ESR and proton release at the extracellular surface of the protein was not detected in this hybrid protein. Instead, the phase with a negative sign is resolved with a similar time constant (29 ms). The relative contribution of this component is about two times smaller than that of the corresponding phase in ESR-Cherry (24 vs. 51%). At pH 6.5, fast electrogenic components in the microsecond time scale (~4 and 30 µs), which together with slower microsecond electrogenic phases reflect the proton transfer from the Schiff base to Asp85 (formation of the M state) in the wild-type ESR, are not resolved in ESR-Trx (Table S5). The amplitude of the negative electrogenic phase with τ~0.9 ms increases significantly, while the contribution of the positive phase with τ~2.5 ms presumably corresponding to the electrogenic proton transfer from the bulk to the Schiff base is markedly decreased. A very slow positive electrogenic phase (τ~200 ms) appears after that. Taken together, the features of the kinetics of ∆Ψ generation in ESR-Trx hybrid protein at pH 7.5 and even more so at pH 6.5 point to the considerable distortion of the photoelectric response, which is presumably caused by superposition of the signals from oppositely oriented protein molecules in the proteoliposomes and by reverse reactions of the photocycle. Based on the ratio of the sum of the amplitudes of the negative and positive electrogenic phases, we can estimate that the fraction of misorientation could reach up to~25%. The appearance of the 200 ms phase is most probably a result of the accelerated passive leak through the membrane of the proteoliposomes containing ESR-Trx. Caf-ESR In the kinetics of ∆Ψ generation in proteoliposomes containing the Caf-ESR hybrid protein at pH 7.5, the relative amplitude of the microsecond electrogenic components corresponding to the formation of the M state is similar to that in the wild type and even smaller (~3% instead of~5%, Table S4, [35]). The same tendency is observed in the comparison of the photoelectric response of the hybrid protein with that of the wild type at pH 6.5 (1.2% instead of 3.2%, Table S5). In contrast to other hybrid proteins, the relatively low contribution of the microsecond phases in Caf-ESR demonstrates the native-like kinetics of subsequent electrogenic stages and the high efficiency of the proton transport in this protein. It should be noted that the photoelectric response of the Caf-ESR in the microsecond scale includes a kinetic component with a time constant~3 µs, which is presumably associated with the decay of the K state coupled with a decrease in absorption at 590 nm. Its relative amplitude is close to that of the wild type (~1.5%). 
In BR, the process of the L-state formation from the K state corresponds to charge transfer over a distance of 4.5 Å [44], which correlates with the amplitude of this component in the wild-type ESR, taking into account that in ESR the formation of K and the transition of K to L possess close characteristic time constants [35]. Therefore, the decreased yield of the microsecond electrogenic components in Caf-ESR is likely explained by the slow formation of the M intermediate, as discussed below. In the microsecond time range, a single kinetic component with a time constant of 60 µs reflects deprotonation of the Schiff base and proton transfer to the acceptor residue Asp85. This constant reasonably corresponds to the ~54 µs component of the photocycle (Table S4). In the wild-type ESR, two kinetic components in this time range correspond to the formation of M, with τ ~24 and ~100 µs, with a total amplitude ~2.3 times greater than that of the ~60 µs phase. These data point to a decreased rate of M formation in the hybrid (Figure 5). Remarkably, the amplitude of the corresponding absorption changes at 410 nm (with τ ~54 µs) in the hybrid is significantly smaller than the sum of the amplitudes of the 24 and 100 µs kinetic components in the photocycle of the wild-type ESR (Tables S1 and S4). In the wild-type ESR, a decrease in absorption at 590 nm is observed in this time interval, while in the hybrid there is an increase in absorption, which probably corresponds to a small production of the N state. We assume that this component in the Caf-ESR hybrid includes part of the electrogenic process caused by the transition of M to N (reprotonation of the Schiff base). Apparently, the transition between early and late red-shifted intermediates in Caf-ESR proceeds without accumulation of M (Figure 3D). Since the distance of electrogenic proton transfer during the M→N transition is significantly greater than during the formation of M from K/L, a small amount of the emerging intermediate N makes a somewhat greater contribution to electrogenesis, explaining the difference between the amplitudes of the kinetics of absorption changes at 410 nm (9 times) and of the corresponding kinetics of ΔΨ generation (~2.3 times) between the wild type and the Caf-ESR hybrid. Unlike in the wild type, in Caf-ESR the M→N/O transition at pH 7.5 is accompanied by a single positive electrogenic phase (τ ~1.6 ms), which reflects reprotonation of the Schiff base. The relative contribution of this electrogenic component (61%) is close to the sum of the contributions of the two electrogenic phases in the wild type that are connected with the M→N/O transition (74.5%). The absence of negative electrogenic components in the kinetics of ΔΨ generation in Caf-ESR at pH 7.5 distinguishes this hybrid protein from the two other fusions, ESR-Cherry and ESR-Trx, and resembles the kinetics of the wild-type ESR at this pH. The negative components are also absent at pH 6.5, in contrast to the other variants including the wild-type ESR, indicating the absence of back reactions and misorientation in this protein. In contrast to the other hybrid proteins, the last electrogenic phase, which corresponds to the N2/O→ESR transition (τ ~17.6 ms), has a positive direction in Caf-ESR and contributes about 36% to the overall response. This value corresponds to proton release from the acceptor site to the bulk phase, similar to the wild-type ESR [35].
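The argument above can be made concrete with a back-of-the-envelope calculation: the voltage produced in a kinetic component scales roughly with the amount of protein making a given transition times the fraction of the membrane dielectric crossed by the proton in that step, so a small admixture of the "long-distance" M→N step can keep the electrogenic amplitude from dropping as steeply as the 410 nm signal. The dielectric fractions and the assumed admixture below are illustrative guesses, not measured ESR values.

```python
# Back-of-the-envelope sketch: voltage per component ~ (amount of transitions) x
# (fraction of the membrane dielectric crossed in that step). Numbers are assumed.
d_SB_to_Asp85 = 0.15   # Schiff base -> acceptor (M formation), short dielectric distance
d_donor_to_SB = 0.45   # cytoplasmic donor -> Schiff base (M -> N), longer distance

# Wild type: the ~100 us optical component is essentially pure M formation.
wt_amount_M = 1.0
wt_voltage = wt_amount_M * d_SB_to_Asp85

# Hybrid: ~9x less M is seen optically, but suppose a small amount of N already
# forms within the same kinetic component, adding its larger electrogenic distance.
hyb_amount_M = wt_amount_M / 9
hyb_amount_N = 0.1
hyb_voltage = hyb_amount_M * d_SB_to_Asp85 + hyb_amount_N * d_donor_to_SB

print("optical ratio (wt / hybrid):", round(wt_amount_M / hyb_amount_M, 1))
print("voltage ratio (wt / hybrid):", round(wt_voltage / hyb_voltage, 1))
```

With these assumed values the optical ratio is 9 while the voltage ratio comes out near 2-3, reproducing the qualitative point made in the text.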
At pH 6.5, two millisecond electrogenic components (0.29 ms and 4.2 ms), with relative contributions to the full amplitude of 18.7 and 47.3%, respectively, related to the M→N1 and M→N1→N2/O transitions, are resolved in the kinetics of the photoelectric response of Caf-ESR (as in the wild-type protein). The slowest electrogenic component (with τ ~17 ms and a 32.9% contribution) refers to the N2/O→ESR transition. At both pH values, no accelerated passive discharge of the liposomes was observed, indicating that Caf-ESR, unlike the other two hybrids, does not increase the permeability of the liposome membrane for ions.

Discussion
Fusion proteins are widely used to provide membrane protein expression in E. coli cells for structural and functional studies [45][46][47][48]. Among the most popular fusion partners are glutathione-S-transferase [49], ketosteroid isomerase [50], maltose-binding protein (MBP) [51], and thioredoxin [40,52]. Frequently, the construction of membrane protein fusions is aimed at obtaining higher expression levels. In that case, fusion with highly hydrophobic proteins can promote the formation of inclusion bodies and accumulation of insoluble protein, requiring a subsequent refolding step [50]. When membrane localization and the functional state of the targets are the decisive factors, the use of highly soluble proteins (e.g., MBP, thioredoxin) is desirable. For example, successful recombinant expression of BR in E. coli was achieved by fusion with MBP [53] and Mistic [54]. Fusions of the target polypeptides with GFP and other fluorescent proteins can also be used for seamless monitoring of their synthesis level during expression optimization studies [55]. The ability of the partners to impose a defined orientation of a target protein in an artificial membrane, which enables more precise control of its functional activity, represents a promising new field of application for fusion technology [30]. In the current work, we employed three highly soluble proteins as fusion partners to provide a directed orientation of ESR in the proteoliposomes. mCherry is a popular monomeric derivative of DsRed from Discosoma sp. [39]. Fusions with mCherry are frequently used to monitor target proteins' biogenesis, localization, folding, and other properties [56]. Thioredoxin from E. coli facilitates recombinant protein production in soluble form and with increased yield [40]. Both proteins have cytoplasmic localization in bacterial cells and, therefore, were attached to the C-terminal end of ESR. It should be mentioned that mCherry can also be secreted through the Sec pathway and retains its fluorescent properties in the periplasm of E. coli cells [30,56]. Unfortunately, our attempts to obtain an N-terminal Cherry-ESR fusion were unsuccessful; expression of this recombinant protein was not detected in E. coli cells. This is in contrast to previously published data, where a Cherry-PR hybrid protein was obtained, albeit with limited yield [30]. Consequently, we decided to use the secreted Caf1M protein with the aim of increasing the expression yield of the N-terminal fusion with ESR. All three hybrids were efficiently expressed in E. coli cells, though with different yields. It should be mentioned that the synthesis level of the mature Caf-ESR is limited by the capacity of the cellular machinery that provides protein export to the periplasmic space (the Sec translocon). Not surprisingly, the yield of the Caf-ESR hybrid was the lowest among the three fusion constructions.
Since the photocycle time constants of the hybrids in detergent micelles were close to those of the wild-type ESR, we studied the light-induced absorbance changes and transmembrane potential generation in proteoliposomes reconstituted with these proteins in comparison with ESR-containing liposomes. Interestingly, a small amount of the M intermediate was observed at pH 7.5 only in the photocycle of ESR-Trx incorporated into proteoliposomes, while the recombinant ESR at this pH produced large absorbance changes at the characteristic wavelength of 410 nm (Figure 3, [32]). To provide an explanation for this phenomenon, we examined the pH dependence of the photocycle of ESR-Cherry in a liposome environment and found that the M intermediate in this protein was detected only at pH > 8.5. A similar pH dependence was observed earlier for the wild-type ESR [32] and in this work for the fusions solubilized in DDM micelles. It was shown previously that the pKa values of the photocycle transitions in ESR are very sensitive to the environment. For example, in DDM-solubilized samples, the corresponding values are shifted by 2-2.5 units to the higher pH range [32], while in micelles of the lipid-like detergent LPG they are more similar to those in liposomes [34]. In other words, the hybrid proteins reconstituted in liposomes demonstrated photocycle properties (namely, the pH dependence of M formation) characteristic of the DDM-solubilized samples, which apparently results from the influence of the fusion partner on the pKa of the counterion in M and, specifically, on the pKa of His57, which closely interacts with the proton acceptor Asp85. It is widely accepted that charged headgroups of the lipid bilayer can affect the proton concentration and the surface potential near the interface [57,58]. For example, the addition of anionic lipids (cardiolipin, phosphatidic acid, and others) to POPC-based large unilamellar vesicles resulted in a systematic shift of the pKa of pHLIP peptide insertion, which depends on the electrostatic surface potential [58]. The local properties of such regions can differ from the measured pH in the bulk solution, and these differences presumably account for the observed discrepancies in the pKa values between the micelle and proteoliposome environments. In the proteoliposomes reconstituted with hybrid proteins, the presence of the soluble domains can eliminate or neutralize the effect of the lipids by providing a highly hydrophilic space at the membrane exterior. This creates a situation that is close to the one observed in DDM-solubilized samples, resulting in similar pKa values of M formation. In other words, the properties of ESR with a soluble domain incorporated into the proteoliposome resemble those of the same protein in a detergent micelle, owing to the presence of a hydrophobic shell (the lipid moiety) and a hydrophilic exterior (the soluble domain). Indirect evidence for the proposed explanation is provided by the fact that, among the studied fusion proteins, ESR-Trx demonstrated the largest amount of the M intermediate in proteoliposomes at pH 7.5. This could be attributed to the smaller size of the thioredoxin partner (~12 kDa, in comparison with ~27 kDa for mCherry and Caf1M) and, correspondingly, to a smaller neutralization of the influence of the lipid headgroups (which include negatively charged phosphatidylinositol, zwitterionic phosphatidylethanolamine and phosphatidylcholine, and other components of the azolectin used for proteoliposome preparation).
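One standard way to quantify the idea that charged lipid headgroups change the proton concentration seen by the protein is the Boltzmann relation between the surface potential and the interfacial pH; an apparent pKa measured against the bulk pH then shifts by roughly one unit per -59 mV. The snippet below evaluates this relation for a few illustrative surface potentials; the potentials themselves are not measured values for the azolectin liposomes used here.

```python
# Minimal sketch of the Boltzmann relation between membrane surface potential and
# the local (interfacial) pH. Illustrative numbers, not measured liposome values.
import math

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)
T = 298.0     # temperature, K

def local_pH(bulk_pH, surface_potential_mV):
    """pH right at a charged surface: pH_s = pH_bulk + F*psi / (ln(10)*R*T)."""
    psi = surface_potential_mV / 1000.0
    return bulk_pH + F * psi / (math.log(10) * R * T)

for psi in (0, -30, -60):   # negative potentials typical of anionic headgroups
    print(f"surface potential {psi:4d} mV -> local pH {local_pH(7.5, psi):.2f} at bulk pH 7.5")
```

A -60 mV surface potential lowers the interfacial pH by about one unit, which is the order of magnitude needed to rationalize environment-dependent shifts of the apparent pKa of M accumulation.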
The contribution of the soluble domain charge to the charge on the liposome surface should also be considered as a possible reason for the altered pKa values of M formation in the hybrid proteins. Placing a negatively charged domain at the proteoliposome exterior should increase its negative potential, and vice versa [27]. However, taking into account that the three fusion partners possess different pI values (Trx and mCherry are negatively charged at neutral pH, and Caf1M is slightly positive, Table 1), this explanation is less likely and requires a more detailed study of the charge distribution in these proteins. The decrease of the total amplitude of the rapid electrogenic phases coupled to M formation in all hybrid proteins also represents an interesting phenomenon that could be associated with the slow accumulation of the M state in the hybrids. In the photocycle of the hybrid proteins, M formation presumably occurs on a submillisecond time scale, simultaneously with the M decay; therefore, the M state does not accumulate. Correspondingly, the electrogenicity coupled with this process is partially shifted towards the millisecond scale of the kinetics of ΔΨ generation. As a result, the relative amplitude of the electrogenic microsecond components is smaller than in the wild type. We can presume that the high ratio of the amplitudes of the micro- and millisecond electrogenic components observed in the ESR-Cherry and ESR-Trx hybrids is even higher due to this effect. It should be mentioned that the microsecond phases make a relatively low contribution to electrogenesis in the wild-type ESR (about 6%, [35]). Therefore, even large differences in the kinetics of M-state formation are only moderately reflected in the kinetics of ΔΨ generation. For this reason, the electrogenic responses of the wild type and Caf-ESR look similar, especially on the linear time scale, while their photocycles are dramatically different. In spite of the overall similarity of the photocycle characteristics, the hybrid proteins differed in how effectively they promoted a directed orientation of ESR in the lipid bilayer and in the efficiency and vectorality of proton transport. This conclusion is supported by the following observations. (1) The amplitudes of the photoelectric response from the proteoliposomes containing the Caf-ESR protein were almost two times larger than those from the two other hybrids. Undoubtedly, these estimations are semi-quantitative, because the amplitude of ΔΨ depends on the efficiency of proteoliposome association with the collodion membrane in each experiment. (2) The increased ratio of the kinetic components in the microsecond time range to those in the millisecond time range of the photoelectric response of ESR-Cherry and ESR-Trx incorporated in proteoliposomes corresponds to a decreased efficiency of transmembrane proton transport, in comparison with the wild-type ESR, owing to reverse reactions. Unlike in the two other hybrids, in Caf-ESR this ratio is similar to that in the wild type or even smaller. (3) The kinetics of light-induced changes of the transmembrane potential difference of ESR-Cherry and ESR-Trx at pH 7.5 exhibited large negative phases, presumably associated with the opposite orientation of part of the protein in the proteoliposome bilayer. These negative phases were completely absent in Caf-ESR.
Moreover, in the wild-type ESR, a small negative phase with τ ~1.3 ms was previously observed at pH 6.6, which was attributed to the presence of a small fraction of oppositely oriented protein in the membrane [33]. In Caf-ESR, it was absent even at pH 6.5. As a result, the kinetic constants of the electrogenic phases differ from those of the photocycle transitions in ESR-Cherry and ESR-Trx (see Tables S2 and S3), while in Caf-ESR they are more similar (Table S4). The observed differences could be explained by assuming the presence of ESR-Cherry and ESR-Trx molecules with opposite orientations in the proteoliposomes and the superposition, with opposite signs, of the corresponding photovoltage responses in the electrometric measurements. In the kinetics of light-induced absorption changes, signals from differently oriented proteins in proteoliposomes have the same direction, and their addition leads to significantly smaller differences in comparison with the kinetics of ΔΨ generation. This confirms the utility of the direct electrometry approach for the assessment of the orientation and functional state of proton pumps in a lipid environment. The obtained results indicate that fusion with Caf1M provides a highly unidirectional orientation of ESR in proteoliposomes, with a fraction of correctly oriented molecules close to 100%. Ritzmann et al. obtained almost 100% opposite orientation of the PR molecules by placing fluorescent proteins at its N- or C-terminus [30]. In our work, the ESR fusions had mainly the same orientation as the wild-type protein, independently of the fusion position. We presume that the obtained results could be explained by the different procedures for liposome preparation used in these studies. In [30], the preformed liposomes were mixed with the purified protein, which promoted preferential insertion of the molecules with the soluble domain outside the bilayer. The protocol described by Rigaud [21] does not include the initial preparation of the liposomes; instead, they are formed upon incubation with the protein solution. During this procedure, the soluble domains can be located either inside the lumen or in the external bulk; consequently, the direction of insertion is determined mainly by the membrane protein. Intrinsic properties of the protein molecule (hydrophobicity, configuration, and charge) are responsible for its preferential orientation in proteoliposomes [21]. It was demonstrated earlier that the ESR molecule itself has a mostly unidirectional orientation in the lipid bilayer [35]. We can conclude that its geometry and charge distribution promote asymmetrical insertion in the N out -C in direction. Presumably, C-terminal fusions with ESR also tend to preserve this orientation in the bilayer, with their soluble domains (Trx or mCherry) inside the lumen. However, the limited internal volume of the proteoliposomes does not allow accommodation of all the fusion molecules in a single direction and, as a result, a fraction of the fusions acquires the opposite (N in -C out ) orientation (Figure 6). The kinetics of the potential difference generation differs between the two oppositely directed protein populations, resulting in the appearance of the negative phases that were detected in the ESR-Trx and ESR-Cherry photoelectric responses.
Notably, the relative contribution of the major negative phase in ESR-Trx (24%) was about two times lower than in ESR-Cherry (51%), possibly reflecting a decreased amount of oppositely oriented ESR-Trx molecules due to the smaller size of the thioredoxin fusion partner in comparison with mCherry. In contrast, the N-terminal position of Caf1M in the fusion does not create any steric constraints and further promotes the insertion of ESR in the N out -C in direction (Figure 6). This is reflected in the absence of negative phases in the kinetics of ΔΨ generation in Caf-ESR at neutral and mildly acidic pH values. We can speculate that fusion with Caf1M strengthens the natural ability of ESR to insert with its N-terminus facing the external bulk by completely preventing insertion in the opposite direction.

Recombinant Gene Construction and Expression
To construct ESR-Cherry, the ESR gene was amplified from the pET-ESR plasmid [31] with primers T7prom and ESR_Bam ACATGGATCCGGACGTCAGCGTTTTTCCTT, digested with NdeI and BamHI, and cloned into the pET32 plasmid together with the mCherry coding sequence. The gene coding for mCherry was amplified using primers Bam-Cherry ATAAGGATCCGGTGGAGGTGGCTCTGTGAGCAAGGGCGAGGAG and Cherry-Xho ACATCTCGAGCTTGTACAGCTCGTCCATGC with pmCherry-C1 (Clontech, Mountain View, CA, USA) as a template, and digested with BamHI and XhoI. The gene coding for E. coli thioredoxin (Trx) was amplified from pET32a with primers TrxBam TCATAGGATCCGGTGGAGGTGGCTCTAGCGATAAAATTATTCACCTGAC and TrxXho TCATACTCGAGGGCCAGGTTAGCGTCGAGG, and cloned into pESR-Cherry digested with BamHI and XhoI. The coding sequence of the Caf1M chaperone with its own signal sequence was obtained by PCR with primers Nde-Caf ACTAACATATGATTTTAAATAGATTAAGTACG and Caf-Nco GTTTGTATTCCAAAAATGTGACTTTAGGAGGTtccatggATAA from pCaf1M plasmid DNA [41] and, after digestion with NdeI and NcoI, cloned into the pET32 plasmid together with the ESR gene amplified with primers Nco-ESR ATAACCATGGGAGGTTCTGAAGAAGTCAATTTACTCGTTC and T7term and digested with NcoI and XhoI. pCherry-ESR was constructed by cloning the mCherry gene, amplified with primers Pag_Cherry TACTATCATGAGTTCTGAAGATGTTATC and Cherry_Bam ACATGGATCCCTTGTACAGCTCGTCCATGC, together with the ESR gene amplified with Bam_ESR and ESR_Xho, into the pET20b vector digested with NcoI and XhoI. All constructs were verified by sequencing (Evrogen, Moscow, Russia). E. coli C41(DE3) cells were transformed with the resulting plasmids and grown in LB with ampicillin at 37 °C until the OD at 560 nm reached 0.8. Expression of the recombinant genes was induced by the addition of 0.2 mM IPTG. Incubation continued at 25 °C for 16 h in the presence of 5 µM all-trans retinal.

Hybrid Protein Purification
Harvested cells were resuspended in 50 mM Tris-HCl, pH 8.0, 5 mM EDTA, 20% sucrose with lysozyme (0.2 mg/mL) and disrupted by sonication. After centrifugation for 30 min at 6000× g, the obtained supernatant was ultracentrifuged for 1 h at 100,000× g.
The resulting precipitate (total membrane fraction) was resuspended in 50 mM Tris-HCl, pH 8.0, and solubilized overnight by the addition of 1% DDM. The supernatant obtained after centrifugation at 20,000× g was applied onto a Ni-Sepharose (GE Healthcare, Chicago, IL, USA) column, washed with buffer containing 30 mM Na-P, pH 7.4, 200 mM NaCl, 0.05% DDM, and 20 mM imidazole, and eluted in the same buffer containing 300 mM imidazole. The purified protein was concentrated and washed free of imidazole using Ultracel YM-30 centrifugal filter devices (Merck Millipore, Burlington, MA, USA).

Protein Electrophoresis and Western Blot
Membrane proteins were separated by electrophoresis in 13% SDS-PAGE gels and transferred onto a nitrocellulose membrane (Bio-Rad, Hercules, CA, USA). Bands were visualized using monoclonal antibodies to the hexahistidine tag conjugated with HRP (anti-His6, Invitrogen, Waltham, MA, USA).

Spectroscopic Characterization
Kinetics of flash-induced absorbance changes of the hybrid proteins were measured at characteristic wavelengths as described earlier for ESR [35]. Before measurements, the samples were diluted to achieve A530 = 0.1. Flashes (532 nm, 8 ns, 10 mJ) were from an LS-2131M Nd-YAG Q-switched laser (LOTIS TII, Minsk, Belarus). Transient absorption changes were detected with a photomultiplier and digitized with an Octopus CompuScope 8327 (GaGe, Toronto, ON, Canada). Kinetic traces were fit with a sum of exponentials using Mathematica (Wolfram Research, Champaign, IL, USA). All experiments were repeated at least three times, with the mean results presented.

Electrometric Time-Resolved Measurements of the Membrane Potential ΔΨ Generation
Reconstitution in proteoliposomes and photoelectrical measurements were performed as described in [35][36][37]. Liposomes were prepared from azolectin (20 mg/mL, Sigma, type IV-S, 40% w/w phosphatidylcholine content) by sonication at 22 kHz, 60 µA for 2 min in 1 mL of 25 mM HEPES-NaOH buffer, pH 7.5. Reconstitution of ESR into proteoliposomes was performed by mixing the liposomes with ESR in 1.5% (w/v) OG at a lipid/protein ratio of 100:1 (w/w) for 30 min in the dark. The detergent was removed according to [21]. Bio-Beads SM-2 (Bio-Rad) was added to the mixture in a 20-fold excess (by weight) and the suspension was stirred for 3 h at room temperature. The proteoliposomes were separated from the adsorbent by decanting and pelleted by centrifugation at 140,000× g at 4 °C for 1 h in a Beckman L-90K ultracentrifuge. The pellet was resuspended in 25 mM HEPES-NaOH buffer (pH 7.5). A home-made electrometric setup with 100 ns time resolution was used in combination with a pulsed Nd-YAG laser (532 nm, 12 ns, 40 mJ). Flash-induced kinetics of ΔΨ generation were measured with Ag+/AgCl electrodes on different sides of the membrane. For measurements at different pH values, equivalent (~25 mM) buffer solutions were used: MES, HEPES, Tris, or CHES. All experiments were repeated at least three times.

Conclusions
In this paper, we demonstrate the utility of an N-terminal fusion with the Caf1M protein for highly efficient unidirectional insertion of ESR into proteoliposomes. The developed approach could be useful for obtaining oriented incorporation of other membrane proteins into artificial membranes for their functional studies and biotechnological applications.
2023-04-19T15:06:49.773Z
2023-04-01T00:00:00.000
{ "year": 2023, "sha1": "1bdb609e9a63d19f4f3375b2e92f8e280c8d88e8", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/24/8/7369/pdf?version=1681714045", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "422fae5d594f465b940f2207a5006d58dcbd449f", "s2fieldsofstudy": [ "Biology", "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
70747834
pes2o/s2orc
v3-fos-license
Considerations on the current situation in Brazilian Dermatology * Considerações sobre o momento atual da Dermatologia Brasileira *

INTRODUCTION
As a counterpoint to Dr. Bruce Thiers' article "Issues facing dermatology in the United States", published in this number, the editors of the Anais Brasileiros de Dermatologia consider it a timely opportunity to offer the dermatological community a brief and succinct view concerning certain topics related to our specialty. It should be stressed that many of our problems are similar to those faced by American dermatologists. The main purpose of these considerations is to issue an invitation to reflection on the directions our profession and specialty is following and on our ability to make the necessary corrections.

MEDICAL SCHOOLS AND THE CONTINGENT OF PHYSICIANS IN BRAZIL
The problem is complex both in essence and in possible solutions; therefore it is necessary, if we are to solve it, to carry out a historical analysis of the founding of medical schools in Brazil (Table 1). The first period assessed in Table 1 is characterized by the prevalence of public medical schools, when the State was strongly present in the teaching of medicine. From the 1950s onwards, with the setting-up of the first four private courses, the number of private medical schools has progressed steadily and overwhelmingly, until the present day where the situation is a possibly ephemeral balance with public schools. Medical courses are costly to maintain. Private capital, whose ultimate goal is the bottom line, seeks to provide the greatest number of places, which does not always go hand in hand with quality of teaching. Possibly the most perverse aspect of private courses, owing to their expensive tuition fees, is that entrance examinations have ultimately become an economic rather than knowledge-based selection process. The last three years of the Fernando Henrique Cardoso administration (2000-2002) show the highest proportional increase in medical schools in Brazil, with the setting-up of 28 new courses. Furthermore, during the first Cardoso administration, the National Health Council (Conselho Nacional de Saúde) was no longer allowed to have the last word about the social need to open new medical schools, a prerogative it had enjoyed up until 1996. Politicians in general, and governing authorities in particular, are usually shortsighted with regard to education. They see teaching as a costly activity bringing little electoral reward and serving, rather, as the fuel of permanent oppositionist manifestations. To dwell on only one example of official neglect, a unique but telling case, one needs only consider that the salary of a full professor, devoting him or herself full time to a federal university department, is equivalent to that of an elevator operator at the Senate.
More than 10,000 physicians graduate in Brazil per year, a figure that may be an underestimate given that some recently created courses have not yet graduated their students. These figures mean that Brazil shows an average general ratio of 1 physician per 622 inhabitants, above the World Health Organization recommendation of 1 physician per 1,000 inhabitants. Rio de Janeiro and the Federal District are the Brazilian states with the greatest density of physicians, with ratios of 1/302 and 1/309 inhabitants, respectively. The number of physicians is growing at twice the rate of population growth, and a large proportion are professionals coming out of schools that lack minimal conditions to operate. The government spurs the growth of new medical courses under the not always true, often fallacious, argument that it enhances social inclusion. This is a clear and unwarranted option for quantity over quality. Instead of investing in basic and technical education, acknowledged to be the weak point of the educational system, the government chooses to open courses without complying with fundamental technical requirements, and often aiming to reap political dividends. On the day the present article was written (18 December 2006), the total number of medical schools operating in Brazil was 162 (93 private schools and 69 public schools), offering 14,730 places in the first year. The most recently authorized new opening was on 14 November 2006, the Pará State University Center (Centro Universitário do Estado do Pará). This plethora of physicians, many of whom lack qualification, leads to inevitable consequences in the job market, with low wages, unfair competition and unfavorable working conditions. The consequences are felt in all segments, but above all hit the individual who ought to be the greatest beneficiary of medicine practiced with knowledge, skill and dedication: the patient. One particular public authority, notorious for bringing together countless features that are undesirable in a politician, met on one occasion with a group of physicians demanding a salary rise. In order to justify his refusal to grant such a rise, he came out with this pearl of wisdom: "I will not grant you a rise because physicians are like salt: white, cheap and found on every corner". When such words, uttered by someone of that ilk, come dangerously close to being true, then something very wrong must be happening. One noteworthy aspect is that despite these facts and this evidence, medical courses still attract the greatest number of candidates and demand the highest scores for entrance. The most simplistic explanation might be that our profession, despite coping with underemployment and low wages, does not yet suffer from unemployment.

DERMATOLOGY AND THE JOB MARKET
Simultaneously, the job market has undergone profound changes. The most dramatic of these may have been the drastic fall, virtually to extinction, of the private clinic and the rise of health plans and organizations. Physicians are no longer the archetype of the professional individual, but are rather service providers whose fees are stipulated by the plan-owning organizations. It is a sad fate for the field of health (medicine, dentistry, psychology, physiotherapy, etc.) when it is the only one in which professional activity is mediated and governed by companies, often banks and insurance companies, that take upon themselves the right to fix the value of the work undertaken.
In this nebulous scenario, characterized by an imbalance between work and reward, dermatology has added cosmiatrics to its traditional fields of activity. This is founded on a tripod of incontrovertible evidence: the unbridled pursuit of beauty, raised by some to the major objective in life; the growing demand of an avid public willing to pay high rates for esthetic procedures, as against the always-undervalued clinical reasoning; and the possibility of extra earnings, extracted even from health plan patients, since health plans do not cover cosmetic procedures. Over the last decade dermatology has become the most fashionable specialty, and as such attracts a growing number of physicians. Some are genuinely interested, while others are only seduced by the glamour and the financial possibilities. And in their wake comes a crew of cunning creators of weekend post-graduation courses, whose only aim is to get rich through the use of misleading advertising and the training of pseudo-specialists. The need for permanent media exposure has given rise to personalities hitherto unknown to medicine: the press relations officer and the marketing professional. Despite the undeniable worth of ethical marketing, there is nothing better for promoting a physician than a patient who has been treated well. The SBD, the Brazilian Dermatology Society, currently has 62 accredited services, with an installed capacity to offer specialist instruction to 235 physicians per year. Offering new places ought not to exclusively obey the criteria of economic demand, since demand will grow owing to the above-mentioned reasons. One must bear in mind, among other parameters, the needs of the market as based on the population and the distribution of specialists throughout the several regions of Brazil.

OUR IMAGE WITH THE LAY PUBLIC AND PHYSICIANS
Our dermatology colleagues are often invited by the media to give their opinions on topics concerning our specialty. These issues are sometimes clinical or public health issues, but the overwhelming majority has to do with cosmetic procedures. Regardless of the nature of the topic raised, interviews in any type of media are an inestimable opportunity for the specialist to adopt the stance of a serious professional with a thorough training and sound technical knowledge. If one behaves in this way, one will gain value and above all dignify dermatology as a medical specialty. Unfortunately, however, many people use such opportunities for self-promotion. What must go through the mind of a lay person who sees a dermatologist posing by the side of his or her new, imported car, or showing off his or her closet full of designer clothes and shoes? Undoubtedly the image of success, since these values are wrongly thought to be the synonym of professional competence. But the worst risk is that we should be reduced in the eyes of the public at large to frivolous physicians, professionals only concerned with the details of beauty treatment. Dermatology cannot, most definitely, allow itself to be stigmatized as a specialty restricted to esthetic procedures. It must retain its clinical and pathological identity, its main reason for existing, as is the case for any other medical specialty.
One of the main factors leading to the sullying of the specialty is that the medical act has been made banal, with patients exposed on television programs to demonstrate new treatment techniques. The idea is conveyed falsely that these are simple procedures that can be carried out in any setting and under any conditions. It cannot therefore be any surprise that some of these therapeutic modalities are being made available even in beauty parlors. If we ourselves do not value our specialty, no one else can be expected to do it for us. There is a classic overlap in medicine between certain specialties, such as orthopedics and rheumatology, nephrology and urology, otorhinolaryngology and head and neck surgery. Dermatology, since the skin is the largest organ in the body, will naturally present many more points of contact with other specialties than any other medical area. And as a result of our expansion as a specialty, we are beginning to work in segments formerly restricted to plastic surgery or oncological surgery, to name but two examples. Dermatology, which had itself in the not too remote past complained of other specialists encroaching upon it, merely prescribing "creams and ointments", has now been stretching out its own tentacles and extending its own domain. The reply from other specialties has not been long in coming. Shining like gold in a setting of destruction, dermatology found itself invaded by physicians from other medical specialties. Many of the students graduating from the much-vaunted weekend courses are gynecologists, pediatricians, endocrinologists and anesthetists, among others. What drives them to this? First of all, undoubtedly, the sight of easy earnings and a plentiful clientele. However, there is another reason many are reluctant to admit, but which strikes any attentive observer immediately. If the application of botulinum toxin, a procedure of unquestionable value, backed up by scientific evidence, can be carried out by such differing specialists, such as dermatologists, plastic surgeons, ophthalmologists, otorhinolaryngologists, gynecologists or endocrinologists, one may conclude that its application does not depend on thorough, exclusive, formal training in any one of these areas. This is the major dilemma of many esthetic procedures: the risk of their becoming no-man's-lands, accessible to any outsiders with knowledge gleaned from practical, one-day courses. Those colleagues who proceed in this way definitely do not change specialty. They simply come into contact with some techniques of esthetic treatment. Dermatology with a capital D is infinitely greater than this.

THE ROLE OF PROFESSIONAL ASSOCIATIONS AND THE SBD
Some of the problems facing us result from decisions taken on the basis of political, rather than technical, aspects, such as the opening of new medical schools, the overall economic situation, low salaries, and the existence of health plans, or even of ineffectual legislation. Combating these inconsistencies is an arduous, sometimes inglorious, struggle, which depends basically on our ability to organize and on our strength as a class. Young physicians tend to see associative activities as boring and bureaucratic, as well as burdensome, and forget that only effective engagement can bring our profession advantage. They must therefore be encouraged from the outset of their careers to join professional associations and scientific societies, which are the proper stage for the discussion of the problems that concern us.
The Brazilian Medical Association (Associação Médica Brasileira - AMB) and the Federal Medical Council (Conselho Federal de Medicina - CFM) have made considerable efforts trying to persuade public authorities to improve the situation of physicians. All this effort is often lost, however, in bureaucracy, in the slowness of the legal system, or even in the lack of commitment on the part of the authorities. There is nothing more difficult than trying to close down a functioning medical course, even if it is proven to be precarious and lacks the slightest justification for its existence. Many schools take advantage of loopholes in the law or of inconsistencies between the Federal Constitution and State Constitutions to set up courses that are not approved by the Ministry of Education (MEC). The Federal Medical Council therefore brought in Resolution 1808, enacted on 10 November 2006, laying down that Regional Medical Councils can only register the graduation diplomas granted by higher education institutions recognized by the MEC. Since the government will not play its part, the necessary measures have to be taken somehow. MEC accreditation of lato sensu post-graduate level courses provides more fertile ground for ill-intended actions. Some regularly accredited institutions "transfer" this right to entities set up with the sole purpose of managing such courses and reaping exceptional profits. Since official control is precarious and there is no evaluation, the birth rate of such courses is growing exponentially. What can one expect of a physician who, after taking such courses, is unable to obtain a qualification as a specialist in dermatology? That he or she enter the job market, even without the necessary qualification, or prepare to take another examination, or switch specialty? One does not need a crystal ball to guess the answer. The Brazilian Medical Association (AMB) and the Federal Medical Council (CFM) recognize 59 specialties, including dermatology and its three main areas of activity: dermatological surgery, hansenology (leprosy studies) and cosmiatrics (cosmetic dermatology). The much-vaunted specialty known as "esthetic medicine" does not, therefore, exist. Despite its legal non-existence, however, esthetic medicine is actually practiced by many, both physicians and non-physicians. And in certain cases we hardly know how. One possibly promising way of overcoming these problems is the setting-up of a Physician Association Order, along the lines of the Brazilian Bar Association (Ordem dos Advogados do Brasil - OAB). Beset by a plethora of law schools and poorly qualified professionals, and in the absence of any official action to curb this scandalous situation, the OAB created the examination of the association. Upon graduation, a student receives the qualification of Bachelor in Law, which does not entitle him or her to exercise the profession. In order to do so, he or she will have to pass the Bar exam, and thus earn their qualification as a lawyer. In a recent exam for entering the OAB, the fail rate was over 80%. It would be an exercise of the imagination to predict the fail rate in a medical association examination, but it is reasonable to posit a disappointing figure. Here again the government has failed to play its part, overseeing the quality of education, and therefore it fell to the OAB to play its own part.
Among its multiple roles, the Brazilian Dermatology Society (SBD) has been active in the scientific area, in providing public services, and in policy. It is the only scientific society accredited by the Brazilian Medical Association (AMB) as representative of the specialty and its areas of activity, and authorized to apply examinations to qualify specialists. It is of the utmost importance that this role be brought repeatedly to the attention of the public at large, which is bombarded by misleading advertising from other, unrepresentative entities that are intensely dedicated to publicizing their events, mini-courses and therapeutic miracles. Finally, it is worth mentioning, albeit briefly, the qualification to become a specialist of the SBD. Initially aiming simply to certify those who pass, the examination should adapt to the present circumstances and incorporate a new feature: distinguishing those actually trained to exercise the specialty from opportunists who have received precarious training and who are well skilled in going to court to contest the questions. While the test remains merely theoretical, we run the risk of passing candidates who simply possess enough bytes of memory to learn texts by rote. We must demand medical conduct, and assess through practical exams the candidate's performance in clinical cases, their differential diagnoses, the histopathological picture and therapeutic options. Only in this way will we be able to make this distinction. TABLE 1: Timeline of the setting-up of medical schools in Brazil
2017-10-20T07:21:53.895Z
2006-12-01T00:00:00.000
{ "year": 2006, "sha1": "0c6fc6ed63f0ea821a190404aed94da5e9e4a4fe", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/abd/a/tYyTcgwp4dS7wFkW8s9f7VM/?format=pdf&lang=pt", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0c6fc6ed63f0ea821a190404aed94da5e9e4a4fe", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
252408383
pes2o/s2orc
v3-fos-license
Salmonella enterica frequency in backyard chickens in Vermont and biosecurity knowledge and practices of owners The popularity of backyard chickens has been growing steadily over the past 10 years, with Covid-19 stay at home orders in 2020 yielding an added boost in popularity. Concurrently, cases of salmonellosis from live poultry exposure have also risen. Previous research on backyard chicken owners has focused primarily on urban chicken owners, which may have differing knowledge and biosecurity habits from rural backyard chicken owners. The goal of this study was to investigate the prevalence of S. enterica in rural and urban flocks of chickens in the state of Vermont and to determine what attitudes toward and knowledge about S. enterica owners had, as well as what biosecurity practices they used. We conducted two surveys in Vermont between 2019–2022; a pilot study tied to sampling for Salmonella enterica in backyard chicken flocks from 2019–2021 and a statewide study in 2022 to determine the prevalence of backyard chickens in Vermont and obtain representative survey data from backyard chicken owners. We found (i) overall, 19% (8/42) backyard chicken flocks from 2019–2021 had S. enterica, but S. enterica rates varied substantially by year; (ii) backyard chicken owners were wealthier and more educated than the average Vermonter and generally lived in rural areas; (iii) participants in the statewide survey had much lower uptake of good biosecurity habits compared to the pilot survey; (iv) despite increased messaging about backyard chicken-associated salmonellosis and good biosecurity measures over the past several years, uptake of biosecurity measures is inconsistent, and rates of unsafe practices such as kissing or cuddling chickens have increased in Vermont.
Overall, the data indicate the need for improved messaging on biosecurity and risks associated with backyard chickens KEYWORDS backyard chickens, Salmonella enterica, biosecurity, animal husbandry, poultry (chicken), homesteading, food safety Introduction Researchers estimate that more than one million people contract non-typhoidal Salmonella enterica in the U.S. each year, leading to an estimated $3.7 billion annually in lost wages, productivity, healthcare costs, and mortality (1,2). Salmonellosis is most frequently acquired through eating contaminated meat (especially poultry), and eggs (3). However, salmonellosis can also be contracted via contact with live reptiles, including turtles, and backyard poultry (4,5). These zoonotic infections represent roughly 11-20% of all S. enterica infections each year in the United States, and cause 51.2% of salmonellosis cases in children under 10 years old (4,5). Additionally, across all ages, animalassociated salmonellosis has higher odds of hospitalization compared to food-associated salmonellosis (5). Finally, over the past 10 years, the rate of live poultry-associated salmonellosis rose to nearly 1,800 cases per year in 2020 (6), with the CDC reporting in 2020 that there had been 77 outbreaks of S. enterica since 2010 (7). Despite this rise in salmonellosis associated with live poultry, only a small number of studies have been performed to determine the prevalence of S. enterica in backyard chickens in the U.S and the biosecurity practices associated with the presence or absence of S. enterica in flocks. A study of flock characteristics and owner biosecurity habits for backyard poultry flocks in Maryland in 2011 found that just 65.8% of owners consistently washed their hands after interacting with their flock, and even fewer (31.7%) had dedicated footwear for the poultry pens (8). Additionally, nearly half (44%) of flocks were free-ranged, and owners reported their flocks had interactions with multiple other species, including wild birds (53.7% of flocks), pets (75.6%), livestock (31.7%), wild carnivores (46.3%), and rodents (36.6%) (8). Sixty-one percentage of owners had had birds for fewer than 5 years, and 17.1% of owners had had birds for less than a year, indicating a potential increase in the popularity of backyard chickens (8). The study did not find any S. enterica in the 39 flocks they evaluated (8). Researchers in Colorado in 2012 surveyed 807 backyard flock owners on flock characteristics, housing, health, and the owners' biosecurity practices (9). Most flocks contained under 50 birds, and most flocks (59.9%) were housed in an outdoor coop with fenced-in outdoor access (9). Owners typically washed their hands after handling birds (79%), but only 20% changed shoes after being around birds (9). Additionally, roughly 60% of owners quarantined new birds before introducing them to the flock (9). The researchers did not test for pathogens. A study of backyard chickens in the greater Boston urban area from 2016 to 2017 tested 53 flocks and found that just one had S. enterica (10). Further, the S. enterica found was Salmonella Kentucky, which is rarely implicated in human illnesses, and therefore posed little risk to the owners (11). The researchers surveyed 30 of the owners on biosecurity habits and attitudes and found that 95.6% of families in the study who had children considered their birds to be pets, and 68.9% (20/30) of children reportedly petted the birds or picked them up (10). 
This suggested that owners and their children would have a high risk of contracting salmonellosis if their birds were infected with S. enterica. A survey and observational (video) study of backyard poultry owners in Seattle in 2014 also observed poor biosecurity habits, including kissing and snuggling birds (12). Ultimately, 25% of participants indicated that they snuggled, kissed, touched their mouth, or ate/drank around their birds, while video recordings showed >50% of participants touched their face, and an additional 22% were recorded snuggling birds (12). However, that study did not assess S. enterica prevalence in the chickens. A later study in Washington State in 2016 found that 1/34 flocks in counties around Seattle had S. enterica (13). The serovar found in the infected flock was I 4, [5],12; i-, a serovar frequently implicated in human illnesses (13)(14)(15). They also found that 83% of the Escherichia coli isolated from the flocks was resistant to ≥3 classes of antibiotics (13). Finally, the study found that 62% of owners (21/34) contained elderly or young family members who had direct contact with birds, and that 6% of their owners considered their poultry pets (13). Despite these studies, numerous knowledge gaps remain. Kauber and McDonaugh's studies looked only at urban contexts (10, 12), while Shah's study included some rural farms, but did not survey owners on biosecurity (13). Meanwhile, Madsen's study was the only one to note the number of backyard chicken flocks in the state it was performed in, as Maryland at the time had a required backyard flock registry (8,16). Perhaps most importantly, none of the studies was conducted over more than 12 months. The goal of this study was to investigate the prevalence of S. enterica in rural and urban flocks of chickens in the state of Vermont and to determine what attitudes toward and knowledge about S. enterica owners had, as well as what biosecurity practices they used. Methods Vermont is a small state in the northeastern United States, with a high rural population [61.3% (17)] and a high prevalence of home food production (18). It experiences relatively cold winters and temperate, short summers, and is in zones 3b-5b (19,20). In Vermont from 2016-2020, 20.8% of all reported salmonellosis cases were connected with live poultry, out of an average of 102 salmonellosis yearly cases (21). Importantly, children under 10 make up 30.2% of all live poultry associated salmonellosis patients in Vermont, despite making up only 20.5% of the population (4,21,22). To determine the rates of S. enterica in backyard chickens in Vermont and the biosecurity knowledge and practices of their owners, we conducted two surveys. Pilot survey In this survey, owners of backyard chickens in Vermont who were willing to have their flocks tested for S. enterica were asked to complete a survey about their knowledge of S. enterica and food safety as well as their husbandry practices and some demographic questions (Supplementary Table S1). This survey was a sample of convenience and was completed by 43 backyard chicken owners from 2019 to 2021. Surveyed flocks were primarily located in Chittenden County, Vermont (Northwestern Vermont), and were recruited through advertisements in feed stores, on Facebook poultry fancier/homesteading pages, and through posts in Front Porch Forum (a neighborhood forum/newsletter site). 
Because there were no reliable numbers on backyard chicken ownership in Vermont, we simply strove to get at least 30-50 flocks, in line with previously published studies on backyard poultry flocks (10, 13). The survey consisted of three basic sections: questions about the owners' flock, including number of birds, breeds, other animals present, and housing/feed; questions on owners' biosecurity and egg handling habits; and questions about perceptions of risk. This survey was deemed exempt by UVM's Institutional Review Board (Approval #: STUDY00000237). Statewide survey To obtain statewide, representative data, we contracted with UVM's Center for Rural Studies to field a statewide survey designed to determine the percent of Vermonters who keep backyard chickens as well as the habits of backyard chicken owners (Supplementary Table S2). It consisted of four types of questions: questions about the owners' flock, including number of birds, breeds, other animals present, and housing/feed; questions on owners' biosecurity habits; questions about perceptions of risk; and demographic questions. This survey was approved by UVM's IRB as a modification of STUDY00000237. Cloacal swabbing Cloacal swabs were taken from each bird and tested for the presence of S. enterica (23). Prior to starting sampling, our cloacal swab procedure was approved by UVM's IACUC board (IACUC protocol 19-053). Data on each chicken's breed were recorded along with farm and swab number. A sterile swab was inserted into the cloaca of the bird to collect fecal matter, returned to a sterile, labeled test tube, and brought back to the laboratory for testing using standard Salmonella enrichment protocols (23), with pre-enrichment in buffered peptone water (BD Difco, Franklin Lakes, NJ or Thermo Scientific, Waltham, MA) for 24 h, followed by sub-culturing into tetrathionate and Rappaport-Vasiliadis Broth (BD Difco, Franklin Lakes, NJ) at 37 and 42 °C, respectively, for 24 h. Tetrathionate and Rappaport-Vasiliadis enrichments were then streaked onto xylose lysine tergitol 4 agar (BD Difco, Franklin Lakes, NJ) and incubated at 37 °C for at least 24 h. After 2019, XLT4 plates were incubated for 48 h, to better capture slow-growing samples. Bedding sampling A roughly quart-sized sample of soiled bedding was collected, either by the owners or by our team, and placed in a clean Ziploc bag or container. The sample was brought back to the laboratory and frozen at −20 or −80 °C if it could not be processed immediately. To detect S. enterica, a 25 gram sample was weighed out aseptically into a stomacher bag. One hundred milliliters of Buffered Peptone Water (BD Difco, Franklin Lakes, NJ) was added and the sample stomached (Seward, West Sussex, UK) for 1 min on the standard stomaching speed or hand-massaged for 2 min. Samples were pre-enriched at 37 °C for 4 h to allow for bacterial recovery before proceeding with the Salmonella detection protocol above. Salmonella confirmation Presumptive positive colonies from XLT4 (black, pink, or colorless on a red background) were streaked for isolation and screened with PCR for the hilA gene (24). We developed our own primers, as the primers in Pathmathan et al. did not work well for us. Primer sequences: hilA_FW_2 5' GGA CAG GGC TAT CGG TTT AAT 3' and hilA_RV_2 5' CAA ACT CCC GAC GAT GTA TTC T 3'. DNA for PCR was obtained by boiling a single colony in 100 µl of ddH2O and centrifuging to pellet cell debris.
PCR was performed using GoTaq Colorless 2X master mix (Promega, Madison, WI) in a BioRad T100 thermal cycler (BioRad, Hercules, CA), and results visualized via a 1% agarose gel. PCR cycling conditions were as follows: 95 °C for 5 min, followed by 30 cycles of 95 °C for 1 min, 50 °C for 1 min, 72 °C for 1 min. A final extension was performed at 72 °C for 5 min before cooling to 12 °C. Statistics Chi-squared analysis was used to determine statistically significant differences between the pilot and statewide surveys and among groups within the survey, with a Fisher's Exact Test for samples with ≤4 in a category. Statistics were run in Rstudio (version 3.5.2) or in SPSS (version 1.0.01275). Results We conducted two surveys. The first was a survey of backyard chicken owners in Vermont who agreed to cloacal swabbing or bedding sampling of their flocks from 2019 to 2021 (hereafter referred to as the "pilot survey"). The second was a large survey of Vermonters conducted by the UVM Center for Rural Studies, designed to determine the percentage of Vermont residents with backyard chickens and the knowledge and biosecurity habits of backyard chicken owners in Vermont (hereafter referred to as "Statewide survey"). Pilot survey Our pilot survey yielded 43 respondents from 2019 to 2021. The mean number of birds in each flock was 10, while the median number was eight (Table 1). However, flock size ranged from 2 to 75 birds. Most (30/43) purchased at least some birds sourced from a commercial hatchery, either directly or through a feed store. A quarter (11/43) had also acquired birds from a friend, acquaintance, the humane society, or via chicken swaps. Two owners had also purchased birds at fairs, and five owners reported hatching chicks from their own flocks. Most flocks (29/43) were penned in a fixed area around their coop at least some of the time, while 37.2% of flocks (16/43) were free ranged at least part of the time, and four flocks used a form of mobile chicken unit. All respondents fed at least some commercial feed to their birds, though most supplemented with table scraps (28/43) or forage (25/43). Most flocks (86%; 37/43) had at least one other species of domestic animal present on site, with dogs (31/43) and cats (21/43) being by far the most common. Four owners also had horses, and five had goats (2/5 farms with horses and chickens also had goats). Two farms had rabbits, one farm had llamas, and one farm had sheep. However, it was unclear how much interaction occurred between domestic animals. We asked owners whether their flocks might have contact with wildlife. Most answers correlated with the housing situations the owners had described. However, in four cases, owners indicated that their chickens probably did not have exposure to wildlife, despite indicating the birds were free ranged at least part of the time, and six owners said their birds definitely had exposure to wildlife, despite indicating the birds were penned without free-range access. These answers, while at first surprising, are understandable; the penned flocks may have previously experienced predation, leading to the "definitely yes" response. Conversely, the owners of the free-range birds who were certain their birds did not interact with wildlife may not think of wild birds as wildlife.
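As a rough illustration of the analysis described in the Statistics section, the kind of two-group comparison reported in the Results can be reproduced in a few lines of R (the paper mentions RStudio); this is only a sketch under stated assumptions, not the authors' actual script, and the counts are taken from the shoe-changing results reported in this study (22/43 pilot-survey owners versus 171/401 statewide-survey owners).

# Minimal sketch of a pilot vs. statewide comparison of one biosecurity habit.
# Counts are illustrative, drawn from the reported shoe-changing results.
changed <- c(pilot = 22, statewide = 171)
total   <- c(pilot = 43, statewide = 401)
tab <- rbind(yes = changed, no = total - changed)   # 2 x 2 contingency table

chisq.test(tab)    # chi-squared test of independence between the two surveys
fisher.test(tab)   # exact test, used instead when any cell count is <= 4

The same construction extends to any of the survey proportions compared below; per the stated methods, Fisher's exact test simply replaces the chi-squared test whenever a cell of the table falls at or below four observations.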
Motivations for keeping poultry varied (Table 2), but the most popular top three reasons for keeping poultry in 2019-2021 were that eggs from backyard chickens were tastier (28/43 responses), chickens were fun/pets (22/43 responses) and that backyard chicken eggs were healthier (25/43 responses). Sustainability came in fourth (19/43), followed by "A good experience for children" (14/34). Notably, for the five participants who chose "chickens are a good learning experience eggs were dirty. Only 22/43 (51.2%) of owners indicated they changed their shoes after walking around in the chicken area; an additional owner said they washed their shoes "if dirty from chicken area." Finally, eight owners (18.6%) said they wore masks while cleaning their coops. Last, we asked about interactions of children with the backyard flocks. Only 16/43 (37.2%) respondents said they kept children from snuggling birds, while just four (9.3%) respondents indicated they kept children from interacting with their flock. During 2021, we asked about the frequency of children interacting with the flocks. Of the nine surveys from 2021, 7/9 indicated that children "often" or "always" interacted with the birds. Only one of these respondents also indicated that they tried to keep children from snuggling birds, while one respondent who indicated children "sometimes" interacted with their flock also noted that they tried to keep children from snuggling chickens. We asked a number of questions designed to provide insight into owners' knowledge and risk perception around backyard chickens and Salmonella risks (Table 3). All owners recognized that chickens could probably or definitely have Salmonella without seeming sick. We were curious whether backyard chicken owners thought backyard chickens were less likely to have S. enterica than commercial flocks. Most owners thought backyard chickens were either less likely (65.1%; 28/43) to have S. enterica than commercial chickens or that there was no difference (25.6%; 11/43). Just four owners (9.3%) said that backyard chickens might be more likely to have S. enterica than commercial chickens. Nearly half (48.8%; 21/43) of the respondents thought chickens purchased from commercial hatcheries might be more likely to have S. enterica, 41.9% (18/43) thought there was no difference, and 9.3% (4/43) thought the chickens might be less likely to have S. enterica. We asked about the safety of eggs, both from backyard flocks in general and from the respondent's own backyard flock. Salmonella prevalence in pilot survey During the pilot survey, we also conducted Salmonella sampling of the flocks, and continued after the end of the pilot survey. Flocks we sampled were concentrated near Burlington, Vermont; however, we sampled flocks across most of Vermont (Figure 1). Of the 42 flocks we sampled, eight tested positive for S. enterica (19%). During 2019, we sampled 28 flocks and found no S. enterica. In 2020, we sampled one flock, and found S. enterica. In 2021, we sampled 13 flocks, and found S. enterica in seven flocks (53.8% positive). Our sampling methods could have influenced S. enterica rates. During 2019-2020, we exclusively used cloacal swabs for sampling; in 2021, we transitioned to using a mix of cloacal swabs and bedding samples. Cloacal swabs are a less sensitive method of sampling than fecal or soiled bedding sampling (25); however, of the eight S. enterica positives, six were from cloacal swabs, so the increase in S. enterica rates is unlikely to be due to the move toward bedding samples. S.
enterica was slightly more common in rural flocks. While we had a mix of urban and rural flocks in all years we sampled, we found that 5/8 of the positive flocks were located in rural or semi-rural areas (we defined "semi-rural/urban" as a house in an urban area which backed up onto a large undeveloped/rural area or a cluster of houses/housing development in an otherwise rural area), and only three S. enterica-positive flocks were located in urban areas. In contrast, housing was not a major factor; 3/8 flocks with S. enterica were free-ranged at least some of the time, while 5/8 flocks with S. enterica were kept penned. We did not ask whether pens had roofs. Finally, seven out of 43 owners surveyed in the pilot study indicated they had had diarrhea in the last year or since they'd gotten their birds, whichever was more recent, while four didn't remember, and the remainder had not had diarrhea since acquiring their birds/in the last year. Despite this being a common question in previous published surveys, we did not find it predictive. In the five surveyed flocks which we found had S. enterica, 3/5 owners responded that they had not had diarrhea, one couldn't remember, and only one indicated she'd had diarrhea in the past year/since getting her chickens. Statewide survey Because Vermont does not require owners to register their flocks, the number of backyard chickens in Vermont was unknown, and the representativeness of our pilot survey was unclear. To obtain statewide, representative data on the number of backyard chicken owners in Vermont and their knowledge and biosecurity practices, we contracted with UVM's Center for Rural Studies to conduct a large-scale survey using CRS databases of contact information. The Poultry housing is a key factor in biosecurity, determining whether the birds are likely to come into contact with wild animals and birds or their droppings. The most common housing setups for chickens were a coop with a fixed pen area outside the coop (Table 1; 49.5%; 191/386) or free range outside the coop (46.6%; 180/386). From sampling visits to rural farms, we have noted that many of the fixed penned areas outside coops are not covered and would allow wildlife to enter the area. Only 4.1% of owners kept their chickens entirely indoors (16/386), and 6.7% reported employing a mobile chicken unit (26/386). Eleven percent of owners used a combination of approaches, including both free range and penned areas, or fixed and moveable grazing areas, or a mobile chicken coop in summer with a fixed coop in winter, among others. Owners fed their chickens a variety of different foods. Commercial feed was the most common food source (93.5%; 375/401), but chickens also frequently received table/food scraps (73.3%; 294/401) and forage (72.3%; 290/401). Given the frequency of free-range housing for birds (40.4%), it is not surprising that forage was a common food option. However, the prevalence of forage as a food source suggests that either some owner with fixed-pen housing believe their chickens have access to forage within those pens (which is possible with a large enough pen) or that they sometimes allow their chickens to free range. Owners reported a variety of approaches to sick chickens, with "home remedies not specifically natural" being the most common response by a narrow margin (29.2%; 117/401). However, the frequency for "antibiotics/veterinary-prescribed medications, " "natural remedies (herbs, essential oils), " and "put them down" were all in the range of 27.2-29.2%. 
Among the 6.5% of respondents reporting "other, " 11 reported not treating their birds or letting "nature take its course, " eight reported isolating and monitoring the bird, and seven noted that treatment would depend on various factors. We asked owners where they got their information on raising chickens, to determine whether official resources were reaching backyard chicken owners in Vermont. Just 26% (98) of owners reported having taken a relevant food microbiology or food safety class or a training that included food safety. Most owners reported getting their information about raising chickens from talking with other chicken owners (67.3%; 270/401), from books (57.9%; 232/401), or from instructional/informational websites or YouTube (57.9%; 232/401). About 21% (84/401) of owners reported getting information from Facebook or social media sources (which vary wildly in quality and accuracy of information), while 16.2% got their information from veterinarians (65/401), and 17.7% reported getting their information from university extension websites or trainings (71/401). UVM Extension's website does not have materials on raising backyard chickens, so owners would probably be accessing materials from other state extension organizations. Finally, 14.7% reported using magazines as a source of information, which is unsurprising, given that magazines such as Backyard Poultry are specifically intended for this audience. Just 4.5% (18/401) of respondents chose "other, " with 17 indicating "experience" as a source of information, and one respondent having taken a relevant class. Overall, it seems that CDC/USDA messaging around biosecurity and backyard chickens may not be reaching its target population well. Perhaps unsurprisingly, given the informal nature of owner education, only a slim majority of owners (60.2%; 228/379) were aware that chickens could carry S. enterica (referred to in the survey as "Salmonella") without seeming sick. Over 37% indicated they did not know (37.2%; 141/379), and only 2.6% thought chickens could not carry Salmonella without seeming ill (10/379). Knowing which part of the egg was a risk for S. enterica was less common; only 41.9% (158/377) indicated S. enterica could be either inside or outside the egg, while 12.7% (48/377) thought Salmonella was only on the outside, and fully 43% of respondents chose "I don't know" (162/377). Owners also overall thought backyard chickens were less likely to have issues with S. enterica than commercial flocks, with 51.2% (193/377) of respondents indicating that backyard chickens were less likely to have Salmonella than commercial flocks. Just 13.8 (52/377) thought backyard chickens were equally likely to have Salmonella, while 2.4% (9/377) thought backyard chickens might be more likely to have Salmonella than commercial flocks, and nearly 33% replied that they didn't know (123/377). When asked about the risk of Salmonella from eggs, results were similar. 40.7% (154/378) of owners thought backyard chicken eggs were less likely to contain Salmonella than eggs from the store, 35.4% (134/378) didn't know, 17.7% thought they were equally likely (67/378), and 6.1% (23/378) thought they were more likely to contain Salmonella than eggs from the store. Owners were more confident about the safety Frontiers in Veterinary Science frontiersin.org . /fvets. . of eggs from their own flocks, with 46% (174/378) indicating they were safer than eggs from the store, 32.3% (122/378) being unsure, and just 3.2% (12/378) indicating they were less safe. 
Unsurprisingly, given this patchy knowledge of chickens being a risk for S. enterica, biosecurity practices varied (Table 2). Roughly 75% of owners indicated they habitually washed their hands after handling chickens (304/401) or dirty eggs (300/401), and 68.6% (275/401) reported washing and/or sanitizing their hands after handling any eggs (regardless of dirty/clean appearance). However, fewer than half avoided kissing their birds (47.4%; 190/401) or snuggling their birds (36.4%; 146/401). Only 42.6% of owners (171/401) reported they changed shoes after walking in the chicken area, despite more than 46% of birds having been previously reported to be free range, and just 31.2% of owners reported wearing a mask when cleaning the coop (125/401). Finally, despite 40.2% of respondents reporting having children in the household, just 21.4% (86/401) reported keeping children from snuggling birds, and only 12.2% kept children from interacting with chickens (49/401). However, when asked specifically how often children interacted with their chickens, just 23.5% (88/376) owners indicated that children often or always physically interacted with birds (petting, picking up, etc.), while 42% (158/401) indicated children rarely interact physically with their chickens, and 34.6% of owners indicated children never interact with their chickens (130/401). Anecdotally, when interacting with backyard chicken owners across Vermont, we frequently observed children snuggling or kissing chickens, including in one case a neighbor's child who frequently visited the chickens. Consequently, there is substantial room for improvement in biosecurity habits among families with children. Discussion We found (i) overall, a high proportion of backyard chicken flocks from 2019 to 2021 had S. enterica,; (ii) backyard chicken owners were wealthier and more educated than the average Vermonter, but generally lived in rural areas; (iii) participants in the statewide survey had much lower uptake of good biosecurity habits compared to the pilot survey; (iv) despite increased messaging about backyard chicken-associated salmonellosis and good biosecurity measures over the past several years, uptake of biosecurity measures is extremely inconsistent, and rates of unsafe practices such as kissing or cuddling chickens, have increased in Vermont. S. enterica in Vermont backyard chickens We found S. enterica in 8/42 flocks tested over a 3-year period from June 2019 to December 2021, with 1 positive flock in 2020 (out of 1) and seven positive flocks (out of 13) in 2021. This is a substantially higher rate of S. enterica than has been previously reported. McDonagh et al. assessed the prevalence of S. enterica in 53 urban backyard chicken flocks in 2016-2017 using a mix of cloacal swabs, dust samples, and fecal samples (5). Just one flock (1.9%) was positive for S. enterica (5). A study published in 2019 of backyard poultry flocks in the counties surrounding Seattle, WA, also found S. enterica in a single flock (1/34; 2.9%). Finally, a study by the California Animal Health and Food Safety Laboratory System evaluated rates of S. enterica in dead chickens submitted for laboratory evaluation from 2012 to 2015 (26). They found S. enterica in just 1.6% of birds (37/2,347 birds) over this 3-year period, testing multiple samples from each bird (26). They found S. enterica rates did not vary substantially by year, with rates of S. enterica ranging from 1.7 to 2.1% of samples from 2012 to 2015 (26). 
A similar study was performed in Canada by the Animal Health Laboratory of Ontario over a 2-year period, with BYC and small flock owners sending in recently deceased birds for autopsy (27). Two hundred and forty-five chickens were received from a total of 160 farms, and just five farms (3%) had chickens positive for S. enterica. Finnish researchers also investigated S. enterica prevalence in backyard poultry sent in for necropsy over an 11-year period, and found no birds positive for S. enterica (28). However, Finnish law requires owners to keep their birds indoors from March to the end of May each year when the wild birds are returning, and this may have reduced S. enterica rates in backyard chickens (28). Additionally, it is possible that Finnish hatcheries have eliminated S. enterica from their breeder flocks, which would substantially reduce prevalence in backyard flocks, since Finland prohibits the importation of poultry (28). The only study which found similar rates of S. enterica in backyard chickens was performed in Australia and found Salmonella in 10.4% of flocks (4/30) (25). Consequently, our study presents a massive increase in S. enterica over previous studies in North America. A potential reason for this sudden increase in S. enterica in backyard flocks in 2021 is the outbreak of salmonellosis from S. enterica serovar Typhimurium in songbirds, specifically in Pine Siskins, a species whose range includes Vermont (29-31). This outbreak was reported in April 2021, and led to 29 illnesses across 12 states in the United States, including New Hampshire (32). While no illnesses were reported in Vermont, the Vermont State Department of Health had seconded all their personnel to COVID-19 response; consequently, salmonellosis cases were not investigated and therefore not reported (33). Intriguingly, 4/7 positive flocks in 2021 tested positive during the songbird outbreak period. Further research is needed to determine whether these S. enterica isolates are similar to the outbreak strain. Survey data More than 80% (83.7%; 36/43) of respondents to the pilot survey were female, compared with 66.7% of the statewide survey respondents (Table 4). Just over half (55.3%; 21/38) of the pilot survey respondents with available address information were living in rural areas, as assessed by Google Maps imagery. In contrast, 82.2% of respondents in the statewide survey indicated they lived in rural areas. We did not ask about education or income in the pilot survey. The most recent census data (2021) reported 260,029 households in Vermont, meaning our statewide survey represents 0.67% of Vermont households (22). Highly educated Vermont residents were over-represented in our survey; 38% of Vermonters overall have a Bachelor's degree or higher in Vermont, with 52.6% of residents aged 25-64 having at least a technical degree or certificate beyond high school (34), compared with 72% of our survey population with at least a technical degree or certificate beyond high school (22). In tandem, the majority (67.1%) of our owner pool had a higher income than Vermont's median salary of $61,973 (22). Overall, the data suggest that backyard chicken owners in Vermont are more highly educated and earn a higher income than the average Vermonter. This is similar to the findings of McDonagh et al. in their survey of backyard chicken owners in Massachusetts. 
In their sample, 79.6% (39/49) chicken owners had a household income of $100,001->200,000 per year, compared to the 2016-2020 median income for Boston of $76,298, and an equal number had a graduate degree. Kauber et al.'s study in Seattle did not ask about income, but also found that 48% (24/50) of their survey respondents had a graduate degree (12). In a 2014 survey of backyard chicken owners across the United States (though 61% of respondents were from California), the majority of respondents were also female and highly educated, with 67% of respondents having completed a 4 year degree or higher (35). Additionally, 41.2% of respondents had incomes of >$100,000 per year (35). This was perhaps influenced by the preponderance of responses from California, but very few respondents, even in rural areas, indicated they kept birds for income (35). Similarly, a recent nationwide survey of backyard poultry owners in France found backyard poultry owners were most commonly middle-aged and nearly 30% were in senior management, suggesting a comfortable income (36). Consequently, efforts to educate backyard chicken owners, should take advantage of their target audience's education to create nuanced materials that reflect the level of uncertainty around the actual risk of backyard poultry to the owners. Despite the high levels of education, populations surveyed do not fully understand the risks associated with backyard chickens. Most (83.7%; 36/43) of owners in the pilot study thought chickens could have Salmonella without seeming ill, and 60.2% (228/379) of respondents to the statewide survey answered "yes" to the same question. In the pilot survey, only 51% of respondents knew that Salmonella could be inside an egg, while only 41.9% of the statewide survey respondents were aware of this. In the pilot survey, 30% of respondents chose the "unsure" response to this question, while 43% of the statewide survey respondents chose "I don't know" in response to this question. When asked about the relative likelihood of S. enterica in backyard flocks vs. commercial flocks, 65.1% of owners in the pilot survey thought backyard chickens were less likely to have Salmonella and 25.6% thought there was no difference in likelihood. In contrast, in the statewide survey, only 51.2% thought backyard chickens were less likely, and 13.8% thought they were equally likely to have Salmonella. However, the pilot survey did not have an "I don't know" option for this question, which may have influenced the results, as for the statewide survey, 32.6% of respondents chose this option. Unsurprisingly, owners of backyard chickens inconsistently employ risk-mitigation measures. Seattle, Boston, and Canadian (Ontario) owners washed hands after handling chickens or ducks 98, 65.3, and 94% of the time, respectively, (10, 12, 37) while 84.1% of chicken owners in our pilot study and 75.8% of owners in our statewide study reported washing their hands after handling their chickens ( Table 5). The rates of handwashing after handling chickens were likely higher in our pilot study than our statewide study due to the higher percentage of urban owners in our pilot study or due to the higher engagement/interest in biosecurity required of owners in the pilot study. However, this does not explain the lower rates of handwashing in Boston vs. Seattle and Ontario, Canada. 
Similarly, 86% of Seattle owners reported washing their hands after handling raw eggs, while 65.3% of Boston owners (10), 60.5% of our pilot study respondents, and 68.6% of our statewide survey respondents washed their hands after handling eggs. The reason for this disparity between Seattle and the Northeast is unclear. In contrast, masking while cleaning the coop was adopted at similar rates in Seattle and Vermont; 28% (13/47) of Seattle BYC owners reported wearing a mask to clean their coops (12), compared with 18.6% of owners in the pilot survey and 31.2% of owners in the statewide study. The small increase in masking from our pilot study to our statewide study may be due to the increased accessibility and use of masks overall during the COVID-19 pandemic. Risk factors for acquiring S. enterica from live poultry include picking up birds, kissing birds, and snuggling birds (4). In Seattle, 22% of owners admitted to snuggling, kissing, eating/drinking, or touching their face around their adult chickens and 26% admitted to the same practices around chicks (Table 5) (12). Additionally, 24% allowed poultry to live in their houses or to have access to patios or kitchens (12). In Boston, the authors found that nearly all (96.5%) of owners picked up their birds, while only 41.4% hugged them and just 10.3% kissed their birds (10). Additionally, 68.9% of children picked up their chickens, 37.9% of children hugged chickens and 17.2% of children kissed chickens (10). In our pilot study, we found numbers similar to McDonagh's for snuggling and kissing; 58.1% avoided snuggling their birds, and 90.7% avoided kissing their birds, suggesting that 41.9% may have snuggled their birds, and 9.3% may have kissed their birds. However, in our statewide survey, we found substantially higher rates of probable snuggling, with only 36.4% of owners indicating they avoided snuggling their birds, and only 47.4% of owners indicating they avoided kissing their birds. We explored whether the decrease in good biosecurity habits was associated with a higher number of rural respondents, but we found that overall, rural participants in the statewide study were slightly more likely to adopt good biosecurity habits, compared to urban participants (though this could be due to having only 31 urban participants in the statewide survey; Supplementary Table S3). Finally, 40% of our sample had children in their household, and 23.5% of owners had children often or always petting or picking up their birds (χ2 test, p < 0.001). This is relevant, as children are considered more likely to develop salmonellosis, and indeed, the Vermont Department of Health investigations on live-poultry associated salmonellosis found that 30.2% of patients were under 10 years old (4,21). Significantly, the implied rates of bad biosecurity habits are overall similar to the exposure characteristics found in a compilation of characteristics connected with Salmonella cases from live-poultry exposure; 59% of live poultry-associated salmonellosis patients had held/snuggled their birds, and 13% had kissed their birds (4). This indicates that outreach efforts since 2013 have effected minimal to no change in poultry owners' interactions with their birds, which is supported by the low reach of university extension websites in Vermont (just 17.7%) to backyard chicken owners.
We did not specifically ask about government websites, such as the CDC's Healthy Pets, Healthy People web page on backyard poultry (32), so it is unclear how much reach these have had, though based on the survey responses, governmental websites are unlikely to be reaching more than 60% of the backyard chicken owners in Vermont. The CDC recently ran a series of focus groups with backyard poultry owners (38). The key finding was that despite the fact that all participants were aware that kissing chickens was risky and handwashing was important, participants were unwilling to reduce physical contact with poultry (38). This correlates with the relatively low percentage of owners avoiding kissing and snuggling their birds in our study, and to a lesser extent, in the Boston study (10). One reason for this was that the CDC focus group participants didn't believe they were at risk for salmonellosis, preferring to view themselves as "responsible owners," and did not view salmonellosis as a genuine threat (38). Additionally, participants indicated that risk-based messaging was not persuasive (as is also obvious from studying backyard chicken social media pages after a CDC outbreak report). This may indicate that public health entities should move away from a risk-based focus to a more positive messaging style that encourages best practices. Indeed, when the CDC asked poultry owners about the style of messaging they preferred, participants preferred "visually appealing and eye-catching images [and] layout," ideally with pictures of baby chicks, and disliked negative messaging about the number of live poultry-associated outbreaks (38). Ultimately, it is still important for flock owners to know that there are risks, so perhaps providing hard numbers on the frequency of S. enterica in backyard chickens and the most beneficial and easy-to-implement biosecurity strategies to reduce risk would strike the balance of informing and encouraging without employing "scare tactics." Further, tweaking federal messaging to increase its relevancy for the state or local context, and increasing the use of social media outreach (for instance, on Instagram) could increase the potency and reach of biosecurity messaging. An example of the way forward is potentially the U.S. National Park Service's Instagram page (https://www.instagram.com/nationalparkservice/). Combining humor, pop culture references, and facts (e.g., the danger of approaching bison and other wildlife or facts on wildlife), it has attracted a substantial following (4.1 million followers as of June 2022) and has high engagement with its posts. In addition to determining how to reformulate biosecurity messaging to be more persuasive, the biosecurity risk of bird feeders adjacent to domestic poultry should be included in future messaging. While the uptick in Salmonellosis in Vermont backyard birds may or may not be related to the songbird outbreak of 2021 (39), a recent study in Georgia found that wild birds frequent chicken coops where there is accessible chicken feed (40). Northern Cardinals, a species commonly affected by S. enterica outbreaks, spent the most time around chicken coops, which demonstrates a strong potential for spillover infections into backyard poultry (40,41). Further, a study in Canada found that 10% of all European Starlings and House Sparrows collected near broiler houses were positive for S. enterica (42).
Consequently, biosecurity messaging should include the importance of keeping poultry feeding stations inaccessible to wild birds (i.e., inside the coop) and not having bird feeders in the same yards as domestic poultry. This could be framed as both protecting the wild birds from acquiring S. enterica from the poultry, as well as protecting the poultry from the wild birds, without passing undue blame on either species. Conclusions and future directions Despite several years of messaging campaigns from state and national health organizations, little has changed in the habits and perceptions of backyard chicken owners. This indicates an acute need for more effective communication surrounding biosecurity practices. Meanwhile, the S. enterica rate in backyard chickens fluctuates wildly, for reasons that are still unclear. Key problems to solve in future include (i) increasing the reach of information around biosecurity best practices and (ii) increasing the effectiveness of this messaging. Additionally, backyard chicken owners need clearer information on the risks associated with close contact with domestic poultry, so they can make informed decisions. This requires ongoing investigation on S. enterica rates in different contexts to determine which factors (e.g., temperature, season, housing, feed, wildlife exposure, etc.) increase the likelihood of S. enterica in backyard poultry. Future directions for this project include completing sequencing of the S. enterica isolates from this research to determine whether they are from serovars which commonly cause human illness and working with communications professionals to develop and test messaging relevant to Vermonters/the Northeast in both rural and urban contexts. Funding This research was funded by the Vermont Agricultural Experiment Station-Hatch funds startup funds (2019-2020) and by Award No.VT-H02812MS (2021-2024) as part of Multistate Project S1077.
2022-09-22T13:23:26.447Z
2022-09-22T00:00:00.000
{ "year": 2022, "sha1": "9dd0b3c08389e2a3c2dea175875fcf6f4fdb4748", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "9dd0b3c08389e2a3c2dea175875fcf6f4fdb4748", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
247978489
pes2o/s2orc
v3-fos-license
Malaria at international borders: challenges for elimination on the remote Brazil-Peru border ABSTRACT Understanding local epidemiology is essential to reduce the burden of malaria in complex contexts, such as Brazilian municipalities that share borders with endemic countries. A descriptive study of malaria in the period 2003 to 2020 was conducted using data from the Malaria Epidemiological Surveillance Information System related to a remote municipality with an extensive border with Peru to understand the disease transmission, focusing on the obstacles to its elimination. The transmission increases at the end of the rainy season. During the period of 18 years, 53,575 malaria cases were reported (Mean of API 224.7 cases/1,000), of which 11% were imported from Peru. Thirteen outbreaks of malaria were observed during the studied period, the last one in 2018. The highest burden of cases was caused by P. vivax (73.2%), but P. falciparum was also prevalent at the beginning of the study period (50% in 2006). Several changes in the epidemiological risk were observed: (1) the proportion of international imported cases of malaria changed from 30.7% in 2003 to 3.5% in 2020 (p<0.05); (2) indigenous people affected increased from 24.3% in 2003 to 89.5% in 2020 (p<0.0001); (3) infected children and adolescents < 15 years old increased from 50.2% in 2003 to 67.4% in 2020 (p<0.01); (4) the proportion of men decreased from 56.7% in 2003 to 50.4% in 2020 (p<0.01); (5) the likelihood of P. falciparum malaria has significantly declined (p<0.01). The number of cases and the incidence of malaria in 2019 and 2020 were the lowest in the period of 18 years. The burden of malaria in indigenous areas and its determinants, seasonality, geographical access and the long international border are obstacles for the elimination of malaria that must be overcome. INTRODUCTION Currently, the world is committed to the elimination of malaria. Aiming at "a world free of malaria", one of the global technical strategies (GTS) for 2030 is to reduce malaria mortality and case incidence rates by at least 90% when compared to 2015 1 . Transforming malaria surveillance into a core intervention is one of the pillars of GTS. Reliable information and analyses are essential for planning actions and for identifying populations and areas at higher risk for malaria 2 . In Brazil, 144,888 cases of malaria were reported in 2020. From 2000 to 2015, Brazil showed a significant reduction in the number of malaria cases (76%), reaching one of the millennium goals 3 . However, when compared to 2016, there was a 61% increase in the number of cases in 2017 4 . The disease has a heterogeneous distribution and its control is focused on certain locations where most of the cases are concentrated and there is a high unstable transmission. As malaria results from specific ecological, sanitary, political and cultural conditions in each place 5 , it is necessary to know the local epidemiology in which the disease is present to establish adequate measures to fight against it. Different determinants are pointed out for malaria endemicity and they are related to the difficulty of disease elimination; the main determinants are: gold mining malaria [6][7][8][9][10] , malaria in indigenous area 11,12 , urban malaria [13][14][15] , border malaria [4][5][6][7][8][9][10][11][12][13][14][15][16] and malaria in conflict zones 17 , each one with different degrees of endemicity. 
Atalaia do Norte, one of the largest municipalities in Amazonas State, Brazil, is an area historically affected by malaria, with a high incidence of the disease, although it has been poorly studied until now. This study analyzed the epidemiology of malaria in this municipality during the last 18 years (2003)(2004)(2005)(2006)(2007)(2008)(2009)(2010)(2011)(2012)(2013)(2014)(2015)(2016)(2017)(2018)(2019)(2020) to understand the disease transmission, focusing on the surveillance of malaria as a core intervention to achieve elimination. Study area The municipality of Atalaia do Norte (Latitude: 4° 22' 20'' South, Longitude: 70° 11' 33'' West) is located in the Southwest part of Amazonas State, Brazil, in the Alto Rio Solimoes region, at a distance of 1,138 km in a straight line from the State's capital Manaus, and it is accessible only by river. The municipality has an area of 76,355 km 2 , an estimated population of 19,438 habitants (mean of 0.25 inhabitants per km 2 ) and an extensive international border with Peru in the North and West 18 . There are 147 locations scattered over four extensive river basins: Itacoai, Itui, Curuca and Jaquirana, in an inhospitable and difficult-toacess region. Atalaia do Norte is located in the Amazon rainforest, with the rainy season between November and March, decreasing in April and May and the dry season from June to October, when the rivers decrease in volume and reach the lowest levels. Approximately 80% of the population is indigenous. Part of the Vale do Javari Indigenous Land is located in this municipality, one of the largest indigenous lands in the world, inhabited by the Marubo, Matses (Mayuruna), Kanamari, Matis, Kulina-Pano and Korubo ethnic groups. Peruvian and Colombian immigrants also live in the municipality's headquarters. Vale do Javari is the territory with the highest concentration of isolated (and not contacted) indigenous people in the world. Access to rural locations is by small boats and occasionally by air with small landing strips in some villages and flights conducted by the military in small aircraft that provide services to the State, at a very high cost. There are no commercial planes in this area. Laboratory diagnosis and treatment are supported by the Brazilian Ministry of Health. The local health service is responsible for malaria outside indigenous areas while malaria in indigenous areas is managed by professionals from the Special Indigenous Unit of the Vale do Javari Sanitary District (DSEI-VJ) organized in seven base poles with health structures. All cases must be reported to the Epidemiological Surveillance Information System (SIVEP-Malaria), a robust Ministry of Health database filled in by local health workers. Epidemiological design A descriptive study of the information contained in SIVEP-Malaria was carried out from 2003 to 2020 for the municipality. All positive cases by the probable location of infection were filtered for variables such as origin (imported vs autochthonous), spatial (municipality, urban area, rural area, indigenous area and settlements), temporal (month and year), demographic (sex and age group) and parasitological (Plasmodium species). An endemic curve was constructed and analyzed using the number of cases for each month in the period studied. The time series were observed to separate different periods for the analysis. 
As the data series met the normality criteria, the endemic curve was elaborated in two stages: a) the mean and standard deviations of the cases from each month were calculated for the whole period (2003-2020). The upper limit was calculated by adding two standard deviations to the mean. The lower limit was calculated by subtracting two standard deviations from the mean. Months that exceeded the upper limit of the average were excluded from the initial analysis; b) the means and standard deviations were recalculated for each month, after excluding the epidemic months, to build the expected endemic curve for the municipality. Graphs were made with the expected and the observed number of cases. Months were considered epidemic when they exceeded the expected upper limit. Such classification was an adaptation for the study to demonstrate the frequency variation in the time series. For the analysis of the duration of epidemics, the classification described by Braz et al. 19 was used: short (one to four epidemic months in a year); medium (five to eight epidemic months in a year) and long (nine to 12 epidemic months in a year). Statistical analysis All data were analyzed using the Epi Info 7 software (Centers for Disease Control and Prevention, Atlanta, USA). Absolute and frequency analyses of variables were performed. The χ2 test was used to analyze the proportions of sex, age group, origin of cases and Plasmodium species. To quantify the strength of the association between variables, odds ratios (OR) were used. Mean and standard deviation (SD) were calculated for quantitative data. The Student's t-test was used to compare means (especially time variables). A p value <0.05 was considered significant, with 95% confidence intervals (CI95%) for all hypothesis tests. Ethical considerations The research was submitted and approved by the Institute Oswaldo Cruz (IOC) Research Ethics Committee (CAAE 74999617.7.0000.5248) and by the National Research Ethics Commission (CONEP), opinion Nº 2,666,109, issued in May 2018. RESULTS From 2003 to 2020 a total of 53,610 (annual mean ± SD: 2,976 ± 1,377, 95% CI: 2,292-3,662) cases of malaria were reported in the municipality of Atalaia do Norte (Table 1). Between 2005 and 2008 there was an increase in the number of cases, followed by a reduction from 2009 to 2011 and a new peak in 2012. However, from 2013 to 2020 the municipality showed a decrease in the number of cases. The average API (annual parasitic incidence) was 224.7 cases per 1,000 inhabitants (range 37.6-373.8 cases per 1,000 inhabitants). The API in 2019 was 78.2, the lowest in the last 15 years. Despite the decrease in API, the municipality remained classified as having a high epidemiological risk throughout the period of analysis (Figure 1A). Of the total reported cases, 47,691 (89.0%) were autochthonous and 5,919 (11.0%) were imported from other municipalities or countries. Of the imported cases, 93.5% were from other States and/or countries. During the period, there was a great fluctuation in the ratio of autochthonous to internationally imported malaria cases. The likelihood of an autochthonous case was 12 times higher in 2020 than in 2003 (OR= 12.3, 95% CI: 9.2-16.3, p≤0.01). Most imported cases from other countries came from Peru (Figure 1B).
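The two-stage endemic channel and the odds-ratio comparisons described above can be sketched in a few lines of code. The authors analyzed their data in Epi Info 7; the R fragment below is only an illustrative reimplementation under stated assumptions (a hypothetical data frame of monthly case counts and a Wald approximation for the confidence interval of the odds ratio), not the study's actual procedure.

# Endemic channel: stage (a) provisional limits from all months, stage (b)
# expected curve recomputed after excluding epidemic months.
# 'cases_df' is assumed to have columns year, month (1-12) and cases.
endemic_channel <- function(cases_df) {
  prov_mean  <- tapply(cases_df$cases, cases_df$month, mean)
  prov_sd    <- tapply(cases_df$cases, cases_df$month, sd)
  prov_upper <- prov_mean + 2 * prov_sd
  # months whose observed count exceeds the provisional upper limit are
  # treated as epidemic and excluded before recomputing the expected curve
  epidemic <- cases_df$cases > prov_upper[as.character(cases_df$month)]
  base     <- cases_df[!epidemic, ]
  exp_mean <- tapply(base$cases, base$month, mean)
  exp_sd   <- tapply(base$cases, base$month, sd)
  data.frame(month    = as.integer(names(exp_mean)),
             expected = exp_mean,
             upper    = exp_mean + 2 * exp_sd,
             lower    = pmax(exp_mean - 2 * exp_sd, 0))
}

# Odds ratio with a Wald 95% CI from the cells of a 2 x 2 table
# (e.g., autochthonous vs. imported cases in 2003 vs. 2020); the arguments
# n11-n22 are placeholders, not figures taken from the paper.
odds_ratio <- function(n11, n12, n21, n22) {
  or <- (n11 * n22) / (n12 * n21)
  se <- sqrt(1 / n11 + 1 / n12 + 1 / n21 + 1 / n22)
  c(OR = or, lower = exp(log(or) - 1.96 * se), upper = exp(log(or) + 1.96 * se))
}

Truncating the lower limit at zero when the mean minus two standard deviations is negative is an added assumption of this sketch, reflecting that monthly case counts cannot be negative.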
In 2019, there was a 24% decrease in the number of cases compared to 2018, but in 2020 there was an increase of 7% compared to 2019; the greatest prominence occurred in non-indigenous rural areas, which showed a 56.2% reduction in 2019 compared to 2018, but a small increase in 2020 (7.7%).

Malaria in indigenous areas

Malaria data in indigenous areas showed two distinct periods. In the first period, from 2003 to 2012, the highest number of cases was recorded (n=20,342 cases, 68.6% of autochthonous cases), characterizing it as an epidemic period; the base pole with most cases was Alto Itui (4,662 cases, 22.9%), but the average API for the period showed that the greatest burden of the disease was in the Alto Curuca (565.2 cases/1,000) and Alto Itui Poles (519.7 cases/1,000). The second period was from 2013 to 2020, when 12,316 cases were recorded, characterizing it as a post-epidemic period, and most of the cases were reported in the Itacoai Pole (2,767 cases, 22.5%). Significant differences were found when comparing the cases in indigenous areas and other autochthonous cases during the two periods (p<0.0001). The highest API in this period was 303.6 cases/1,000 inhabitants in the Middle Javari Pole. The mean number of cases in the 2003-2012 period was higher (2,906±893 cases, 95% CI: 2,080-3,732) than in the 2013-2020 period (1,540 ± 241 cases, 95% CI: 1,338-1,741) (p<0.05) (Figure 2). The proportion of indigenous people affected increased from 24.3% in 2003 to 89.5% in 2020 (p<0.0001) and, currently, the greatest burden of malaria occurs in indigenous areas.

Seasonality and epidemic peaks

The high transmission season starts in April, with a peak in June-July and a decrease in August. The low transmission season occurs between September and December. During the study period, 13 epidemic episodes were detected: 12 were classified as short-term epidemics and one as a long-term epidemic (nine months). The 2003-2012 epidemic period registered nine (69.2%) peaks, eight of short duration (mean 49 days) and one of long duration. During the post-epidemic period (2013-2020), there were four (30.8%) peaks, all of short duration (mean 45 days). There has been no record of outbreaks since October 2015 (Figure 3).

Age group and sex analysis

During the study period, 56.5% of cases were diagnosed in children under 15 years old (of whom 22.8% were under 5 years old). At the beginning of the period, 50.2% of cases […] (Figure 1D). Differences by sex were also assessed: 55% of cases occurred in men, with a male/female ratio of 1.2:1. In 2003, 56.7% of cases were among men and, in 2020, men represented 50.4% of the cases. This difference was statistically significant (OR: 0.8, 95% CI: 0.7-0.9, p < 0.01), indicating a lower relative burden of cases among males in 2020 compared with 2003.

DISCUSSION

Our results showed that this municipality has a long history of high malaria incidence with several outbreaks 20-22. However, few studies have been conducted in this area to understand the epidemiology of the disease. The annual mean API during the period was 206.7 cases/1,000 inhabitants and, despite the significant decrease in recent years, the municipality is still considered to be at high epidemiological risk. This value is almost 20 times higher than the annual mean API in the Amazon region in the same period (mean API: 11 cases/1,000 inhabitants) 3.
In 1994, the National Health Foundation reported a substantial number of deaths in the indigenous population, initially attributed to hepatitis but later demonstrated to be due to malaria, the main cause of morbidity and mortality in that outbreak 21,23. Sampaio et al. 21 showed that the cases were related to the distance between villages and extractive areas and that the disease was dispersed rather than radiating from a single focus. In 1997, the non-governmental organization […]. Control programs need to maintain their activities, as the API and the number of cases are still very high. The COVID-19 pandemic is an additional obstacle to achieving the goals, at least within the foreseen timeframe 26. The percentage of imported cases reported decreased from 30.7% in 2003 to 3.5% in 2020 (p<0.05). The greatest malaria burden occurred in rural areas, with indigenous areas being the most affected, accounting for 61% of the total cases, but the epidemiology has changed: whereas in 2003 only 24% of cases were reported in indigenous areas, in 2020 almost 90% of cases were in these areas. Similar findings were pointed out by others in similar contexts 10,27,28. In addition to the combined work between the professional teams of indigenous health and the municipality, the assistance of indigenous leaders and communities is essential. The use of RDT for effective diagnosis and treatment is paramount in areas that lack sufficient health service structures. This work should be strengthened, as should the improvement of the DSEI structure in the most distant and difficult-to-access locations. More studies are needed to understand the differences in malaria burden in this enormous indigenous area, which has a complex epidemiology. Malaria seasonality varies from one year to another. The greatest transmission occurs at the end of the rainy season, and the lowest number of cases occurs during the dry season. A study carried out in the Loreto region, in the Peruvian Amazon, an area that borders the Javari River Valley, showed the same evidence of continuous transmission, with an increase in the number of cases from February to July 29,30. Control actions must be implemented before the beginning of the highest transmission season. Activities should be carried out simultaneously on both sides of the frontier for a greater probability of success 19,27. Comparing 2020 to 2003, there were differences in the risk of having malaria between men and women (p<0.01), but these differences were not observed when the entire period was analyzed. Other studies have consistently shown the changing pattern between men and women over time 10,31. In the Amazon region, malaria has long been considered an occupational disease associated with mining, extractive activities and other predominantly male occupations 32. The quarantine due to COVID-19 led to a reduction in occupational mobility. This may explain, at least partially, the decrease in the percentage of cases among men in 2020, but this remains to be clarified. Changes in the most affected age group were also observed. In 2003, 58% of the individuals affected were less than 15 years old; in 2020, 60.8% of those affected were under 15 years old (p<0.01). Similar changes were observed in other contexts with long histories of malaria and high epidemiological risk in the Amazon region 3,10,21. Taking these findings into account, it is imperative to carry out field studies to establish the presence of asymptomatic infections.
Several studies have shown that after many malaria episodes in high epidemiological risk areas, adults may develop clinical immunity and not show (or show few) symptoms, while children, who have not yet developed this type of immunity, are more likely to suffer from symptomatic clinical malaria 33,34 . Although P. vivax has been the most prevalent Plasmodium species in the municipality, in the last years, that was not always the case. In the study of Sampaio et al. 21 , 68.2% of the detected cases were due to P. falciparum and during the first years of our study, P. falciparum contributed with 50% of the cases in 2006. After the introduction of artemisinin-based combination treatment schedules (ACTs) in 2007, the percentage of cases due to P. falciparum had decreased to 10%. However, in 2019, there was an increase of 31% in cases compared to 2018 3 . One of the goals of the Brazilian National Malaria Control Program is to eliminate cases caused by P. falciparum, and to achieve this goal actions against this parasite, that are more susceptible to vector control activities are necessary. The reduction of vector control activities is one of the indicators that shows that the work of health services has been well executed 35 . Worryingly, the COVID-19 pandemic can also be an obstacle to achieving this goal 26,36 . Vale do Javari Indigenous Land is frequently targeted by illegal gold miners, fishermen, hunters, drug traffickers and religious organizations from Brazil, Colombia and Peru. Invasions into the territory have increased over recent years 37 . Despite the decrease of imported cases from Peru in the last years, the international border of 1,180 km long may be one of the obstacles to malaria control. Both countries are separated by the Javari River that can be easily crossed, and the inhabitants are permanently moving to both sides of the frontier in search of health, goods and services. In the region of Loreto, the bordering area in the Peruvian Amazon, malaria cases have almost tripled in recent years 38 and riverside communities living at the border of Peru have a fragile structure of health services. The study by Braz and Barcellos 22 , carried out in 2016 and 2017, showed that Atalaia do Norte forms a conglomerate with six other municipalities bordering Peru, maintaining a high incidence of the disease. It is worth mentioning that malaria in border areas is difficult to control due to the displacement of people, border conflicts, cultural differences and complicated national public health regulations 39 . Control strategies involve necessarily a coordinated work between both countries. Sharing epidemiological information systematically can be the first step to start a coordinated work 40 . As we used secondary data of the surveillance program, there are some limitations. Malaria cases are likely to be underreported due to the geographical extension as well as the lack of access to health services. However, in Brazil, malaria is a disease that is diagnosed and treated within the Brazilian National Health System (SUS in Portuguese). Antimalarial drugs are provided after a laboratory diagnostic test. There are no other health services available in this area and apparently, there is no illegal trade for selling antimalarials to the populations. Thus, we believe that the underdiagnosis is not that high. 
Finally, we believe that strengthening local epidemiological surveillance to understand the situation and taking appropriate and early actions is a fundamental pillar that will lead to the elimination of malaria in areas of complex epidemiology such as the Javari River Valley.

CONCLUSION

The burden of malaria in indigenous areas and its determinants, seasonality, geographical access and the extensive international border between Brazil and Peru are obstacles to the elimination of malaria that must be overcome to reach the GTS targets for 2030.

AUTHORS' CONTRIBUTIONS

MPC and MCSM conceived the study, wrote the protocol and implemented the study; MPC, MCSM, MBM, JG, NBA and RS analyzed and interpreted data; RS made the maps. MPC and MCSM wrote the first draft. All authors read and approved the final manuscript.

CONFLICT OF INTERESTS

The authors declare that they have no conflicting interests.

FUNDING

This study received financial support from the Laboratory of Parasitic Diseases of the Institute Oswaldo Cruz/Fiocruz. MPC received a fellowship from CAPES (Capes 01). MCSM held a Young Scientist fellowship from the Rio de Janeiro State Program (Faperj).
2022-04-07T06:17:03.998Z
2022-04-04T00:00:00.000
{ "year": 2022, "sha1": "bee92f5da6a86d59e30e3bf81042e6414f168ee2", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/rimtsp/a/5bjCNgwxN7QRkTFwkpY7LQH/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5275dd3caaebefadc2059fc7fe0d92ceb90c6948", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253370356
pes2o/s2orc
v3-fos-license
Quantum Protocol for Decision Making and Verifying Truthfulness among $N$-quantum Parties: Solution and Extension of the Quantum Coin Flipping Game We devised a protocol that allows two parties, who may malfunction or intentionally convey incorrect information in communication through a quantum channel, to verify each other's measurements and agree on each other's results. This has particular relevance in a modified version of the quantum coin flipping game where the possibility of the players cheating is now removed. Furthermore, the analysis is extended to $N$-parties communicating with each other, where we propose multiple solutions for the verification of each player's measurement. The results in the $N$-party scenario could have particular relevance for the implementation of future quantum networks, where verification of quantum information is a necessity. I. INTRODUCTION A. General Background Quantum communication using quantum channels is becoming practical, but there are many issues that need to be addressed in order to actually operate them in business.In the usual setting of multiparty secure computation, many protocols assume secure communication channels between all two parties [1,2].For example, the conventional quantum key distribution [3] realizes secure keys for secret channels under the assumption that the sender and the receiver are trusted.However, it is nontrivial for a player to achieve a reliable communication channel without trusting other parties. Besides, regardless of whether one can trust the other party or not, even if secure communication channels are realized, all kinds of problems can occur in real human communication, including political and business problems.While it is important to pursue quantum channel technology that allows for secure and accurate quantum communication, it is equally important to pursue the design of software and systems that will operate correctly and desirably on the quantum channel. The study of desirable systems and software for people on a network/market is called mechanism design (or market design) and is widely studied in economics in terms of auctions, optimal matching, and allocation of public goods [4][5][6][7][8].(The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2007 was awarded to Hurwicz, Maskin and Myerson "for having laid the foundations of mechanism design theory" [9].) In this study, we consider the quantum coin flipping game [3] from a perspective of mechanism design.This paper assumes that quantum channels are in practical use.We discuss the problems that players who play the quantum coin flipping game may face, and redesign the game.When looking at the quantum coin flipping game from this viewpoint, it is not so much a matter of cryptography, but rather a matter of consensus building, which refers to the process essential for building consensus among the parties involved in the same assignment, project, or business meeting with a customer.Consensus building is essential for sharing objectives and facilitating business. 
Extending a two-person quantum coin flipping game to an N-person game is indeed the general Byzantine problem, which asks whether a group of communicating objects as a whole can form a correct consensus when communication or individual objects may convey incorrect information due to malfunction or intentionality. This issue has been studied in a wide range of fields [10,11], including Blockchain [12-15] and its quantum extensions [16,17]. The N-party coin flipping game that we present is a simple case of a generic Byzantine Generals Problem or the N-party decision making problem. Our quantum coin flipping problem for N people is exactly the quantum Byzantine Generals Problem when each quantum general has a single qubit.

To design a quantum game that can work properly as a game, let us consider the conditions that a game must meet. In designing a game, the minimum requirements to be met would include:

(A) There is no room for cheating.
(B) Each player is correctly aware of the other's results.
(C) Each player can agree on the outcome of the game.

Condition (A) is necessary to achieve consistency in the rules of a game. If there is a loophole in the rules, an attack that exploits a vulnerability in the rules is possible. There is no reason to believe that the remote players who are about to play against each other will not cheat. The use of device-independent quantum key distribution, which is effective even when the sender and receiver are not trusted, could solve this problem [18-21]. However, that is not the only problem. It is even more difficult to achieve a system where everyone can agree on the outcome, even if the game is played correctly.

Condition (B) is necessary for each player to confirm the progress of the game. This allows one to verify the validity of one's past strategies and to plan future strategies. Clearly, having correct information about the other players is important when building consensus.

Condition (C) is necessary for the game to converge. At the end of the game, the outcome is determined and all players accept the result. Players would lose the incentive to participate in a game whose outcome is controversial. Besides, even if the players temporarily agree on the outcome, they may later reverse their agreement. In such cases, it is necessary in practice to make it provable to a third party that an agreement has been reached, in order to avoid a situation of incomplete contracts [22-26].

The link between game theory and quantum mechanics was established over 20 years ago, and the development of the field since its initial discovery has been rapid [27-31]. Alongside this, there has been research into quantum information with a specific focus on quantum networks and their potential implementation in quantum computing [32-37]. In recent years, these two fields have started to work in unison, where the advantages gained from quantum game theory can be utilised for the benefit of quantum networks. Quantum games have been developed for repeated games [38,39], extensive form games [40], contract theory [41] and markets in quantum networks [42]. This has a natural crossover with quantum mechanics due to uncertainty being prevalent in both fields. From this, it is clear that the potential advantage that can be gained from using quantum correlations in network systems could have significant practical implications for quantum technologies.
There are a wide range of quantum games that have been investigated where quantum correlations have been found to yield quantum advantage compared to the respective classical counterpart.A common example for this is the CHSH game, which allows a practical implementation for the benefit of non-locality.Interestingly, it was found that there is an inherent link between Bell's inequalities and Bayesian game theory [43]. The game which this paper focuses on is the quantum coin flipping game [3], where the essence of the game is based on a two-player, two-outcome game. B. Statement of Results In this work we formulated the quantum coin flipping game with two parties and extended it to a game with N parties.We have ensured that any possibility of cheating or attack is eliminated, under the assumption that each player only flips a coin and does not operate arbitrarily on their own quantum state. It is important to recall that in the conventional quantum coin flipping game, even with these natural assumptions imposed, there was still plenty of room for remote players to cheat (Problems (♥, ♣, ♠) defined later).In the conventional game, regardless of the outcome of the coin flip, the player who announces the result last always can cheat and therefore always can win, and this can be regarded as an ultimatum game.To avoid the quantum coin flipping game becoming an ultimatum game, we redesigned the game using an entangled state (1) between the players (Fig. 3).As we described in the main text, we solved those three problems (♥, ♣, ♠). A. Preliminaries The quantum coin flipping game originates from the classical coin flipping game where there are two parties, which will be denoted by Alice (A) and Bob (B).Consider the scenario where Alice and Bob are a recently divorced couple who decide to play a game to determine who gets the car they previously shared.Since they do not like each other, they are living far away from each other, so decide to play this game over the telephone.The game is set up as follows: both players have a fair twosided coin which can either land on heads or tails.They then each flip their respective coins.If one player lands on heads, and the other lands on tails, the player who landed on heads wins.If they both land on heads, they flip again, and if they both land on tails, they flip again, until there is a winner.However, it is clear that each player cannot verify the other players result, therefore when communicating over the telephone, if Alice claims to have landed on tails, then Bob can win the game by claiming to have landed on heads, even if this were not the case.This game can be implemented as shown in Fig. 1, whose sketch is shown in Fig. 2. 
As can be easily seen from the figures, when such a game is played without sharing information with each other, the last person to claim the outcome can always become the winner.Such games are called ultimatum games [44].The quantum version of this game is similar, but in this scenario Alice prepares a quantum state in addition to their coin.Whether Alice lands on heads or tails determines what basis Alice measures their quantum state in, and from this, Alice sends Bob this prepared quantum state.Bob then performs measurements on this quantum state, and attempts to deduce what basis Alice measured in.Bob then sends this state back to Alice in addition to communicating to Alice what basis Bob believes the state was prepared in, and from this, Alice can confirm whether Bob deduced the measurement basis correctly, and thus whether the coin landed heads or tails.This can work in the opposite direction, where Bob prepares the quantum state and measures in a particular basis and sends this quantum state to Alice.Due to the quantum correlations, this form of the coin flipping game does allow Alice and Bob to increase their chances of winning.However, this game still allows the possibility of cheating, as ultimately Alice and Bob still have to perform classical communication along the channel. B. What were the problems? In this section we present our solution for two-party quantum coin flipping game.To describe our contribution accurately, let us elaborate on what the challenges were and how they were solved.The remaining challenges of the previous research on this issue has been to establish a way to fairly and rigorously recognize each other's independent results between two remote parties.From the perspective of a mechanism design, this game setting has the following problems. (♥) The two people can chose arbitrary coins independently. (♣) The result of one cannot be recognized by the other. (♠) There is a time lag between when one player flips a coin and when the other player learns the result. Under these circumstances, it is obvious that they could cheat in any way they want.Despite the simplicity of the problem set-up, the second and the third reasons make this problem difficult and fairness between players is lost.The first point (♥) concerns fairness before the game is played.The presence or absence of prior information about the tools used in the game can lead to information asymmetry, which is an important factor in the progression of the game.When information about the game is asymmetric, the benefits of changing the rules of the game vary from player to player.A game in which rule changes are impossible is not a good game because it lacks flexibility and development.For each player to be allowed to independently select any coin, knowledge of which coin is selected must be disclosed to all players.However, this is not possible in a setting where each player is remote and there is no neutral third party to verify this. 
In the quantum coin flipping game, the second problem (♣) is even more serious than the first. This problem occurs immediately after the game begins and before communication with others begins. In the conventional setting of quantum coin flipping, it is possible to lie to the other player, since only the player can observe their own result. This makes the game no longer work, because each player can arbitrarily change the outcome, eliminating the need for a coin toss in the first place. In other words, it is no longer even a "coin-flipping game", as there is no need to even prepare coins in the first place.

The third problem (♠) exacerbates the second. In a real physical environment, the speed at which information is transmitted to the other party is finite, so it takes a finite amount of time to convey the result of a coin to a remote party. The presence of this time difference would be enough to cause hesitation and frustration for the players. Even if players decide to play fair at the start of the game, they may decide to take advantage of the time difference to change their results after their own results are observed. Moreover, even if all players were honest about the results, the existence of the time difference is sufficient to lead one to believe that changes were made to the results. Mail-in ballots in presidential elections, for example, contribute to creating this kind of distrust among some people.

III. SOLUTION FOR TWO-PARTY QUANTUM COIN FLIPPING

A. How we solved the problems

As we have discussed, the conventional quantum coin flipping game is fundamentally flawed in its setup, which not only prevents the game from being executed properly, but also makes it unworkable from the start. Therefore, we first need to redesign the game so that it can be played correctly. To this end, let us first look back at what the fundamental concept of the game was: 1. Each of the two remote players flips a coin. 2. A winner is determined based on the results of the coins observed by each player. The coins are implemented with a shared entangled state |ψ⟩_Coin (1), whose amplitudes c_ij determine the statistics: the probability that Player A observes |i⟩_A is p_A(i) = ∑_j |c_ij|² (2) and, similarly, the probability that Player B observes |i⟩_B is p_B(i) = ∑_j |c_ji|² (3). The game consists of 4 stages and proceeds as follows. 1. Preparation of an entangled state: prepare an initial entangled state |ψ⟩_Coin (1). 2. Coin flipping stage: each player independently makes a measurement on their own coin qubit. 3. Confirmation stage: each player independently confirms the opponent's state by measuring their own second (ancilla) qubit. 4. Decision making stage: each player compares the results of their own coins with those of their opponents to recognize and agree on the winners and losers.

Here we explain how this protocol works. Let us first make sure that this game does not depend on the order in which the players play. Of course, they can play simultaneously. Suppose Player A flips a coin and gets |i⟩_A. Then the state of the coin (1) changes into |i⟩_A ⊗ (∑_j c_ij |j⟩_B |j⟩_A) ⊗ |i⟩_B / √(∑_j |c_ij|²) (4). In the Confirmation stage, each player can confirm the state of their opponent by measuring the corresponding ancilla state. For example, Player A finds |j⟩_A for the result of Player B with probability 1, and vice versa. For simplicity, let us play the classical setting. Then the state (1), which we use for the game, is generated by the circuit shown in Fig. 3.

B. Validity of the Protocol

Here we show the validity of our protocol and confirm that the problems (♥), (♣) and (♠) raised in the previous section are completely solved in principle.
Regarding problem (♥), both players use the same state (1) to play the game. The probability distribution is completely determined by the matrix of amplitudes c_ij. A coin such that the two players A and B have exactly the same probability of getting heads and tails can be defined by the condition |c_↑↓| = |c_↓↑|. Using (2) and (3), the most general form of a fair coin (7) can be written in terms of a single real amplitude a and phases θ_ij. The classical setting can be recovered by putting θ_ij = 0 for all i, j and a = 1/2. The non-trivial phases play important roles in a quantum extensive-form game [40,41]. Both parties should make as many coin states as possible before starting the game to check the probability distribution before playing.

Problem (♣) has already been solved, for the following reason: each player can confirm their opponent's result by measuring their own second qubit at the end of the game, as shown in Fig. 3. In this game, only measuring one's own qubits is allowed, but one possible attack from one player on the other is as follows: it is natural to ask what would happen if the second qubit were measured first. Suppose Player B measures their second qubit before Player A tosses the coin, thus finalizing Player A's result. However, it turns out that this attack is meaningless, as we will see below. The probability that Player B observes |i⟩_B on their second (confirmation) qubit is ∑_j |c_ij|², which coincides with Player A's coin distribution (2), so measuring it early gives Player B no advantage.

Problem (♠) no longer exists in our protocol. Due to the entanglement in the game state (1), as soon as the state |i⟩ of one player's coin is determined, the other player's qubit |i⟩ used for confirmation is instantly determined. As is well known, this does not mean that information is being transmitted beyond the speed of light [45].

C. Solutions to Other Possible Attacks

In this game, the quantum state given initially is never broken and is played to the end. Each player can only observe their own state, and the results of their observations do not affect others. In situations where only flipping (measuring) a coin is allowed for each player, there is no room for disputing the outcome of the game. Moreover, as both players are aware of the outcome on both sides, they have to accept the result. Thus, the game is perfectly fair and works well.

In the original setting of the quantum coin flipping game, only measuring one's own qubits is allowed; however, as a general extension of the game, we can also consider the case where players can manipulate their own qubits. In this case it is possible for a player to apply a unitary U_A to their own qubits and claim that they have obtained a value different from their opponent's record |j⟩_B. This issue is easily prevented by adding a third party (Witness) to the network. For this, we modify the initial state of the coin (1) by appending Witness qubits that copy the players' coin qubits (12). Fig. 4 shows a quantum circuit to play the quantum coin flipping game with a Witness for the case of constant c_ij = 1/2. The first and second qubits of Witness correspond to the states of Player A and B, respectively. After all players flip their respective coins, the outcomes are recorded in the Witness qubits, so Witness can confirm that Player A's result is i and Player B's result is j, respectively. Even if Player A performs the same operation U_A on their state as before, it does not change the record of the game held by Witness. Ultimately, Player A might try to change the state of Witness as well, but this is a completely different game and is not in the scope of the quantum coin flipping game.
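As an illustration of the two-party protocol, the following NumPy sketch builds the shared coin state (1) with constant amplitudes c_ij = 1/2, carries out the coin flipping and confirmation measurements, and checks that each player's confirmation qubit reproduces the opponent's coin deterministically. It is a sketch only: the qubit ordering, the helper names coin_state and measure, and the 0 = heads convention are our own illustrative choices, not taken from the paper.

import numpy as np

rng = np.random.default_rng()

def coin_state(c):
    """Build |psi>_Coin = sum_ij c[i, j] |i>_A |j>_B |j>_A |i>_B  (4 qubits)."""
    psi = np.zeros(16, dtype=complex)
    for i in range(2):
        for j in range(2):
            # qubit order (most significant first): A coin, B coin, A confirmation, B confirmation
            psi[(i << 3) | (j << 2) | (j << 1) | i] = c[i, j]
    return psi

def measure(psi, qubit, n_qubits=4):
    """Projective measurement of one qubit; returns the outcome and the collapsed state."""
    bit = (np.arange(len(psi)) >> (n_qubits - 1 - qubit)) & 1
    p1 = float(np.sum(np.abs(psi[bit == 1]) ** 2))
    outcome = int(rng.random() < p1)
    post = np.where(bit == outcome, psi, 0.0)
    return outcome, post / np.linalg.norm(post)

c = np.full((2, 2), 0.5, dtype=complex)   # fair coin, constant amplitudes c_ij = 1/2
psi = coin_state(c)

a_coin, psi = measure(psi, 0)   # stage 2: Player A flips their coin
b_coin, psi = measure(psi, 1)   # stage 2: Player B flips their coin
a_conf, psi = measure(psi, 2)   # stage 3: A reads their ancilla -> B's result
b_conf, psi = measure(psi, 3)   # stage 3: B reads their ancilla -> A's result

assert a_conf == b_coin and b_conf == a_coin   # confirmation is deterministic
# Stage 4 (decision): with the convention 0 = heads, heads wins; equal results mean re-flip.
print("A:", a_coin, "B:", b_coin, "confirmed:", (a_conf, b_conf))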
Thus, as long as the game is played via a proper quantum channel, there is no room for cheating on either side (Condition (A)). Each player is correctly aware of the other's results (Condition (B)) and can agree on the winner of the game (Condition (C)). Therefore, the problems with the conventional quantum coin flipping game have been completely solved by redesigning the game and utilising a shared entangled state.

IV. N-PARTY QUANTUM COIN FLIPPING GAME

A. Motivation & General Remark

Here we extend our previous design of the quantum coin flipping game for two players. Before we present some explicit architectures, let us explain our motivation for considering N-person games. The problem in designing a two-person quantum coin flipping game was how to create a system in which two remote players could correctly share (B) and agree on their true results (C) without cheating (A). As we described in the Introduction, this is a non-trivial task.

For the N-player quantum coin flipping game, we present three solutions: central review, peer-to-peer review and hybrid peer-to-peer review. The first is the simplest method, but also the most accurate and universal. As we did in the previous section, we invite an authorized third party (Witness) into the network. To eliminate unnecessary concerns, we assume that the state of Witness is not accessible from the outside. All participants must agree before the game begins that the result of the authorized Witness will be the final decision of the game.

B. Design & Solution 1: Central Review

As the initial state of the game, all the players and Witness share an entangled state of the form ∑ c_{i_1⋯i_N} |i_1⟩_1 ⋯ |i_N⟩_N ⊗ |i_1⋯i_N⟩_W, where, as before, the players' coin qubits (written in blue in the original equations) are mirrored by the Witness register. Fig. 6 shows a quantum circuit to play the quantum coin flipping game with a Witness for the case of constant amplitudes; the circuit proceeds through the stages "prepare coins", "copy to Witness", "flip coins" and "confirm results". Below is an example of a game in progress for the case N = 3. When Player 1 first tosses the coin and observes |i_1⟩_1, the corresponding qubit |i_1⟩ of Witness is decided uniquely. The same is true for Players 2 and 3, who get |i_2⟩_2 and |i_3⟩_3, respectively. As in the N = 2 case, Witness can check the results for all participants by measuring their own qubits one by one. This can be further extended to the N-party scenario as follows. After n (≤ N) players have flipped their coins and observed {i_1, i_2, ⋯, i_n}, the state of the game changes (up to normalization) into ∑ c_{i_1⋯i_n j_{n+1}⋯j_N} |i_1⟩_1 ⋯ |i_n⟩_n |j_{n+1}⟩_{n+1} ⋯ |j_N⟩_N ⊗ |i_1⋯i_n j_{n+1}⋯j_N⟩_W (17), where c_{i_1⋯i_n j_{n+1}⋯j_N} gives the coefficient for the remaining N − n players after n players have flipped their coins. This is a generalisation of the previous examples; however, it is only correct for up to N − 1 players, so to complete the N-party game the result for n = N must be considered. This is given by the fully collapsed state |i_1⟩_1 ⋯ |i_N⟩_N ⊗ |i_1⋯i_N⟩_W (18). By combining equations (17) and (18), this gives the full solution for the central review protocol, where N players can confirm each other's measurements through the Witness. This relies on the assumption that, in the Decision making stage, by prior agreement, the players accept the Witness observation as the final outcome of the game. The advantage of this method is that the results are determined immediately after the game is over, since the Witness state is uniquely determined as soon as everyone flips a coin. Moreover, this is the simplest system, requiring only 2N qubits and a gate depth of 2 to prepare the state before the game starts. However, if there is any doubt about the reliability of Witness, or if there is an error in Witness's quantum measurement, an untrue result could become the final outcome of the game.
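For the central review design, the Witness register is fully determined by the players' coin outcomes, which is what allows the Witness verdict to be final. The short numerical check below, for N = 3 players with uniform amplitudes, uses the same kind of statevector simulation as the two-party sketch above; the 2N-qubit ordering (player coins first, Witness copies last) and the helper names are again our own illustrative assumptions.

import numpy as np

rng = np.random.default_rng()

def central_review_state(n_players):
    """|psi> = (1/sqrt(2^N)) sum_{i1..iN} |i1..iN>_players |i1..iN>_Witness  (2N qubits)."""
    psi = np.zeros(2 ** (2 * n_players))
    for bits in range(2 ** n_players):
        psi[(bits << n_players) | bits] = 1.0   # the Witness register mirrors the coins
    return psi / np.linalg.norm(psi)

def measure(psi, qubit, n_qubits):
    """Projective measurement of one qubit (qubit 0 = most significant); same helper as above."""
    bit = (np.arange(len(psi)) >> (n_qubits - 1 - qubit)) & 1
    p1 = float(np.sum(np.abs(psi[bit == 1]) ** 2))
    outcome = int(rng.random() < p1)
    post = np.where(bit == outcome, psi, 0.0)
    return outcome, post / np.linalg.norm(post)

N = 3
psi = central_review_state(N)
coins = []
for player in range(N):                 # each player flips their own coin
    c, psi = measure(psi, player, 2 * N)
    coins.append(c)
witness = []
for k in range(N):                      # Witness reads its own register, one qubit at a time
    w, psi = measure(psi, N + k, 2 * N)
    witness.append(w)

assert witness == coins                 # the Witness record matches every coin deterministically
print("coins:", coins, "witness record:", witness)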
Design & Solution 2: Peer-to-Peer Review Here we provide a solution to N -player quantum coin flipping game without an authorized third party (Fig. 7).In this system, all participants review the results of other participants.Again we use blue text for coin qubits and red text for confirmation qubits. The simplest way to extend the state (1) used for the two-player game is to use For i = 1, ⋯, N − 1, Player i confirms a result of Player (i + 1) and Player N confirms a result of Player 1.However, this leaves the verification of Player i's results entirely up to Player (i + 1).This state can be prepared as illustrated in Fig. 7. In order to achieve a peer-to-peer solution, we prepare the following state where at n = 1 and n = N the states should be understood as |i 2 ⋯i N ⟩ and |i 1 ⋯i N −1 ⟩, respectively.This state can be prepared by operating SWAP operators between two different qubits for all combinations of Players. Each player can know the results of everyone else by observing their own confirmation qubit N − 1 times.For example, Player 1 obtains the set {i 2 , i 3 , ⋯, i N } of results of all others.The result of Player i is reviewed by the other players.Let i n be a result of Player n and let i nj the data of Player n confirmed by Player j observing their own qubit.A dataset {i n1 , ⋯, i nN } for N −1 players regarding the results of Player n is obtained.Let r n be the ratio that the result of Player n agrees with the review results of other players where # denotes the number of elements in the set.Let R n be the ratio that the result of Player n does not agree with the review results of other players All participants decide before the game starts a constant criteria r, R, which do not depend on a particular n, for r n and R n to approve each player's result.For example, i n will be accepted if r n ≥ r, otherwise it will be rejected.One advantage of employing this method is that the outcome of the game is not dependent on a particular third party.If it is desirable for players to decide the outcome in a democratic manner, this method can be used.One undesirable aspect of this method is that it takes a long time to get results, and multiple players can collude to get an incorrect result.Another problem is that in such cases, there is no place to complain about fraud.Moreover, as shown in Fig. 7, this system requires N 2 qubits and (2N − 1)-depth of gates to prepare a state to play the game.Given that in the case of the Central Review system, the required gate depth is constant (= 2) regardless of the number of participants, and the number of required qubits is 2N , the peer-to-peer review system is much more expensive to implement. D. Design & Solution 3: Hybrid Peer-to-Peer Review Here we consider a hybrid peer-to-peer review system as a complementary mechanism to the central review sys- tem and the peer-to-peer review system.This is beneficial for networks that require a central server with peerto-peer capabilities.If necessary, it is possible to operate only a central server or only a peer-to-peer network.In this system the players use states (20) with a state of Witness: This system is easily implementable by combing quantum circuits shown in Fig. 6 and Fig. 7.The N = 2 case is illustrated in Fig. 4. There are two main ways to build consensus: 1. Players will primarily follow Witness's results, but will appeal using peer-to-peer review results. 2. Players will primarily follow peer-to-peer review results, but will appeal using Witness's results. 
Witness's states are determined as soon as all players have flipped their coins, but the result of the peer-to-peer review is not available until all players have completed all measurements.Players can choose the first method if efficiency is a priority, or the second method if democracy is a priority. V. DISCUSSION & FUTURE DIRECTIONS The work presented in this paper opens up a wide range of avenues to pursue in the future.To the best of the authors' knowledge, this is the first study of quantum coin flipping games motivated by mechanism design and incomplete contracts.While there has been much technical and theoretical research on quantum cryptography, there has been little discussion on what kind of systems/software are user-friendly.However, in order to promote and develop quantum computers and quantum communications in general society, research from this perspective is essential.Subsequently, quantum game theory will become increasingly important. So far we investigated the quantum coin flipping game with pure states, but it will be interesting to extend the game to mixed states.For example, in this paper we focused on using entanglement to prevent cheating in quantum games, however it would be interesting to see if quantum discord could be utilised [46,47].It has already been shown that quantum discord could be measured in a bipartite system [48], therefore it opens up the possibility of using quantum discord for quantum advantage.Fur-thermore, it would be particularly interesting if we were able to develop a protocol which could verify each players measurements, without each player having to specifically reveal their measurement.This could be done using a quantum zero knowledge proof [49].This would be in the form of peer-to peer-review, however the players would be allowed to keep their measurements secret.This could be a realistic scenario if the players measurements reveal sensitive information.From this perspective, creating a generic (hybrid) peer-to-peer quantum system is also an interesting open question. The game could also be developed into a repeated game where the players play multiple times [38][39][40].This type of game could be used to reveal the distribution of the shared state between the players.Such a scenario may occur if the players are unaware of what shared state they are performing their measurements on and they would like to deduce the distribution of the shared state. Figure 1 : Figure 1: Conventional Setting of the traditional Coin Flipping Game. Figure 2 : Figure 2: The classical coin flipping game can be represented by the figure above.Both players flip their respective coins, and based on their measurements, they then communicate their results to each other.From this the players decide on the winner of the game.Therefore, it is clear that both players must trust each other to report their results correctly, otherwise the game could be unfairly manipulated. 2. A winner is determined based on the results of the coins observed by each player.Now let us design the quantum coin flipping game as follows.In order to remove the possibility of cheating, consider now that Alice and Bob perform their measurements on a shared entangled state given by |ψ⟩ Coin = where ∑ ij |c ij | 2 = 1.The first two qubits |i⟩ A ⊗ |j⟩ B correspond to coins of Player A and Player B. 
For example, Player A can flip a coin by observing their own qubit |i⟩ A .The remaining two qubits |j⟩ A ⊗ |i⟩ B record the results of Player A and B, respectively.For example, Player A can confirm Player B's result by observing their own qubit |j⟩ A . 2 . Coin flipping stage: Each player independently makes a measurement on their own coin qubit |i⟩ A ⊗ |j⟩ B . 3. Confirmation stage: Each player independently confirms the opponent's state by measuring their own second (ancilla) qubit |j⟩ A ⊗ |i⟩ B . Figure 4 : Figure 4: Quantum Circuit for Quantum Coin Flipping Game with Witness.Here |0⟩ W is a qubit of Witness. Figure 5 : Figure 5: An extension of the quantum coin flipping game when manipulation to players' qubits are allowed can be represented by the figure above.Both players perform their respective quantum measurements on the quantum coin |C⟩, and based on their measurements (either |H⟩ or |T ⟩), the witness can verify the measurements.Subsequently, the players have no way of falsely communicating their results to each other. Figure 6 : Figure 6: Central Review Quantum Circuit for Quantum Coin Flipping Game with N -person. If Player B's coin is |j⟩ B , the state of the game changes from (4) into |ψ⟩ Coin ↠ (3) let Player B flip a coin.It is easy to see that Player A's result does not affect the probability distribution(3)of Player B. we replaced |0⟩ ↔ |↑⟩ and |1⟩ ↔ |↓⟩.Player A's first qubit is a coin of Player A, who can use the second qubit to confirm Player B's result.
2022-11-07T06:43:52.927Z
2022-11-03T00:00:00.000
{ "year": 2022, "sha1": "cfef23dd131175a08d724e689251bc61ab5f2403", "oa_license": "CCBY", "oa_url": "https://publications.aston.ac.uk/id/eprint/45480/1/qtc2.12066.pdf", "oa_status": "GREEN", "pdf_src": "ArXiv", "pdf_hash": "cfef23dd131175a08d724e689251bc61ab5f2403", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Physics", "Mathematics" ] }
247677927
pes2o/s2orc
v3-fos-license
Cortical softening elicits zygotic contractility during mouse preimplantation development Actomyosin contractility is a major engine of preimplantation morphogenesis, which starts at the 8-cell stage during mouse embryonic development. Contractility becomes first visible with the appearance of periodic cortical waves of contraction (PeCoWaCo), which travel around blastomeres in an oscillatory fashion. How contractility of the mouse embryo becomes active remains unknown. We have taken advantage of PeCoWaCo to study the awakening of contractility during preimplantation development. We find that PeCoWaCo become detectable in most embryos only after the second cleavage and gradually increase their oscillation frequency with each successive cleavage. To test the influence of cell size reduction during cleavage divisions, we use cell fusion and fragmentation to manipulate cell size across a 20- to 60-μm range. We find that the stepwise reduction in cell size caused by cleavage divisions does not explain the presence of PeCoWaCo or their accelerating rhythm. Instead, we discover that blastomeres gradually decrease their surface tensions until the 8-cell stage and that artificially softening cells enhances PeCoWaCo prematurely. We further identify the programmed down-regulation of the formin Fmnl3 as a required event to soften the cortex and expose PeCoWaCo. Therefore, during cleavage stages, cortical softening, mediated by Fmnl3 down-regulation, awakens zygotic contractility before preimplantation morphogenesis. Introduction During embryonic development, the shape of animal cells and tissues largely relies on the contractility of the actomyosin cortex [1][2][3]. The actomyosin cortex is a submicron thin layer of cross-linked actin filaments, which are put under tension by nonmuscle myosin II motors [4]. Tethered to the plasma membrane, the actomyosin cortex is a prime determinant of the stresses at the surface of animal cells [4,5]. Contractile stresses of the actomyosin cortex mediate crucial cellular processes such as the ingression of the cleavage furrow during cytokinesis [6][7][8], the advance of cells' back during migration [9,10], or the retraction of blebs [11,12] the tissue scale, spatiotemporal changes in actomyosin contractility drive apical constriction [13,14] or the remodeling of cell-cell contacts [15,16]. Although tissue remodeling takes place on timescales from tens of minutes to hours or days, the action of the actomyosin cortex is manifest on shorter timescales of tens of seconds [1,3,17]. In fact, actomyosin is often found to act via pulses of contraction during morphogenetic processes among different animal species from nematodes to human cells [13,14,[18][19][20][21][22][23][24][25]. A pulse of actomyosin begins with the polymerization of actin filaments and the sliding of myosin minifilaments until maximal contraction of the local network within about 30 seconds [13,26,27]. Then, the actin cytoskeleton disassembles, and myosin is inactivated, which relaxes the local network for another 30 seconds [28][29][30]. These cycles of contractions and relaxations are governed by the turnover of the Rho GTPase and its effectors, which are well-characterized regulators of actomyosin contractility [19,29,31]. Indeed, the Rho pathway controls both the activity of myosin motors via their phosphorylation and the turnover of actin filaments via formins [5,19,26,29]. In instances where a sufficient number of pulses occur, pulses of contraction display a clear periodicity. 
The oscillation period of pulsed contractions ranges from 60 seconds to 200 seconds [14,18,19,22]. The period appears fairly defined for cells of a given tissue but can vary between tissues of the same species. What determines the oscillation period of contraction is poorly understood, although the Rho pathway may be expected to influence it [19,29,30]. Finally, periodic contractions can propagate into traveling waves. Such periodic cortical waves of contraction (PeCoWaCo) were observed in cell culture, starfish, and frog oocytes as well as in mouse preimplantation embryos [19,22,32,33]. In starfish and frog oocytes, mesmerizing Turing patterns of Rho activation with a period of 80 seconds and a wavelength of 20 μm appear in a cell cycle-dependent manner [19,34]. Interestingly, experimental deformation of starfish oocytes revealed that Rho activation wave front may be coupled to the local curvature of the cell surface [35], which was proposed to serve as a mechanism for cells to sense their shape [34]. In mouse embryos, PeCoWaCo with a period of 80 seconds were observed at the onset of blastocyst morphogenesis [22,36]. What controls the propagation velocity, amplitude, and period of these waves is unclear, and the potential role of such evolutionarily conserved phenomenon remains a mystery. During mouse preimplantation development, PeCoWaCo become visible before compaction [22], the first morphogenetic movements leading to the formation of the blastocyst [3,37,38]. During the second morphogenetic movement, prominent PeCoWaCo are displayed in prospective inner cells before their internalization [36]. In contrast, cells remaining at the surface of the embryo display PeCoWaCo of lower amplitude due to the presence of a domain of apical material that inhibits the activity of myosin [36]. Then, during the formation of the blastocoel, high temporal resolution time-lapse hint at the presence of PeCoWaCo as microlumens coarsen into a single lumen [39]. Therefore, PeCoWaCo appear throughout the entire process of blastocyst formation [3]. However, little is known about what initiates and regulates PeCoWaCo. The analysis of maternal zygotic mutants suggests that PeCoWaCo in mouse blastomeres result primarily from the action of the nonmuscle myosin heavy chain IIA (encoded by Myh9) rather than IIB (encoded by Myh10) [40]. Dissociation of mouse blastomeres shows that PeCoWaCo are cell autonomous since they persist in single cells [22]. Interestingly, although removing cell-cell contacts free up a large surface for the contractile waves to propagate, the oscillation period seems robust to the manipulation [22]. Similarly, when cells form an apical domain taking up a large portion of the cell surface, the oscillation period does not seem to be different from cells in which the wave can propagate on the entire cell surface [36]. This raises the question of how robust PeCoWaCo are to geometrical parameters, especially in light of recent observations in starfish oocytes [34,35]. This question becomes particularly relevant when considering that, during preimplantation development, cleavage divisions halve cell volume with each round of cytokinesis [41,42]. In this study, we investigate how the contractility of the cleavage stages emerges before initiating blastocyst morphogenesis. We take advantage of the slow development of the mouse embryo to study thousands of pulsed contractions and of the robustness of the mouse embryo to size manipulation to explore the geometrical regulation of PeCoWaCo. 
We discover that the initiation, maintenance, or oscillatory properties of PeCoWaCo do not depend on cell size. Instead, we discover a gradual softening of blastomeres with each successive cleavage, which exposes PeCoWaCo. This softening results in part from the reorganization of the actin cortex due to the down-regulation of the formin Fmnl3 during the first cleavage stages. Together, this study reveals how preimplantation contractility is robust to the geometrical changes of the cleavage stages during which the zygotic contractility awakens. PeCoWaCo during cleavage stages PeCoWaCo have been observed at the 8-, 16-cell, and blastocyst stages. To know when PeCo-WaCo first appear, we imaged embryos during the cleavage stages and performed Particle Image Velocimetry (PIV) and Fourier analyses (Fig 1A-1C, S1 Movie). We note that PeCo-WaCo pause during mitosis (S2 Movie), similarly to pulsed contractions in fly neuroblasts [24], and we have therefore excluded from our analysis embryos during mitosis. This analysis reveals that PeCoWaCo are detectable in fewer than half of zygote and 2-cell stage embryos and become visible in most embryos from the 4-cell stage onward (Fig 1D, S1A-S1F Fig, S1 Table, S1 Data). Furthermore, PeCoWaCo only display large amplitude from the 4-cell stage onward (Fig 1B and 1C). Interestingly, the period of oscillations of the detected PeCoWaCo shows a gradual decrease from 150 seconds to 80 seconds between the zygote and 8-cell stages ( Fig 1E, S1 Table, S1 Data). The acceleration of PeCoWaCo rhythm could simply result from the stepwise changes in cell size after cleavage divisions. Indeed, we reasoned that if the contractile waves travel at constant velocity, the period will scale with cell size and shape. This is further supported by the fact that PeCoWaCo are detected at the same rate and with the same oscillation period during the early or late halves of the 2-, 4-, and 8-cell stages (S1G and S1H Fig, S2 Table, S1 Data). Therefore, we set to investigate the relationship between cell size and periodic contractions. Cell size is not critical for the initiation or maintenance of PeCoWaCo First, to test whether the initiation of PeCoWaCo in most 4-cell stage embryos depends on the transition from the 2-to 4-cell stage blastomere size, we prevented cytokinesis. Using transient exposure to Vx-680 to inhibit the activity of Aurora kinases triggering chromosome separation, we specifically blocked the 2-to 4-cell stage cytokinesis without compromising the next cleavage to the 8-cell stage (Fig 2A and 2B, S3 Movie). This causes embryos to reach the 4-cell stage with blastomeres the size of 2-cell stage blastomeres. At the 4-cell stage, we detect PeCo-WaCo in most embryos whether they have 4-or 2-cell stage size blastomeres (Fig 2C, S3 Table, S1 Data). Furthermore, the period of oscillation is identical to 4-cell stage embryos in both control and drug-treated conditions ( Fig 2D, S3 Table, S1 Data). Importantly, we do not measure any change in cell surface tension when treating embryos with Vx-680 indicating that the treatment does not seem to impact the overall mechanics of the actomyosin cortex (S2 Fig, S4 Table, S1 Data). This suggests that 4-cell stage blastomere size is not required to initiate PeCo-WaCo in the majority of embryos. Then, we tested whether PeCoWaCo could be triggered prematurely by artificially reducing 2-cell stage blastomeres to the size of a 4-cell stage blastomere. 
To reduce cell size, we treated dissociated 2-cell stage blastomeres with the actin cytoskeleton inhibitor Cytochalasin D before deforming them repeatedly into a narrow pipette (Fig 2E and 2F, S4 Movie). By adapting the number of aspirations of softened blastomeres, we could carefully fragment blastomeres while keeping their sister cell mechanically stressed but intact. Importantly, we measured identical surface tensions in intact and fragmented cells, indicating that fragmentation does not seem to impact the overall mechanics of the actomyosin cortex (S2 Fig, S4 Table, S1 Data). While the fragmented cell was reduced to the size of a 4-cell stage blastomere, both fragmented and manipulated cells eventually succeeded in dividing to the 4-cell stage. After waiting 1 hour for cells to recover from this procedure, we examined them for the presence of PeCoWaCo.

[Figure 1 caption, panels D and E: (D) Proportion of zygote (n = 27), 2-cell (n = 52), 4-cell (n = 39), and 8-cell stage (n = 34) embryos showing detectable oscillations after Fourier transform of PIV analysis; error bars show SEM; chi-squared p-values comparing stages are indicated (S1 Table, S1 Data). (E) Oscillation period of zygote (n = 13), 2-cell (n = 18), 4-cell (n = 31), and 8-cell (n = 21) stage embryos; larger circles show median values; Student t test p-values are indicated (S1 Table, S1 Data). PeCoWaCo, periodic cortical waves of contraction; PIV, Particle Image Velocimetry.]

Cell size does not influence the properties of PeCoWaCo

The transition from 2- to 4-cell stage blastomere size is neither required nor sufficient to initiate PeCoWaCo. Nevertheless, the decrease in the period of PeCoWaCo remarkably scales with the stepwise decrease in blastomere size (Fig 1E). Given a constant propagation velocity, PeCoWaCo may reduce their period according to the reduced distance to travel around smaller cells. To test whether cell size determines the PeCoWaCo oscillation period, we set out to manipulate cell size over a broad range. For this, we used 16-cell stage blastomeres, which consistently display PeCoWaCo [36] and whose intermediate size permits broad size manipulation (Fig 3A-3D). By fusing varying numbers of 16-cell stage blastomeres, we built cells equivalent in size to 8-, 4-, and 2-cell stage blastomeres (Fig 3E-3G, S5 Movie, S5 Table, S1 Data) with indistinguishable surface tensions (S3A Fig, S6 Table, S1 Data). In addition, by fragmenting 16-cell stage blastomeres, we made smaller cells equivalent to 32-cell stage blastomeres (Fig 3H-3J, S6 Movie, S5 Table, S1 Data) [42]. Together, we could image 16-cell stage blastomeres with sizes ranging from 10 μm to 30 μm in radius (Fig 3K and 3L, S3B Fig). Finally, to identify how the period may scale with cell size by adjusting the velocity of the contractile wave, we segmented the outline of cells to compute the local curvature, which, unlike PIV analysis, allows us to track contractile waves and determine their velocity in addition to their period [22,36]. We find that fused and fragmented 16-cell stage blastomeres show the same period, regardless of their size (Fig 3F, 3I and 3K, S5 Table, S1 Data). This could be explained if the wave velocity scaled with cell size. However, we find that the wave velocity remains constant regardless of cell size (Fig 3E, 3J and 3L, S5 Table, S1 Data).
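The curvature-based wave analysis mentioned above can be sketched as follows. Assuming the cell outline has already been segmented into an ordered list of boundary points per frame, local curvature can be computed by finite differences and the oscillation period read from the dominant spectral peak of the curvature at a boundary point. The frame interval, the 30-300 s period band and the helper names below are illustrative assumptions, not the authors' exact pipeline.

import numpy as np

def local_curvature(x, y):
    """Signed curvature along an ordered (approximately closed) 2D contour."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / np.clip((dx ** 2 + dy ** 2) ** 1.5, 1e-12, None)

def oscillation_period(curvature_kymograph, dt, point=0, min_period=30.0, max_period=300.0):
    """Dominant temporal period (s) of the curvature at one boundary point.

    curvature_kymograph : array of shape (n_frames, n_boundary_points)
    dt                  : frame interval in seconds
    """
    signal = np.asarray(curvature_kymograph, float)[:, point]
    signal = signal - signal.mean()                 # remove the static shape contribution
    freqs = np.fft.rfftfreq(signal.size, d=dt)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs > 1.0 / max_period) & (freqs < 1.0 / min_period)
    if not band.any():
        return None                                 # recording too short for this band
    return 1.0 / freqs[band][np.argmax(power[band])]

# The wave velocity can then be estimated from the slope of the curvature maxima in the
# (boundary position, time) kymograph; averaging the period over several boundary points
# makes the estimate more robust.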
Therefore, both the oscillation period and wave velocity are properties of PeCoWaCo that are robust to changes in cell size and associated curvature. Fusion of cells causes blastomeres to contain multiple nuclei, while cell fragmentation creates enucleated fragments. Interestingly, enucleated fragments continued oscillating with the same period and showing identical propagation velocities as the nucleus-containing fragments ( Fig 3I). These measurements indicate that PeCoWaCo are robust to the absence or presence of single or multiple nuclei and their associated functions. Together, using fusion and fragmentation of cells, we find that PeCoWaCo oscillation properties are robust to a large range of size perturbations. Therefore, other mechanisms must be at play to regulate periodic contractions during preimplantation development. Cortical maturation during cleavage stages Despite the apparent relationship between cell size and PeCoWaCo during preimplantation development, our experimental manipulations of cell size reveal that PeCoWaCo are not influenced by cell size. PeCoWaCo result from the activity of the actomyosin cortex, which could become stronger during cleavage stages and make PeCoWaCo more prominent as previously observed during the 16-cell stage [36]. Since actomyosin contractility generates a significant portion of the surface tension of animal cells, this would translate in a gradual increase in surface tension. To investigate this, we set to measure the surface tension of cells as a readout of contractility during cleavage stages using micropipette aspiration. Contrary to our expectations, we find that surface tension gradually decreases from the zygote to 8-cell stage (Fig 4A-4C, S7 Table, S1 Data) and noticeably mirrors the behavior of the period of PeCoWaCo during cleavage stages ( Fig 1E). Therefore, PeCoWaCo unlikely result simply from increased contractility. Instead, the tension of blastomeres at the zygote and 2-cell stages may be too high for PeCoWaCo to become visible in most embryos. To reduce the tension of the cortex, we used low concentrations (100 nM) of the actin polymerization inhibitor Latrunculin A (Fig 4D and 4E, S7 Table, S1 Data) [32]. Softening the cortex of 2-cell stage embryos increased the proportions of embryos displaying PeCoWaCo ( Fig 4F, S8 Movie, S7 Table, S1 Data). This suggests that PeCoWaCo become more visible thanks to the gradual softening of the cortex of blastomeres during cleavage stages. Moreover, low concentrations of Latrunculin A decreased the oscillation period of PeCoWaCo down to approximately 100 seconds, as compared to approximately 150 seconds for the DMSO control embryos (Fig 4G, S7 Table, S1 Data). This suggests that modifications of the polymerization rate of the actin cytoskeleton could be responsible for the increase in PeCoWaCo frequency observed during cleavage stages. To investigate the changes responsible for blastomere softening during cleavage stages, we first looked into changes in cortical organization. Using super resolution microscopy on phalloidin-stained embryos, we measured the thickness of the actomyosin cortex, which has been reported to change with surface tension [43]. Using line scans orthogonal to the cell surface, we measured a width at half maximum of approximately 500 nm at the zygote stage (S5A-S5C Fig, S8 Table, S1 Data). 
This width increased during the 2-cell stage and fell back to its initial levels at the 4- and 8-cell stages (S5A-S5C Fig, S8 Table, S1 Data), suggesting cortical remodeling at the time of PeCoWaCo initiation. To investigate the molecular changes responsible for blastomere softening during cleavage stages, we took advantage of available single-cell RNA sequencing and proteomic data [44]. We noted that several regulators of actin polymerization, such as formins and actin related proteins (arps), decrease in their mRNA levels (S5D and S5E Fig). Consistently with mosaic sustained expression of GFP-Fmnl3, fewer embryos overexpressing GFP-Fmnl3 in all blastomeres showed PeCoWaCo as compared to those injected with GFP (Fig 4H-4J, S10 Movie, S7 Table, S1 Data). Interestingly, this effect is clearly present during the first half of the 4-cell stage, but GFP-Fmnl3 expressing embryos recover almost to the levels of GFP expressing embryos by the second half of the 4-cell stage (Fig 4I and 4J, S7 Table, S1 Data). Over long-term development, embryos expressing GFP-Fmnl3 compacted normally and formed blastocysts (S11 Movie). Together, this indicates that overexpressing Fmnl3 has a specific but transient effect. In fly embryos, the effect of Fmnl overexpression was proposed to dampen oscillations by stiffening the cortex [26]. To test the effect of Fmnl3 overexpression on the mechanical properties of cleavage stage mouse embryos, we measured their surface tension. Indeed, we measured surface tensions twice as high for embryos expressing GFP-Fmnl3 as for those expressing GFP alone (Fig 4K, S7 Table, S1 Data). We conclude that Fmnl3 down-regulation during cleavage stages is required for the softening of the cortex, which elicits the appearance of PeCoWaCo. Together, these experiments using the pulsatile nature of cell contractility reveal the unsuspected maturation of the cortex of blastomeres during the cleavage stages of mouse embryonic development.

Discussion

During cleavage stages, blastomeres halve their size with successive divisions. Besides the increased number of blastomeres, there is no change in the architecture of the mouse embryo until the 8-cell stage with compaction. We find that this impression of stillness is only true on a timescale of hours since, on the timescale of seconds, blastomeres display signs of actomyosin contractility. During the first 2 days after fertilization, contractility seems to mature by displaying more frequent and visible pulses. We further find that pulsed contractions do not rely on the successive reductions in cell size but rather on the gradual decrease in surface tension of the blastomeres. Therefore, during cleavage stages, cortical softening awakens zygotic contractility before preimplantation morphogenesis. Previous studies on the cytoskeleton of the early mouse embryo revealed that both the microtubule and intermediate filament networks mature during cleavage stages. Keratin intermediate filaments appear at the onset of blastocyst morphogenesis [45] and become preferentially inherited by prospective trophectoderm (TE) cells [46]. The microtubule network is initially organized without centrioles around microtubule bridges connecting sister cells [47,48]. The spindle of early cleavages also organizes without centrioles, similarly to meiosis [47,49]. As centrioles form de novo, cells progressively transition from meiosis-like to mitosis-like divisions [47].
We find that the actomyosin cortex also matures during cleavage stages by decreasing its oscillation period (Fig 1E) and its surface tension (Fig 4C). Interestingly, this decrease in cortical tension seems to be in continuity with the maturation of the oocyte. Indeed, the surface tension of mouse oocytes decreases during their successive maturation stages [50]. The softening of the oocyte cortex is associated with architectural rearrangements that are important for the cortical movement of the meiotic spindle [51,52]. Therefore, similarly to the microtubule network, the zygotic actomyosin cortex awakens progressively from an egg-like state. The maturation of zygotic contractility may be influenced by the activation of the zygotic genome, which occurs partly at the late zygote stage and mainly during the 2-cell stage [53,54]. Recent studies in frog and mouse propose that reducing cell size could accelerate zygotic genome activation (ZGA) [55,56]. We find that manipulating cell size is neither sufficient to trigger PeCoWaCo prematurely in most embryos nor required to initiate or maintain them in a timely fashion with the expected oscillation period of the corresponding cleavage stage (Figs 2 and 3). Instead, we find that the surface of blastomeres in the cleavage stages is initially too tense to allow PeCoWaCo to be clearly displayed (Fig 4). We identify Fmnl3 down-regulation as an essential step in the reduction of blastomere surface tension during cleavage stages (Fig 4). Taking place at the 2-cell stage, the contribution of the ZGA to zygotic contractility activation is unclear, since Fmnl3 down-regulation begins after the zygote stage both at the mRNA and protein levels [44]. The effect of mechanical constraints on pulsed contractions is reminiscent of recent reports in fly embryos in which Fmnl-mediated densification of a persisting actin cytoskeleton dampens pulsed contractions [26]. In addition, the influence of mechanical constraints can also come from external structures. For example, in starfish oocytes, removing an elastic jelly surrounding the egg softens them and renders contractile waves more pronounced [35]. Interestingly, the changes in surface curvature caused by cortical waves of contraction may influence the signaling and cytoskeletal machinery controlling the wave [35,57]. Such mechanochemical feedback has been proposed to regulate the period of contractions via the advection of regulators of actomyosin contractility [29]. As a result, the curvature of cells and tissues is suspected to regulate contractile waves [35,58]. Using cell fragmentation and fusion, we have manipulated the curvature of the surface over which PeCoWaCo travel (Fig 3). For radii ranging between 10 μm and 30 μm, we find no change in the period or traveling velocity of PeCoWaCo (Fig 3). This indicates that, in the mouse embryo, the actomyosin apparatus is robust to the changes in curvature taking place during preimplantation development. Therefore, the cleavage divisions per se are unlikely regulators of preimplantation contractility. The robustness of PeCoWaCo to changes of radii ranging from 10 μm to 30 μm is puzzling, since neither the oscillation period nor the wave velocity seems affected (Fig 3K and 3L).
One explanation would be that the number of waves present simultaneously changes with the size of the cells. Using our 2D approach, we could not systematically analyze this parameter. Nevertheless, we did note that some portions of the fused blastomeres did not display PeCoWaCo. These may be corresponding to apical domains, which do not show prominent PeCoWaCo [36]. Therefore, the relationship between the total area of the cell and the "available" or "excitable" area for PeCoWaCo may not be straightforward [19]. In the context of the embryo, in addition to the apical domain, cell-cell contacts also down-regulate actomyosin contractility and do not show prominent contractions [22]. As cell-cell contacts grow during compaction and apical domains expand [59,60], the available excitable cortical area for PeCoWaCo eventually vanishes [3]. Together, our study uncovers the maturation of the actomyosin cortex, which softens and speeds up the rhythm of contractions during the cleavage stages of the mouse embryo. Interestingly, zebrafish embryos also soften during their cleavage stages, enabling doming, the first morphogenetic movement in zebrafish [61]. It will be important to investigate whether cell and tissue softening during cleavage stages is conserved in other animals. Embryo work Recovery and culture. All animal work is performed in the animal facility at the Institut Curie, with permission by the institutional veterinarian overseeing the operation (APAFIS #11054-2017082914226001). The animal facilities are operated according to international animal welfare rules. Only embryos surviving the experiments were analyzed. Survival is assessed by continuation of cell division as normal when embryos are placed in optimal culture conditions. Mouse lines. Mice are used from 5 weeks old on. (C57BL/6xC3H) F1 hybrid strain is used for wild-type (WT). To visualize plasma membranes, mTmG (Gt(ROSA)26Sor tm4(ACTB-tdTomato,-EGFP)Luo ) is used [62]. Isolation of blastomeres. ZP-free 2-cell or 4-cell stages embryos are aspirated multiple times (typically between 3 and 5 times) through a smoothened glass pipette (narrower than the embryo but broader than individual cells) until dissociation of cells. Cytochalasin D (Sigma, C2618-200UL) 10 mM DMSO stock is diluted to 10 μM in KSOM. To fragment cells, isolated 2-or 16-cell stage blastomeres were treated with Cytochalasin D for 20 minutes before being gently aspirated into a smoothened glass pipette of diameter about 30 or 5 to 10 μm, respectively [59]. Moreover, 2 to 3 repeated aspirations are typically sufficient to clip cells into to 2 large fragments, one containing the nucleus and one without. Cells that did not fragment after 2 aspirations are used as control. For 2-cell stage fragmentation, nucleated fragments divisions were observed in 9/15 of the cases. Enucleated fragments started to deform extensively 6 hours after fragmentation, making it difficult to measure PeCoWaCo and surface tension. GenomONE-CF FZ SeV-E cell fusion kit (Cosmo Bio, Tokyo, Japan, ISK-CF-001-EX) is used to fuse blastomeres [40]. HVJ envelope is resuspended following manufacturer's instructions and diluted in FHM for use. To fuse blastomeres of embryos at the 16-cell stage, embryos are incubated in 1:50 HVJ envelope for 15 minutes at 37˚C followed by washes in KSOM. Latrunculin A (Tocris Bioscience, ref 3973) 10 mM DMSO stock is diluted to 100 nM in KSOM. To soften cells, 2-cell stage embryos are imaged in medium containing Latrunculin A covered with mineral oil for 2 hours. 
Fmnl3 cDNA isolation, cloning, and in vitro mRNA synthesis. To isolate cDNA of Fmnl3, we performed total RNA extraction from a pool of 50 zygotes using the PicoPure RNA Isolation Kit (Thermo Fisher Scientific, Walthan, MA, USA, KIT0204). DNase treatment is performed during the extraction, using RNase-Free DNase Set (QIAGEN, Hilden, Germany, 79254). Subsequently, a cDNA library is synthesized with oligo(dT) (Thermo Fisher Scientific, 18418012) using the Super-Script III Reverse Transcriptase kit (Thermo Fisher Scientific, 18080044) on all the extracted RNA, according to manufacturer's instructions. As a final step, a fragment of 3,084 bp corresponding to Fmnl3 isoform 202 (MGI:109569) is specifically isolated from the cDNA library, by PCR amplification with forward (fw) and reverse (rv) primers GCATGGACGAGCTGTACAAGGGCAACCTGGAGAGCACCGA and TAGTTCTAGACC GGATCCGGCTAACAGTTTGACTCGTCATG, respectively. To generate the GFP-Fmnl3 plasmid construct for in vitro mRNA synthesis, the Gibson Assembly cloning method was used. Three linear DNA fragments, corresponding to pCS2 + backbone, GFP reporter gene, and Formin like 3 (Fmnl3) cDNAs, are initially generated by PCR amplification. During this step, overlapping ends are incorporated into each fragment. Forward and reverse primers to obtain a 4,087-bp fragment of the pCS2+ backbone: TGACGAGTCAAACTGTTAGCCGGATCCGGTCTAGAACTATAGTGAGTCGT and AGTGAGTCGTATTACCGGATCCGGTCTATAGTGTCACCTAAATC. Forward and reverse primers to obtain a 717-bp fragment encoding GFP: CGGTAATAC GACTCACTATAGGCCGGATCCGGATGGTGAGCAAGGGCGAGGA and TCGGTGCT CTCCAGGTTGCCCTTGTACAGCTCGTCCATGC. Following DNA purification, the assembly of the final construct is achieved by incubating the 3 fragments in the Gibson Assembly Master Mix (NEB, Ipswich, MA, USA, E2611S), according to the manufacturer's instruction. Following the linearization of the pCS2-GFP-Fmnl3 plasmid using Hind III, GFP-Fmnl3 mRNA is transcribed using the mMESSAGE mMACHINE SP6 Kit (Invitrogen, Waltham, MA, USA, AM1340) according to manufacturer's instructions and resuspended in Rnase-free water. GFP mRNA is generated by in vitro transcription of a GFP linear DNA fragment of approximately 750 bp obtained by PCR amplification from the pCS2-GFP-Fmnl3 plasmid, with fw primer ATTTAGGTGACACTATAGAGCC and rv primer CTACTTGTACAGCTCGTC CAT. Microinjection. Glass capillaries (Harvard Apparatus (Holliston, MA, USA) glass capillaries with 780-μm inner diameter) are pulled using a needle puller and micro forged to forge a holding pipette and an injection needle. The resulting injection needles are filled with mRNA solution diluted to 1 μg/μL in injection buffer (5 mM Tris-HCl pH = 7.4, 0.1 mM EDTA). The filled needle is positioned on a micromanipulator (Narishige MMO-4) and connected to a positive pressure pump (Eppendorf FemtoJet 4i). Embryos are placed in FHM drops covered with mineral oil under Leica (Wetzlar, Germany) TL Led microscope. Two-cell stage embryos were injected while holding with holding pipette connected to a Micropump CellTram Oil. Micropipette aspiration. As described previously [22,64], a microforged micropipette coupled to a microfluidic pump (Fluigent, Le Kremlin-Bicêtre, France, MFCS EZ) is used to measure the surface tension of embryos. In brief, micropipettes of radii 8 to 16 μm are used to apply stepwise increasing pressures on the cell surface until reaching a deformation, which has the radius of the micropipette (R p ). 
At steady state, the surface tension γ of the cell is calculated from the Young-Laplace law applied between the cell and the micropipette: γ = Pc / [2 (1/Rp − 1/Rc)], where Pc is the critical pressure used to deform the cell of radius of curvature Rc. Eight-cell stage embryos are measured before compaction (all contact angles < 105°), during which surface tension would increase [22]. Fragmented cells and their control cells are measured 10 to 15 hours after fragmentation. At that point, enucleated fragments are mostly irregular in shape and cannot be measured. Measurements of individual blastomeres from the same embryo are averaged and plotted as such. Using the experiment designer tool of ZEN (Zeiss), we set up nested time-lapses in which all embryos are imaged every 3 to 5 hours for approximately 10 minutes, with an image taken every 5 seconds at 2 focal planes positioned 10 μm apart. Embryos are kept in a humidified atmosphere supplied with 5% CO2 at 37°C. mTmG embryos are imaged at the 16-cell stage using an inverted Zeiss Observer Z1 microscope with a CSU-X1 spinning disc unit (Yokogawa, Tokyo, Japan). Excitation is achieved using a 561 nm laser through a 63×/1.2 C Apo Korr water immersion objective. Emission is collected through 595/50 band-pass filters onto an ORCA-Flash 4.0 camera (C11440, Hamamatsu). The microscope is equipped with an incubation chamber to keep the sample at 37°C and supply the atmosphere with 5% CO2. Surface tension measurements are performed on a Leica DMI6000 B inverted microscope equipped with a 40×/0.8 DRY HC PL APO Ph2 (11506383) objective and a Retina R3 camera with a 0.7× lens in front of the camera. The microscope is equipped with an incubation chamber to keep the sample at 37°C and supply the atmosphere with 5% CO2. Stained embryos are imaged on a Zeiss LSM900 inverted laser scanning confocal microscope with an Airyscan detector. Excitation is achieved using a 488-nm laser line through a 63×/1.4 OIL DICII PL APO objective. Emission is collected through a 525/50 band-pass filter onto an Airyscan photomultiplier (PMT), allowing the resolution to be increased by up to a factor of 1.7.

Data analysis

Image analysis. Manual shape measurements. Fiji [66] is used to measure cell, embryo, and pipette sizes, and wave velocity. The circle tool is used to fit a circle onto cells, embryos, and pipettes. The line tool is used to fit lines onto curvature kymographs. PIV analysis. To detect PeCoWaCo in phase contrast images of embryos, we use PIV analysis followed by a Fourier analysis. As previously [22,40], PIVlab 2.02 running on MATLAB [67,68] is used to process approximately 10-minute-long time lapses with images taken every 5 seconds, using 2 successive passes through interrogation windows of 20/10 μm, resulting in approximately 180 vectors per embryo. The x- and y-velocities of individual vectors from PIV analysis are used for Fourier analysis. A Fourier transform of the vector velocities over time is performed using MATLAB's fast Fourier transform function. The resulting Fourier transforms are squared to obtain individual power spectra. Squared Fourier transforms in the x and y directions of all vectors are averaged for individual embryos, resulting in mean power spectra of individual embryos. Spectra of individual embryos are checked for the presence of a distinct amplitude peak to extract the oscillation period. The peak value between 50 seconds and 200 seconds was taken as the amplitude, as this oscillation period range is detectable by our imaging method.
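As a small worked example of the Young-Laplace relation used above for micropipette aspiration, the sketch below computes a surface tension from the critical pressure and the pipette and cell radii. This is not the authors' code; the function name, the SI units, and the numerical values in the example are assumptions chosen only for illustration.

```python
# Minimal sketch (not the authors' code) of the Young-Laplace relation used
# for micropipette aspiration: gamma = Pc / (2 * (1/Rp - 1/Rc)).
# Assumed units: pressure in Pa, radii in m, so the tension comes out in N/m.

def surface_tension(critical_pressure_pa: float,
                    pipette_radius_m: float,
                    cell_radius_m: float) -> float:
    """Surface tension at the critical pressure Pc, for pipette radius Rp and cell radius Rc."""
    if pipette_radius_m >= cell_radius_m:
        raise ValueError("the pipette radius must be smaller than the cell radius")
    return critical_pressure_pa / (2.0 * (1.0 / pipette_radius_m - 1.0 / cell_radius_m))

# Hypothetical example values: Pc = 300 Pa, Rp = 10 um, Rc = 30 um
gamma = surface_tension(300.0, 10e-6, 30e-6)
print(f"surface tension ~ {gamma * 1e3:.2f} mN/m")
```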
An embryo is considered to be oscillating when the amplitude peaks 1.777 times above background (taken as the mean value of the power spectrum signal of a given embryo). This threshold value was determined using CutOffFinder [69] (S1 Fig) to minimize false positives and false negatives according to visual verification of time-lapse movies. The number of oscillating zygotes is likely overestimated, while the number of oscillating 8-cell stage embryos is underestimated (S1 Fig). Two-cell, 4-cell, and 8-cell stages are considered early during the first half of the corresponding stage and late during the second half. Since PeCoWaCo halt during mitosis (S2 Movie), time lapses including dividing cells were excluded from the analysis. Local curvature analysis. To measure PeCoWaCo period, amplitude, and velocity, we analyze the associated changes in surface curvature and perform Fourier analysis. Importantly, since we can only extract these parameters from oscillating blastomeres and embryos, data shown in Fig 3 and S4 Fig come from cells and embryos selected based on their visible oscillation. To obtain the local curvature of isolated blastomeres and embryos, we developed an approach similar to that of [22,36,70]. First, a Gaussian blur is applied to images using Fiji [66]. Then, using ilastik [71], pixels are associated with cell surface or background. Segmentations of cells are then used in a custom-made Fiji plug-in (called WizardofOz, found under the MTrack repository) for computing the local curvature information, using the start, center, and end point of a 10-μm strip on the cell surface to fit a circle. The strip is then moved by 1 pixel along the segmented cell, and a new circle is fitted. This process is repeated until all the points of the cell are covered. The radii of curvature of the 10-μm strip boundaries are averaged. A kymograph of local curvature values around the perimeter over time is produced by plotting the perimeter of the strip over time. Curvature kymographs obtained from local curvature tracking are then exported into a custom-made Python script for 2D fast Fourier transform analysis. Spectra of individual cells are checked for the presence of a distinct amplitude peak to extract the oscillation period. The peak value between 50 seconds and 200 seconds was taken as the amplitude, as this oscillation period range is detectable by our imaging method. To measure the wave velocity, a line is manually fitted on the curvature kymograph using Fiji. Cortex thickness measurement. Super resolution images obtained using Airyscan microscopy are used to measure cortex thickness. The full width at half maximum of cortical intensity profiles was used to assess cortical thickness using the CortexThicknessAnalysis tool [43] available at https://github.com/PaluchLabUCL/CortexThicknessAnalysis. Statistics. Data are plotted using Excel (Microsoft, Redmond, WA, USA) and the R-based SuperPlotsOfData tool [72]. Mean, standard deviation, median, 1-tailed Student t test, and chi-squared p-values are calculated using Excel (Microsoft) or R (R Foundation for Statistical Computing). Statistical significance is considered when p < 10^−2. The sample size was not predetermined and simply results from the repetition of experiments. No sample that survived the experiment, as assessed by the continuation of cell divisions, was excluded. No randomization method was used. The investigators were not blinded during experiments.
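The oscillation-detection criterion described above (squared Fourier transforms of the PIV velocities averaged into a mean power spectrum, the peak taken in the 50-200 second band, and an embryo scored as oscillating when that peak exceeds 1.777 times the spectrum mean) can be sketched in a few lines. The snippet below is not the released PeCoWaCo code; the array layout, sampling interval, and function name are assumptions used only to illustrate the logic of the criterion.

```python
# Minimal sketch of the oscillation-detection criterion described above
# (not the released PeCoWaCo code; array shapes and names are assumptions).
import numpy as np

def detect_oscillation(velocities, dt=5.0, band=(50.0, 200.0), factor=1.777):
    """velocities: array of shape (n_vectors, n_frames) of PIV x- or y-velocities
    sampled every dt seconds. Returns (is_oscillating, period_seconds)."""
    v = np.asarray(velocities, dtype=float)
    spectra = np.abs(np.fft.rfft(v, axis=1)) ** 2      # squared Fourier transforms = power spectra
    mean_spectrum = spectra.mean(axis=0)                # average over all PIV vectors
    freqs = np.fft.rfftfreq(v.shape[1], d=dt)           # frequencies in Hz
    with np.errstate(divide="ignore"):
        periods = np.where(freqs > 0, 1.0 / freqs, np.inf)
    in_band = (periods >= band[0]) & (periods <= band[1])
    if not in_band.any():
        return False, None
    peak_idx = np.argmax(np.where(in_band, mean_spectrum, -np.inf))  # peak within 50-200 s
    background = mean_spectrum.mean()                    # mean of the whole power spectrum
    is_oscillating = mean_spectrum[peak_idx] > factor * background
    return is_oscillating, periods[peak_idx]
```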
Code availability

The code used to analyze the oscillation frequencies from PIV and local curvature analyses can be found at https://github.com/MechaBlasto/PeCoWaCo.git. The Fiji plug-in for local curvature analysis, WizardofOz, can be found under the MTrack repository.

Supporting information

S1 Fig. (related to Fig 1).
(B) Surface tension of mechanical control (n = 14) or fragmented cells (n = 14). Student t test p-value is indicated (S4 Table). Larger circles show median values. (PDF)
S3 Fig. (related to Fig 3). Surface tension of fused blastomeres. (A) Surface tension of cells resulting from the fusion of 8, 4, or 2 16-cell stage blastomeres (n = 18, 20, and 14 embryos, respectively). Student t test p-value is indicated (S6 Table, S1 Data).
S1 Table. (related to Fig 1). p-Values from chi-squared test for PeCoWaCo detection and from Student t test for period comparisons. Red when above 0.05, green when below 0.01, and black in between. See S1 Data for individual quantitative observations. PeCoWaCo, periodic cortical waves of contraction.
(related to Fig 2). p-Values from chi-squared test for PeCoWaCo detection and from Student t test for period comparisons. Red when above 0.05, green when below 0.01, and black in between. See S1 Data for individual quantitative observations. PeCoWaCo, periodic cortical waves of contraction. (DOCX)
(related to Fig 3). p-Values from Student t test. Red when above 0.05, green when below 0.01, and black in between. See S1 Data for individual quantitative observations. (DOCX)
S6 Table. (related to S3 Fig). p-Values from Student t test. Red when above 0.05, green when below 0.01, and black in between. See S1 Data for individual quantitative observations. (DOCX)
S7 Table. (related to Fig 4). p-Values from chi-squared test for PeCoWaCo detection and from Student t test for period and surface tension comparisons. Red when above 0.05, green when below 0.01, and black in between. See S1 Data for individual quantitative observations. PeCoWaCo, periodic cortical waves of contraction. (DOCX)
S8 Table. (related to S5 Fig). p-Values from chi-squared test for PeCoWaCo detection and from Student t test for period and surface tension comparisons. Red when above 0.05, green when below 0.01, and black in between. See S1 Data for individual quantitative observations. PeCoWaCo, periodic cortical waves of contraction.
2022-03-26T06:23:38.686Z
2022-03-01T00:00:00.000
{ "year": 2022, "sha1": "c8a3d360367b31f93ac9c5fda7d354484e79915e", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosbiology/article/file?id=10.1371/journal.pbio.3001593&type=printable", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "221e5f2838c7dbeff4a35c2b7fec075900e94bba", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
258803571
pes2o/s2orc
v3-fos-license
Lamotrigine as an alternative treatment for paroxysmal kinesigenic dyskinesia

Brief episodes of involuntary movement triggered by purposeful actions are characteristic of paroxysmal kinesigenic dyskinesia (PKD) and can interfere with daily life. 1 Carbamazepine and oxcarbazepine are effective at reducing episodes but have teratogenic risks that limit their therapeutic potential. 2 There is limited information describing alternate sodium channel blockers as first-line therapies for PKD, and specifically there is no recommendation for the use of alternative agents for females of childbearing potential. We conducted an institutional retrospective chart review of patients with PKD seen between 2013 and 2022. Our research ethics board approved the identification of participants by diagnostic code in the electronic medical records and waived the requirement for written informed consent. Eleven patients were identified with a confirmed diagnosis of PKD. Features of the cohort are outlined in Table 1. Ten patients were started on a sodium channel blocking agent. Five patients received carbamazepine (Table 2), with complete resolution of the movements in four patients and more than 90% reduction in the other individual. One person took phenytoin for several years before transitioning to carbamazepine. Three patients took oxcarbazepine, with a >90% reduction in events in two (Table 2). Two girls were treated with lamotrigine as a first-line agent. Patient 1: Onset estimated age 8 years, her episodes involve an aura ("tickling sensation") in the foot followed by unilateral posturing of the arm or leg (left or right) +/− facial dystonia. Provoked by running or walking. Duration: 30 s. Frequency: "many"/day. Treated at age 15 with lamotrigine 25 mg daily, increased to twice daily, tolerated for 4.5 years at time last seen. Has an affected sibling and pathogenic mutation in PRRT2. Patient 2: Onset age 10 years, she describes a feeling of weakness in the right side of the body, with head turn to the right, right arm posturing towards chest, and muscles feel tense or "paralyzed."
Episodes are stereotyped and consistently provoked by walking up stairs. Duration: 5 s. Frequency: "multiple"/day. Treated at age 16 with lamotrigine 50 mg daily, tolerated for 1.5 years at time last seen. Did not pursue genetic testing. Both patients reported complete resolution of their events on lamotrigine with recurrence only with missed doses. We propose lamotrigine as a preferred agent in females of childbearing potential. Pharmacological management of PKD is indicated for frequent, intolerable episodes that interfere with daily life. There are no clinical trials for PKD, so physicians must rely on Class IV evidence to guide management. The literature supports the use of carbamazepine and oxcarbazepine as equivalent first-line agents in the management of PKD. 1,[3][4][5] Lamotrigine has been suggested as a second-line agent for PKD with few reports of use as a first-line agent. [6][7][8] The largest cohort reported 100% attack-free rate after four weeks of lamotrigine in 18 pre-pubescent children. 6 Of particular importance is the evidence that carbamazepine and oxcarbazepine can cause rare but significant fetal malformations when used in females of childbearing age, whereas lamotrigine has the lowest risk of fetal malformation. 2 A recent meta-analysis found a statistically significant increase in major congenital malformations with carbamazepine monotherapy (odds ratio [OR] 1.37) and oxcarbazepine (OR 1.32 *not statistically significant) compared to lamotrigine (OR 0.96). 2 The need for an alternative agent in this population is not adequately addressed in the literature. Our experience suggests that lamotrigine is an effective agent in adolescent post-pubertal patients and may be a safe and effective option for females of childbearing potential. Further studies with larger cohorts will be required to investigate lamotrigine's efficacy and tolerability as a first-line agent for PKD and to better understand the effect of pregnancy on PKD and on lamotrigine therapy.
2023-05-20T15:17:08.751Z
2023-05-17T00:00:00.000
{ "year": 2023, "sha1": "99f3ec48e2ea5df737d691491d10be99ab2323a8", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1002/cns3.20017", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "bfca9096e1d11a75ccf350cae35a4cba300dc98e", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [] }
257066648
pes2o/s2orc
v3-fos-license
SAFFRON-103: a phase 1b study of the safety and efficacy of sitravatinib combined with tislelizumab in patients with locally advanced or metastatic non-small cell lung cancer

Background Some patients with locally advanced/metastatic non-small cell lung cancer (NSCLC) respond poorly to anti-programmed cell death protein 1 (PD-1)/anti-programmed death-ligand 1 (PD-L1) treatments. Combination with other agents may improve the outcomes. This open-label, multicenter, phase 1b trial investigated the combination of sitravatinib, a spectrum-selective tyrosine kinase inhibitor, plus anti-PD-1 antibody tislelizumab. Methods Patients with locally advanced/metastatic NSCLC were enrolled (Cohorts A, B, F, H, and I; N=22–24 per cohort). Cohorts A and F included patients previously treated with systemic therapy, with anti-PD-(L)1-resistant/refractory non-squamous (cohort A) or squamous (cohort F) disease. Cohort B included patients previously treated with systemic therapy, with anti-PD-(L)1-naïve non-squamous disease. Cohorts H and I included patients without prior systemic therapy for metastatic disease, no prior anti-PD-(L)1/immunotherapy, with PD-L1-positive non-squamous (cohort H) or squamous (cohort I) histology. Patients received sitravatinib 120 mg orally one time per day plus tislelizumab 200 mg intravenously every 3 weeks, until study withdrawal, disease progression, unacceptable toxicity, or death. The primary endpoint was safety/tolerability among all treated patients (N=122). Secondary endpoints included investigator-assessed tumor responses and progression-free survival (PFS). Results Median follow-up was 10.9 months (range: 0.4–30.6). Treatment-related adverse events (TRAEs) occurred in 98.4% of the patients, with ≥Grade 3 TRAEs in 51.6%. TRAEs led to discontinuation of either drug in 23.0% of the patients. Overall response rate was 8.7% (n/N: 2/23; 95% CI: 1.1% to 28.0%), 18.2% (4/22; 95% CI: 5.2% to 40.3%), 23.8% (5/21; 95% CI: 8.2% to 47.2%), 57.1% (12/21; 95% CI: 34.0% to 78.2%), and 30.4% (7/23; 95% CI: 13.2% to 52.9%) in cohorts A, F, B, H, and I, respectively. Median duration of response was not reached in cohort A and ranged from 6.9 to 17.9 months across other cohorts. Disease control was achieved in 78.3–90.9% of the patients. Median PFS ranged from 4.2 (cohort A) to 11.1 months (cohort H). Conclusions In patients with locally advanced/metastatic NSCLC, sitravatinib plus tislelizumab was tolerable for most patients, with no new safety signals and overall safety profiles consistent with known profiles of these agents. Objective responses were observed in all cohorts, including in patients naïve to systemic and anti-PD-(L)1 treatments, or with anti-PD-(L)1 resistant/refractory disease. Results support further investigation in selected NSCLC populations. Trial registration number NCT03666143.

WHAT IS ALREADY KNOWN ON THIS TOPIC ⇒ Anti-programmed cell death protein 1 (PD-1) or anti-programmed death-ligand 1 (PD-L1) therapy has clinical benefit in locally advanced/metastatic non-small cell lung cancer (NSCLC), but some patients have poor responses or develop resistance.
Preliminary clinical data from studies in selected NSCLC populations suggest that combining multitargeted tyrosine kinase inhibitors (TKIs) with PD-(L)1 inhibitors may improve responses and warrants further investigation.

WHAT THIS STUDY ADDS ⇒ This trial is the first study of combination therapy with the multi-TKI sitravatinib, which targets TAM (TYRO3, AXL, MER) and split kinase family receptors, plus the anti-PD-1 monoclonal antibody tislelizumab in patients with locally advanced/metastatic NSCLC. Results indicated no unexpected safety signals, with objective tumor responses observed across a broad range of NSCLC treatment settings, including in patients naïve to or previously treated with systemic treatment, naïve to anti-PD-(L)1 treatment or with resistant/refractory disease, and with either non-squamous or squamous histology.

BACKGROUND

As monotherapy, first-line use of anti-programmed cell death protein 1 (PD-1) and anti-programmed death-ligand 1 (PD-L1) antibodies in patients with locally advanced or metastatic non-small cell lung cancer (NSCLC) has been reported to prolong progression-free survival (PFS) and overall survival (OS) compared with chemotherapy, with the greatest benefits typically seen in tumors with elevated levels of PD-L1 expression (tumor proportion score ≥50%). 1 2 A meta-analysis of metastatic NSCLC trials estimated 1-year PFS rates for first-line PD-1 blockade of 40.3% among patients with NSCLC with PD-L1 ≥50%, 35.0% in those with PD-L1 of 1-49%, and 19.9% in patients with PD-L1 <1%. 3 Similarly, this analysis estimated 2-year OS rates in patients receiving first-line PD-1 blockade of 47.5%, 34.9%, and 16.7% in the respective PD-L1 subgroups. 3 These data highlight the need for alternative regimens for patients who may achieve only limited clinical benefit from first-line anti-PD-(L)1 monotherapy. In later lines of therapy, anti-PD-(L)1 therapies have been shown to prolong OS versus docetaxel, although the benefits for long-term PFS are less clear, and objective response rates (ORRs) are typically limited. [4][5][6][7] In addition, many patients with NSCLC treated with PD-(L)1 inhibitor therapy (at first-line or later stages) have cancers that are refractory to treatment, or that develop resistance and progress following an initial response, 7 highlighting the need for effective treatment options in subsequent lines. A key investigational strategy for improving treatment outcomes is the combination of anti-PD-(L)1 therapies with other agents that have immunomodulatory and antitumor properties. 8 9 A broad spectrum of such combinations is currently under exploration in NSCLC, 8 9 including the combination of sitravatinib with anti-PD-(L)1 therapy. Sitravatinib (MGCD516) is an orally available, spectrum-selective tyrosine kinase inhibitor (TKI) targeting TAM family receptors (TYRO3, AXL, MER) and split kinase family receptors (including vascular endothelial growth factor receptor 2 (VEGFR-2), KIT, and the platelet-derived growth factor receptor family). 10 TAM receptor tyrosine kinases are expressed on antigen-presenting cells, such as macrophages, and are involved in immune system homeostasis, particularly in the regulation of phagocytotic clearance of dying cells and suppression of inflammation.
11 TAM receptor signaling has been implicated in tumor metastasis, and receptor overexpression has been reported in various cancers, including lung cancer. 11 Targeting TAM family receptors affects macrophage polarization, favoring an immunostimulatory macrophage phenotype (M1) over an immunosuppressive phenotype (M2), thereby promoting an antitumor immune microenvironment. 10 Meanwhile, targeting VEGFR and KIT reduces the number of regulatory T cells and monocytic myeloid-derived suppressor cells (MDSCs), thereby further relieving immunosuppression 12 13 and creating an immune microenvironment that favors the antitumor activity of PD-(L)1 inhibition. 10 13 Preclinical studies demonstrate that sitravatinib reduces the number of MDSCs and increases the ratio of M1/M2-polarized macrophages, promoting the expansion of antitumor cytotoxic T cells, which may help overcome resistance to immune checkpoint inhibitors and augment antitumor immune responses. 10 In vivo, sitravatinib has been shown to have potent antitumor activity in mice and enhance the efficacy of PD-1 inhibition. 10 The present phase 1b study was therefore conducted to characterize the safety, tolerability, and preliminary antitumor activity of sitravatinib in combination with the anti-PD-1 monoclonal antibody tislelizumab. Tislelizumab has high binding affinity for PD-1, with different binding epitopes and more complete blockade of PD-L1 binding to PD-1 compared with nivolumab and pembrolizumab. 14 In addition, tislelizumab was specifically engineered to minimize Fcγ receptor binding on macrophages. 15 Phase 3 trials in patients with locally advanced or metastatic NSCLC have shown that tislelizumab monotherapy as second-line or third-line therapy improved OS, PFS, and ORR compared with docetaxel, 16 while combining tislelizumab with chemotherapy as first-line therapy improved ORR and PFS versus chemotherapy alone. 17 18 In the present trial, the combination of sitravatinib and tislelizumab was assessed in patients with a variety of advanced solid tumors. The NSCLC cohorts encompassed patients with non-squamous and squamous histology, varying levels of tumor cell (TC) PD-L1 expression, and those naïve to or previously treated with systemic therapy. Discrete cohorts were included for patients who had resistant/refractory disease on or after prior anti-PD-(L)1 therapy, enabling assessment of the ability of sitravatinib plus tislelizumab treatment to overcome such resistance.

Study design and patient population

An open-label, multicenter, single-arm, non-randomized phase 1b clinical trial was conducted in Australia and China, where 16 sites enrolled patients with NSCLC. The study enrolled nine cohorts of patients with various advanced solid tumors, including five cohorts of patients with NSCLC (figure 1; online supplemental table 1). All patients in the NSCLC cohorts were required to be aged ≥18 years, with histologically or cytologically confirmed disease, at least one measurable lesion (as defined by Response Evaluation Criteria in Solid Tumors [RECIST] V.1.1), with the selected target lesion(s) not previously treated with local therapy, or with progression following local therapy, with an Eastern Cooperative Oncology Group performance status ≤1, and no documented epidermal growth factor receptor mutation (wild-type status was required for non-squamous cohorts), anaplastic lymphoma kinase/proto-oncogene tyrosine-protein kinase ROS1 rearrangement, or B-Raf proto-oncogene, serine/threonine kinase mutations.
Additional cohort-specific inclusion criteria are summarized in figure 1 and in the online supplemental file, which includes the full inclusion and exclusion criteria.

Figure 1 Study design of SAFFRON-103: an open-label, multicenter, non-randomized, Phase 1b trial.* *This manuscript is focused on the five cohorts of patients with locally advanced or metastatic NSCLC, as indicated in bold font (cohorts A, B, F, H, and I, respectively). As summarized in online supplemental table 1, additional inclusion criteria for each of these NSCLC cohorts included: wild-type EGFR status, without documented ALK rearrangement, ROS1 rearrangement, or BRAF mutations for cohorts A, B and H; no documented EGFR mutation, BRAF mutation, ALK rearrangement, or ROS1 rearrangement for cohorts F and I. For patients in cohorts A and B with unknown EGFR mutation status, as well as patients in cohorts H and I, archival/fresh biopsy tumor tissues (formalin-fixed paraffin-embedded (FFPE) blocks with tumor tissues or unstained FFPE slides) were required during the screening period. If no archival tissue(s) could be provided, a fresh biopsy was mandatory. Documented test results were defined as those identified by local or central tissue-based testing. Full inclusion and exclusion criteria are provided in the online supplemental file. † For cohorts A and F: disease progression on or after 1-3 lines of systemic therapy, including anti-PD-(L)1 therapy as the most recent treatment for metastatic NSCLC; for cohort B: disease progression on or after 1-2 lines of systemic therapy, without prior exposure to an anti-PD-(L)1 therapy. ‡ The protocol was amended for cohort F from anti-PD-1/PD-L1 antibody treated or naïve metastatic, squamous NSCLC, to include anti-PD-1/PD-L1 antibody treated metastatic, squamous NSCLC. § No prior treatment with systemic therapy in the metastatic setting. Ab, antibody; ALK, anaplastic lymphoma kinase; BRAF, B-Raf proto-oncogene, serine/threonine kinase; ECOG PS, Eastern Cooperative Oncology Group performance status; EGFR, epidermal growth factor receptor; IV, intravenous; non-sq, non-squamous; NSCLC, non-small cell lung cancer; PD-1, programmed cell death protein 1; PD-L1, programmed death-ligand 1; PO, orally; QD, every day; Q3W, every 3 weeks; R/R, resistant/refractory; RCC, renal cell carcinoma; RECIST, Response Evaluation Criteria in Solid Tumors; ROS1, proto-oncogene tyrosine-protein kinase ROS1; sq, squamous.

Interventions

All patients were allocated to receive sitravatinib 120 mg orally one time per day plus tislelizumab 200 mg intravenously every 3 weeks, until study withdrawal, disease progression, unacceptable toxicity, or death. In the event of significant toxicities, the dose of sitravatinib could be reduced to 80 mg or 60 mg one time per day, with re-escalation not recommended but permitted on a case-by-case basis. Dose reductions were not permitted for tislelizumab. For both drugs, treatment could temporarily be suspended if required for suspected drug-related toxicities (for up to 28 days for sitravatinib and up to 12 weeks for tislelizumab). Treatment beyond investigator-assessed disease progression was permitted in cases of suspected pseudoprogression, with the patient's consent.
Treatment-emergent adverse events (TEAEs) were defined as those with an onset date (or a worsening in severity from baseline) on, or after, the first dose of study drug and up to 30 days following study drug discontinuation or initiation of new anticancer therapy, whichever occurred first, or up to 90 days after the last dose of tislelizumab for potential immune-mediated AEs (imAEs) (regardless of whether a new anticancer therapy is initiated). AEs were graded based on National Cancer Institute Common Terminology Criteria for Adverse Events V.5.0 and coded using Medical Dictionary for Regulatory Activities (MedDRA) V.24.0. Assessment of the incidence of potential imAEs was based on sponsor identification using a predefined list of MedDRA preferred terms derived from the known potential imAEs of tislelizumab and other anti-PD-1 antibodies. Evaluation of antitumor activity was a secondary endpoint and included investigator-assessed evaluation per RECIST V.1.1 of ORR, disease control rate (DCR), PFS, and duration of response (DoR). Tumor assessments were performed using CT scans (preferred) or MRI of the chest, abdomen, and pelvis, as well as any other known or suspected sites of disease. Imaging was performed approximately every 6 weeks during the first year of the study, and approximately every 9 weeks thereafter. Exploratory endpoints included OS and exploration of the potential predictive role of PD-L1 expression with regard to antitumor activity. PD-L1 assessment was conducted by the VENTANA SP263 immunohistochemistry assay by a central laboratory using archival or fresh biopsy tumor tissue. PD-L1 expression was determined by the percentage of TCs with any membrane staining above background. Subgroup analysis of PD-L1 TC expression used cut-offs of 1% (for cohorts A, B, and F) or 50% (for cohorts H and I). A full list of all study endpoints is provided in the online supplemental file. Statistical analyses The study planned to enroll 220-240 patients overall, including approximately 20 patients in each of the NSCLC cohorts. The sample size was not driven by statistical considerations. Safety analyses were performed in the safety analysis set, encompassing all patients who received ≥1 dose of either study drug, with results summarized using descriptive statistics. PFS and OS analyses used the safety analysis set (where applicable, patients without post-baseline tumor assessment for PFS were censored at day 1). Tumor response analyses used the efficacy evaluable analysis set, which included all dosed patients with measurable disease at baseline per RECIST V.1.1 and who had ≥1 evaluable post-baseline tumor assessment, unless treatment was discontinued due to disease progression or death before tumor assessment. ORR and DCR were determined and are reported with Clopper-Pearson two-sided 95% CIs. Median PFS, DoR, and OS were estimated using Kaplan-Meier methodology, with 95% CIs estimated using the Brookmeyer and Crowley method. Patients and treatment In total, 220 patients with NSCLC were screened and 122 were enrolled in the study between January 3, 2019, and February 10, 2021 (online supplemental figure 1). Of these 122 patients (the total NSCLC population), 115 were included in the five cohorts, with each cohort including 22-24 patients (the safety analysis set for these five cohorts) (table 1; online supplemental figure 1). 
The remaining seven patients had previously treated squamous NSCLC and were included in the total NSCLC study population for safety analyses, but excluded from the five cohorts. These patients were enrolled prior to a protocol amendment that limited cohort F to those with resistant/refractory disease after anti-PD-(L)1 inhibitor therapy, and were either PD-(L)1 inhibitor naïve (n=6) or had not received anti-PD-(L)1 therapy as the most recent treatment (n=1). In addition, one patient with PD-L1 <1% was enrolled in cohort H in violation of the protocolmandated PD-L1-positive status for this cohort; this case was classified as a protocol deviation (data were included in the safety and efficacy analyses but excluded from the PD-L1 subgroup analyses). As of the data cut-off (November 8, 2021), among the total NSCLC population, 86.1% of the patients had discontinued treatment, 2.5% continued to receive tislelizumab monotherapy, 11.5% continued to receive the combination, and no patients were receiving sitravatinib monotherapy (online supplemental figure 1). The median study follow-up was 10.9 months (range: 0.4-30.6) and varied between cohorts, from a median of 9.1 months in cohort A to 12.1 months in cohort B (online supplemental figure 1). Patient demographics and baseline characteristics were generally balanced across cohorts, with the exception of characteristics dictated by cohort-specific eligibility criteria (table 1). Among all patients, the median age was 61.0 years (range: 25-79 years), most patients were male (79.5%), Asian (87.7%), and had metastatic disease at study entry (94.3%) (table 1). In the anti-PD-(L)1 therapy resistant/refractory cohorts (A and F), the majority of (table 2). Overall, the incidence and nature of TRAEs appeared generally consistent between individual study cohorts (table 2 and online supplemental table 4). TRAEs leading to discontinuation of either drug occurred in 23.0% of the patients (online supplemental table 4), with immune-mediated lung disease and diarrhea each reported in three patients (2.5%), hemoptysis, cardiac failure, and PPE syndrome each reported in two patients (1.6%), and all other causes (including increased AST and increased ALT) reported in a single patient only. TRAEs related to sitravatinib led to sitravatinib discontinuation in 17.2% of the patients, with hemoptysis, immune-mediated lung disease, and diarrhea each reported in two patients (1.6%), and all other causes in single patients only. TRAEs related to tislelizumab led to tislelizumab discontinuation in 9.0% of the patients, with the only cause reported in more than one patient being immune-mediated lung disease (in three patients [2.5%]). Dose modification of sitravatinib (including dose reduction and/or interruption) owing to TRAEs occurred in 71.3% of the patients (online supplemental table 4), most commonly due to PPE syndrome in 17.2% of the patients. Increased AST and ALT led to sitravatinib dose modification in 7.4% and 8.2% of the patients, respectively. For tislelizumab, dose modification (including dose delay [drug withheld beyond the visit window] or interruption of the infusion) owing to TRAEs occurred in 41.8% of the patients, most commonly due to ALT increase, diarrhea, and hepatic function abnormal, each occurring in 4.1% of the patients. Increased AST led to tislelizumab dose modification in 3.3% of the patients. Serious TRAEs were reported in 36.1% of the patients, with diarrhea and hepatic function abnormal the most common events (4.1%). 
TRAEs leading to death were reported in five patients (4.1%) and included: two cases of 'death' (no further reason provided) and one of multiple organ dysfunction syndrome related to both study drugs (the primary cause of death was disease progression); one case of ischemic stroke related to sitravatinib only; and one case of cardiac failure with respiratory failure related to tislelizumab only. Potential imAEs with tislelizumab (as per the predefined list of preferred terms derived from the known potential imAEs of tislelizumab and other anti-PD-1 antibodies, and regardless of whether these events were considered by investigators to be treatment-related) were reported in 54.9% of the patients in the total NSCLC population (table 3). The most common categories of potential imAEs were immune-mediated hypothyroidism (34.4%), immune-mediated pneumonitis (20.5%), and immune-mediated hepatitis (10.7%) (table 3). Of the 25 patients who experienced immune-mediated pneumonitis (by category), eight patients (32.0%) experienced ≥Grade 3 events, including pneumonia in seven patients and immune-mediated lung disease in one patient. Though two patients (8.0%) had Grade 5 pneumonia imAEs, both were considered to be infectious pneumonia that was not related to treatment. Among patients with immune-mediated pneumonitis (by category), seven patients (28.0%) discontinued treatment due to such events, including cases of pneumonia (two patients), immune-mediated lung disease (three patients), interstitial lung disease (one patient), and pneumonitis (one patient). The most frequent individual types of potential imAEs were hypothyroidism (30.3%), pneumonia (12.3%), and hyperglycemia (9.0%) (online supplemental table 5). Across all cohorts, all responses were partial, with no complete responses identified (table 4 and figure 2). Disease control was achieved in the majority of patients (78.3-90.9% of the patients per cohort), and few patients experienced progressive disease as their best overall response (4.5-14.3% of the patients per cohort). Among responders, median DoR ranged from 6.9 months (95% CI: 3.5 to 7.6) in cohort F to 17.9 months (95% CI: 2.9 to 17.9) in cohort B, and was not reached in cohort A (95% CI: 3.1 to not evaluable [NE]). ORR by subgroup based on PD-L1 expression level is shown in online supplemental table 6. Higher PD-L1 expression was associated with a trend towards increased ORR in patients with non-squamous NSCLC who had not received prior systemic therapy (cohort H): ORR was 44.4% (95% CI: 13.7% to 78.8%) in the PD-L1 1-49% subgroup and 63.6% (95% CI: 30.8% to 89.1%) in the PD-L1 ≥50% subgroup.
No clear association was found between ORR and PD-L1 expression in other cohorts. Antitumor activity: PFS Among patients with PD-L1-positive NSCLC who had not received prior systemic therapy, median PFS was 11.1 months (95% CI: 5.5 to NE) in patients with non-squamous histology (cohort H) and 5.4 months (95% CI: 2.8 to 8.6) in those with squamous histology (cohort I) (figure 3A). Among the cohorts of patients who had previously been treated with systemic therapy, in those who were anti-PD-(L)1 naïve and had non-squamous histology (cohort B), median PFS was 7.0 months (95% CI: 2.7 to 11.2), while in patients with anti-PD-(L)1 resistant/refractory disease, median PFS was 4.2 months (95% CI: 2.7 to 5.8) in patients with non-squamous histology (cohort A) and 5.3 months (95% CI: 4.1 to 7.1) in those with squamous histology (cohort F). The 95% CIs for median PFS overlapped for all five cohorts. Analysis of median PFS by PD-L1 expression subgroups is shown in online supplemental table 6. Higher PD-L1 expression was associated with longer PFS in patients with non-squamous NSCLC who had not received prior systemic therapy (cohort H): PFS was 7.2 months (95% CI: 1.3 to 11.1) in the PD-L1 1-49% subgroup and 11.8 months (95% CI: 5.5 to NE) in the PD-L1 ≥50% subgroup. No clear association was found between median PFS and PD-L1 expression in other cohorts. DISCUSSION This open-label, multicenter, phase 1b study was designed to evaluate the safety, tolerability, and preliminary antitumor activity of sitravatinib and tislelizumab in solid tumors. We report findings from the five locally advanced/metastatic NSCLC cohorts. Although approximately half of the patients (51.6%) in the SAFFRON-103 NSCLC cohorts experienced a ≥Grade 3 TRAE, less than one-quarter of patients (23.0%) discontinued treatment due to a TRAE, indicating that the combination was tolerable for most patients. Most Grade 1/2 TRAEs and more than half of the ≥Grade 3 TRAEs were manageable with treatment interruption, dose modification, or active supportive treatment. The most common TRAEs included elevated liver enzymes and diarrhea, and the most common ≥Grade 3 TRAEs were hypertension and PPE syndrome. PPE syndrome has previously been reported with multitargeted TKIs; 19 few cases in the present study were ≥Grade 3 events, and only one led to sitravatinib discontinuation. Similarly, although almost half of the patients experienced increased AST and ALT TRAEs, most were Grade 1/2 and ≥Grade 3 events were rare. Only one patient discontinued treatment due to increased ALT or AST. Hepatic toxicities and diarrhea are commonly associated with both agents due to different toxicity mechanisms, and therefore the incidences of these AEs with combined therapy in the present study may reflect the compound effect of these discrete mechanisms. [19][20][21] With regard to the incidence of hypertension, sitravatinib targets VEGFR2, and hypertension is a common and dose-dependent AE of VEGF inhibitors. 22 However, prior evidence suggests that AEs associated with multitargeted TKIs can be controlled by prophylactic measures, such as use of antihypertensives and frequent emollients. 23 Reassuringly, sitravatinib-related hypertension did not lead to sitravatinib treatment discontinuation in any patients in the present study. The safety profile of sitravatinib plus tislelizumab observed here in patients with NSCLC was consistent with that observed in other cancer types.
[24][25][26] The safety profile of the combination treatment was also in line with that known for anti-PD-(L)1 and multitargeted TKI monotherapies. 19-21 27 For example, in a phase 1/1b study of sitravatinib monotherapy in patients with heavily pretreated advanced solid tumors, sitravatinib demonstrated a manageable safety profile, with hypertension the most commonly reported ≥Grade 3 TRAE (in 20.7% of the patients), consistent with the present study. 27 The safety profile of sitravatinib plus tislelizumab in our study is also consistent with that reported in the phase 2 MRTX-500 trial of sitravatinib plus nivolumab in patients with nonsquamous NSCLC who progressed on or after checkpoint inhibitor therapy, in which 66% of the patients experienced Grade 3-4 TRAEs, with hypertension again being the most common (in 22% of the patients). 28 Potential imAEs were identified from all reported TEAEs using a group of predefined MedDRA preferred terms, derived from the known imAEs of tislelizumab and other anti-PD-1 agents. This approach provided a comprehensive assessment of potential imAE incidence, but also had limitations as it did not consider the nature of the imAE or the relationship between the drug and event as assessed by investigators. Significant deviation might be expected for immune-mediated pneumonitisthe most common imAE category reported in this studyand more specifically pneumonia, because etiologically infective lung inflammation is commonly observed in patients with advanced NSCLC, regardless of the given treatment. Additionally, the incidence of potential imAEs was confounded by the use of combination treatment, as some toxicities might be attributed to both sitravatinib and tislelizumab, such as diarrhea and elevated liver enzymes. Objective responses were observed across all five NSCLC cohorts, with an ORR range of 8.7%-57.1%. The trends in median PFS and median OS across the cohorts were largely consistent with those observed for ORR. The greatest ORR was observed among patients with PD-L1-positive treatment-naïve NSCLC, the highest being in those with non-squamous histology (cohort H, 57.1%), followed by those with squamous NSCLC (cohort I, 30.4%). These treatment-naïve patients with either type of histology were also the only cohorts included in this analysis that required PD-L1-positive disease for enrollment, and this outcome is in line with previous data showing higher efficacy of PD-(L)1 blockade among patients with increasing levels of PD-L1 positivity. [1][2][3] Indeed, anti-PD-(L)1 monotherapy is now a standard firstline treatment option for patients with advanced NSCLC with high PD-L1 expression. 29 In particular, the effect of the sitravatinib and tislelizumab combination in the PD-L1-positive treatment-naïve non-squamous NSCLC cohort was encouraging in the context of data for anti-PD-1 monotherapy in similar settings. In KEYNOTE-042, a phase 3 study evaluating pembrolizumab monotherapy as first-line treatment in patients with locally advanced or metastatic squamous and non-squamous NSCLC, the ORR with pembrolizumab monotherapy was 27.3% in patients with PD-L1 tumor proportion score ≥1%. 30 There is no reported ORR specifically for patients with non-squamous histology and squamous histology in this study. The higher ORR in the present study in cohort H, though of small sample size, indicates that sitravatinib in combination with tislelizumab might bring additional benefit to this group of patients. 
Although addition of sitravatinib to tislelizumab may have superior clinical efficacy when compared with that of anti-PD-L1 monotherapy, the combination requires further evaluation in this population in a larger, randomized trial. There remains an unmet therapeutic need for patients with advanced NSCLC with anti-PD-(L)1 resistant/refractory disease, especially for those who have also already received platinum-based chemotherapy. 7 29 31 The available treatments include docetaxel, pemetrexed, or erlotinib, 29 which have historically been shown to have limited survival benefit after first-line platinum-based chemotherapy and are associated with toxicities. [31][32][33][34][35] In the cohorts of patients in the present study who had received prior systemic therapy and had resistant/refractory disease following an anti-PD-(L)1 antibody as their most recent therapy, the response rate with sitravatinib plus tislelizumab was moderate in those patients with nonsquamous histology (cohort A, ORR of 8.7%), and promising for those patients with squamous histology (cohort F, ORR of 18.2%). In context, there is very limited clinical data available on the effects of re-treatment with an anti-PD-(L)1 monoclonal antibody after failure of immune checkpoint inhibitor therapy, with most of the relevant studies being retrospective with a limited sample size; the majority of these analyses reported an ORR of 0% to 8.3% (except one study which reported an ORR of 27%), with median OS ranging from 5.8 to 7.5 months. [36][37][38][39][40][41] Previously, there has been a lack of evidence for combination therapy with multitargeted TKIs and anti-PD-(L)1 agents in patients with squamous anti-PD-(L)1 resistant/ refractory NSCLC. Among such patients in cohort F in the present study, ORR and DCR were 18.2% and 90.9%, respectively, with a median PFS of 5.3 months. Although the sample size was small, to our knowledge, these data represent the first report of the antitumor activity of an anti-PD-1 inhibitor plus a multitargeted TKI in patients Open access with anti-PD-(L)1 resistant/refractory squamous NSCLC. These preliminary findings from cohort F indicate that treatment with sitravatinib and tislelizumab may overcome anti-PD-(L)1 treatment resistance for squamous NSCLC, potentially through the immunomodulatory effects of sitravatinib, which has been observed in animal models and in patients. In murine tumors, treatment with sitravatinib led to significantly decreased tumor-associated immunosuppressive myeloid cells such as MDSC cells and M2 macrophages, and increased the number of effective CD4+ T cells and exhausted CD8+ T cells characterized by PD-1 and cytotoxic T-lymphocytes-associated protein 4 expression. 10 In addition, sitravatinib demonstrated potent antitumor activity alone and enhanced the efficacy of PD-1 blockade when combined with PD-1 blockade in anti-PD-(L)1 refractory murine tumor models. 10 Immunomodulatory effects of sitravatinib, leading to a less immunosuppressive tumor microenvironment, have previously been reported in clinical studies in patients with oral cavity cancer, when used in combination with an anti-PD-1 antibody (nivolumab). 42 With regard to the cohort of patients with nonsquamous anti-PD-(L)1 resistant/refractory disease (cohort A), although a response was observed in only 2 of 23 patients (ie, 8.7%), the DCR was 78.3%, and median PFS and median OS were 4.2 months and 10.1 months, respectively. 
Several other studies have explored responses to anti-PD-(L)1 and multitargeted TKI combination therapy in patients with non-squamous NSCLC. In the phase 2 MRTX-500 trial, the combination of sitravatinib plus the anti-PD-1 therapy nivolumab resulted in an ORR of 18% in patients with non-squamous NSCLC with disease progression on or after anti-PD-(L)1 therapy (with or without platinum-doublet chemotherapy). 28 However, this analysis of the MRTX-500 trial only included a subgroup of patients who had experienced clinical benefit with the prior anti-PD-(L)1 therapy, and then progressed 28 (ie, resistant disease per the definition used in the present trial), and thus excluded patients who had no initial response to anti-PD-(L)1 therapy (ie, refractory disease). In contrast, cohort A of the present study included patients with non-squamous disease that was resistant or refractory to prior anti-PD-(L)1 therapy. Consequently, given the differences in patient population, it is not possible to compare response rates in the present study with those in the MRTX-500 trial. In another study in non-squamous metastatic NSCLC (COSMIC-021), treatment with the combination of the multitargeted TKI cabozantinib and the anti-PD-L1 agent atezolizumab resulted in an ORR of 19% among the cohort of 81 patients who had disease progression following a prior immune checkpoint inhibitor and ≤2 prior lines of systemic therapy. 43 While the ORR in this cohort of COSMIC-021 compared favorably with that in cohort A of the present study (8.7%), the proportion of patients with progressive disease was higher (16.0% vs 8.7%), while DCR (80% vs 78%), median PFS (4.5 vs 4.2 months), and median OS (13.8 vs 10.1 months) were similar. Although the ORR with sitravatinib plus tislelizumab in the present study was moderate in this difficultto-treat patient population with advanced non-squamous NSCLC resistant or refractory to anti-PD-(L)1 therapy, we believe further evaluation in a larger sample size study is warranted given the DCR, PFS, and OS results. Regarding the potential predictive role of various thresholds of PD-L1 expression, TC PD-L1 expression ≥50% was associated with a trend towards increased ORR and median PFS in patients with anti-PD-L1-naïve, PD-L1-positive+, non-squamous NSCLC who had not previously received systemic therapy (cohort H). However, no clear association was found between the assessed PD-L1 expression thresholds and outcomes in the other cohorts. While prior studies have reported that patients with NSCLC with high PD-L1 expression benefit the most from anti-PD-(L)1 monotherapy, 1-3 our results suggest the potential for sitravatinib and tislelizumab combination therapy to provide higher ORRs across various thresholds of PD-L1 expression. However, these results should be interpreted cautiously as the sample sizes were small within each PD-L1 expression subgroup, and further study is required. The results of the present study add to the clinical evidence supporting the rationale for combining anti-PD-(L)1 and multitargeted TKI therapies and corroborate the need for continued investigation of such combinations in phase 3 trials. An ongoing phase 3 trial is assessing sitravatinib plus tislelizumab compared with docetaxel monotherapy in patients with locally advanced or metastatic NSCLC with disease progression after platinum-based chemotherapy and anti-PD-(L)1 antibody treatment ( ClinicalTrials. gov NCT04921358). 
44 Further ongoing phase 3 studies in patients with advanced/metastatic NSCLC with disease progression on/after platinum-based chemotherapy and anti-PD-(L)1 antibody treatment are evaluating other combinations of multikinase inhibitors with anti-PD-(L)1 therapies, including cabozantinib plus atezolizumab (in the CONTACT-01 trial; ClinicalTrials.gov NCT04471428), 45 lenvatinib plus pembrolizumab (in the LEAP-008 trial; NCT03976375), 45 and famitinib plus camrelizumab (ClinicalTrials.gov NCT05106335). 46 A strength of this phase 1b study is the inclusion of a broad spectrum of NSCLC cohorts. However, there were several limitations, including the inherent nature of its open-label, single-arm study design, the low number of patients per arm, and a lack of geographical/racial diversity in the enrolled patient population. CONCLUSION In this phase 1b study in patients with non-squamous or squamous locally advanced or metastatic NSCLC who were either previously treated or treatment-naïve, sitravatinib plus tislelizumab was tolerable for most patients, and the overall safety profile was consistent with the known profiles of these agents, with no new safety signals observed. Objective responses were observed across NSCLC cohorts, including in patients who were naïve to systemic treatment and naïve to anti-PD-(L)1 treatment, and in those with anti-PD-(L)1 resistant/refractory disease. These results support further investigation of sitravatinib plus tislelizumab in selected patient populations with NSCLC.
Effectiveness of feedback control and the trade-off between death by COVID-19 and costs of countermeasures We provided a framework of a mathematical epidemic modeling and a countermeasure against the novel coronavirus disease (COVID-19) under no vaccines and specific medicines. The fact that even asymptomatic cases are infectious plays an important role for disease transmission and control. Some patients recover without developing the disease; therefore, the actual number of infected persons is expected to be greater than the number of confirmed cases of infection. Our study distinguished between cases of confirmed infection and infected persons in public places to investigate the effect of isolation. An epidemic model was established by utilizing a modified extended Susceptible-Exposed-Infectious-Recovered model incorporating three types of infectious and isolated compartments, abbreviated as SEIIIHHHR. Assuming that the intensity of behavioral restrictions can be controlled and be divided into multiple levels, we proposed the feedback controller approach to implement behavioral restrictions based on the active number of hospitalized persons. Numerical simulations were conducted using different detection rates and symptomatic ratios of infected persons. We investigated the appropriate timing for changing the degree of behavioral restrictions and confirmed that early initiating behavioral restrictions is a reasonable measure to reduce the burden on the health care system. We also examined the trade-off between reducing the cumulative number of deaths by the COVID-19 and saving the cost to prevent the spread of the virus. We concluded that a bang-bang control of the behavioral restriction can reduce the socio-economic cost, while a control of the restrictions with multiple levels can reduce the cumulative number of deaths by infection. Supplementary Information The online version contains supplementary material available at 10.1007/s10729-022-09617-0. • Our study distinguished between cases of confirmed infection and infected persons in public places to investigate the effect of isolation. • A feedback controller approach based on the active number of hospitalized persons is proposed to reduce the damage from the novel coronavirus. • Early initiating behavioral restrictions is a reasonable measure to reduce the burden on the health care system. • A bang-bang control of the behavioral restriction can reduce the socio-economic cost, while a control of the restrictions with multiple levels can reduce the cumulative number of deaths by infection. Introduction The number of novel coronavirus (COVID-19) cases, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has been increasing worldwide since late 2019. The actual numbers of infected persons, isolated persons, and infection-related deaths depend on the effective reproduction number, which is defined as the average number of people infected by an infectious person by the time of his or her recovery. Reducing the reproduction number is necessary to suppress this epidemic. Previous studies have shown that this can be achieved by reducing three factors, namely: the susceptibility of uninfected persons, contact rates in the population, or the infectiousness of infected persons [15]. Regarding measures against infectious diseases undertaken by policymakers, two fundamental strategies exist: suppression and mitigation [17]. 
The suppression strategy involves reducing the number of cases to a low level and is used in diseases with high mortality rates and low infection rates. In contrast, mitigation strategies involve slowing and reducing the peak of infections and are used in diseases with low mortality rates and high infection rates. The vaccine against the COVID-19 has been developed and released by some medicine companies since the outbreak. However, in the present study, we focus on the period without the aid of vaccines or specific medicines and reducing contact rates in the population by conducting non-pharmaceutical interventions. Anti-contagion policies and measures have been discussed in some countries, and their effects, estimated, (e.g., [11,15,16,24]). The Japanese government undertook several countermeasures, such as declaring a state of emergency, implementing priority preventive measures, and urging people to avoid the "Three Cs," which refer to closed spaces, crowded places, and close-contact settings [8,34]. The data obtained from observational research has revealed the features of the SARS-CoV-2 infection [22,[42][43][44]64]. Nishiura et al. [43] reported that the serial interval of SARS-CoV-2 infection is close to or shorter than its median incubation period. This implies that transmission may occur before the onset of clinical symptoms or during asymptomatic infection. Such transmission may reduce the effectiveness of simple public measures, such as isolating symptomatic persons and tracing and quarantining their contacts [18]. In addition, it is important to estimate the exact number of infected persons in order to appropriately implement public health policies. Some studies assessed cases of unobserved infection and argued that the pandemic had been more broadly spread than the number of confirmed cases (e.g., [6,7,52,63]). Many researchers have proposed new epidemic models to describe the behavior of the novel coronavirus, extending and modifying the Susceptible-Infectious-Recovered (SIR) or Susceptible-Exposed-Infectious-Recovered (SEIR) models. The epidemic model was established to design a strategy for managing the pandemic and studying the impact of non-pharmaceutical interventions, such as lockdown [14,47], testing [48], contact tracing, and isolation [23]. Senapati et al. [54] revealed that greater intervention effort is required to control the disease outbreak within a shorter period of time. Wood et al. [65] investigated the effectiveness of increasing healthcare capacity and extending the period of isolation. Some studies distinguish between and incorporate both asymptomatic and symptomatic persons, who play an important role in the COVID-19 pandemic (e.g., [3,9,19,20,25,29,40,54,62]). Moreover, the infectiousness of asymptomatic infected cases has been reported to be lower than that of symptomatically infected cases [22,39]. Gevertz et al. [19], Kuniya and Inaba [29], and Senapati et al. [54] incorporated the differences into their epidemic models. The increase in detection and isolation of asymptomatically infected persons appears to be effective as susceptible persons are prevented from being exposed to the virus from infected persons, including those that are asymptomatic. We divided the non-pharmaceutical interventions into two parts, namely: 1) the detection and isolation of asymptomatically infected persons and 2) behavioral restrictions, such as requesting restricted business hours and physical distancing. 
Some researchers use "social" distancing; however, we use "physical" distancing to emphasize in-person contact. There is a trade-off between the negative impact on the economy and the reduction of infection-related deaths as a result of behavioral restrictions. Implementing behavioral restrictions contributes to reducing the reproduction number and preventing the spread of the virus; however, intense and prolonged restrictions decrease economic activities. To balance preventing the epidemic and maintaining economic activities is important for policymakers [9,26,30,58]. Thunström et al. [58] conducted a benefit cost analysis of physical distancing measure to control the COVID-19 outbreak. Lasaulce et al. [30] found the optimal trade-off between economic and health impact by solving the optimization problem confined to the number of Intensive Care Units patients with the SEIR model given the duration of interest for the epidemic is six months. Accordingly, we prepared the following indicators: the cumulative number of deaths by COVID-19, the socio-economic cost caused by the behavioral restrictions, the total number of isolated patients, and the total number of tests taken to detect infected persons. This paper aims to reduce the damage caused by COVID-19 and provide some insights into the pandemic by utilizing mathematical modeling, taking Tokyo, Japan as an example. We recommend a feedback controller approach to decide the degree of behavioral restrictions to be undertaken during the epidemic management period, which policymakers can adjust based on observational data. The feedback control system is expected to be a robust and effective means against uncertainty. Dias et al. [13] proposed a control law of physical distancing within the SIR model, using the number of hospitalized persons as the feedback signal. Furthermore, during an epidemic, it is necessary to determine the proper timing during which to take preventive measures as well as establish the appropriate degree of behavioral restrictions. Di Lauro et al. [12] investigated the optimal timing of a one-time intervention using three indices, as follows: impact on attack rate, peak prevalence, and timing of infections. We conducted simulations of the feedback control of the degree of behavioral restrictions and demonstrated its effect and the timing at which to reduce the indicators by adjusting it. We also investigated the effects of detecting and isolating asymptomatically infected persons. Model Our basic model is the SEIIIR model, which is a modified version of the model that Kuniya and Inaba [29] proposed as an extended SEIR model. The infection spreads through asymptomatic and symptomatic persons. We assume that some infected persons recover without developing any symptoms, while others develop them later on in the course of their infection. Hereafter, the former and latter are described as asymptomatic and presymptomatically infected persons, respectively. where S, E, I 1a , I 1b , I 2 , and R represent the number of susceptible, exposed, asymptomatic, presymptomatically infected, symptomatically infected, and recovered persons, respectively. N is the total population size, including the number of deaths. N = S(t) + E(t) + I 1a (t) + I 1b (t) + I 2 (t) + R(t) . 1 is the transmission rate of asymptomatic persons, while 2 is that of symptomatically infected persons. 1 is the recovery rate of asymptomatic persons, while 2 is that of symptomatically infected persons. is the reciprocal of the latent period. 
is the reciprocal of the difference between the incubation period and the latent period. p is the proportion of infected persons who develop symptoms. In other words, 1 − p refers to those who were infected and recovered without the onset of any symptoms. Note that those who are infected but do not have any symptoms are divided into I 1a and I 1b , but they cannot be distinguished by appearance. Figure 1(a) shows a schematic diagram of Eq. 1. The basic reproduction number ℜ 0 is as follows (see Appendix A for the derivation): (1) I 1a , I 1b , and I 2 are not isolated and have the opportunity to infect susceptible persons. Let are the reproduction numbers for the asymptomatic and symptomatic infection, respectively. Note that ℜ 0 = ℜ 01 + ℜ 02 . According to He, X. et al. [22], 44 percent of infection cases arise from the asymptomatic infection. Thus, in our context, we assume ℜ 01 = 0.44ℜ 0 and ℜ 02 = 0.56ℜ 0 . 1 and 2 are calculated using Eq. 2 and these equations. The SEIIIR model is modified and extended into the SEIIIH-HHR model by incorporating three different compartments for isolation: H 1a , H 1b , and H 2 . (3) where H 1a . H 1b , and H 2 represent the number of isolated asymptomatic, isolated presymptomatically infected, and isolated symptomatically infected persons, respectively. A schematic of Eq. 5. is shown in Fig. 1(b). The total population is constant for any time t; f is the degree of the behavioral restrictions. While f = 0 represents the absence of behavioral restrictions, f > 0 means that some policies, such as restriction of movement, are implemented. is the reciprocal of the time from onset to isolation. The parameter denotes the recovery rate. The reciprocals of 2 and h1 are the mean time periods from symptom onset to recovery and the average isolation period for those who are isolated at home or in hotels, respectively. People in compartment H 1a recover without the onset of symptoms, whereas people in H 1b develop some symptoms and are transferred to H 2 . Note that those in H 1a and H 1b cannot be distinguished in terms of appearance. The transition from compartment H 1b to H 2 means that an infected person is detected as a positive case and develops some symptoms later. The transition rate is assumed to be the same as . In this study, we assume that isolated persons without any symptoms stay at home or in hotels and do not occupy beds in hospitals or other healthcare facilities. The compartment R includes death. Note that for simplicity, the loss of immunity is ignored in this model within the management period. It is assumed that those who get sick die of infection at a rate. Let D(t) be the number of deaths by COVID-19 in those who are newly confirmed cases from time 0 to t. we calculate it as follows: where is the case fatality rate, defined as the ratio of deaths to the number of confirmed infected persons. There is a time lag between infection and recovery or death, but the difference is negligible. Feedback control of behavioral restrictions This study explored the effectiveness of the feedback control of behavioral restrictions. The degree of behavioral restrictions f = f (t) is changed based on the number of isolated symptomatically infected persons H 2 (t) and its trend of increasing or decreasing Ḣ 2 (t) . Pataro et al. [50] introduced a framework for optimizing the required levels of public health policies and referred to the importance of finely tuning the level of restriction on the population's mobility. 
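Before describing the feedback rule in detail, it may help to make the compartment model above concrete. The following Python sketch integrates the SEIIIR core numerically. Because the Greek rate symbols and the display equations were lost in extraction, the parameter names (beta1, beta2, eps, rho, gamma1, gamma2) and the two transmission-rate values are illustrative assumptions; the presymptomatic compartment is assumed to transmit at the asymptomatic rate, the factor (1 - f) stands in for behavioral restrictions, and the published Eq. (1) and the SEIIIHHHR extension with isolation compartments should be consulted for the exact formulation.

```python
# Minimal sketch of the SEIIIR core (susceptible, exposed, asymptomatic I1a,
# presymptomatic I1b, symptomatic I2, recovered) described above.  Symbol names
# and the transmission-rate values are illustrative assumptions; consult the
# paper's Eq. (1) and the SEIIIHHHR extension for the exact model and isolation terms.
import numpy as np
from scipy.integrate import solve_ivp

N      = 13_942_856      # population of Tokyo used in the paper
beta1  = 0.20            # transmission rate, (pre)asymptomatic (assumed value)
beta2  = 0.25            # transmission rate, symptomatic (assumed value)
eps    = 1 / 2.56        # 1 / latent period
rho    = 1 / 2.54        # 1 / (incubation period - latent period)
gamma1 = 0.10            # recovery rate, asymptomatic
gamma2 = 1 / 13.4        # recovery rate, symptomatic
p      = 0.7             # proportion of infections that become symptomatic
f      = 0.0             # degree of behavioral restriction (0 = no restriction)

def seiiir(t, y):
    S, E, I1a, I1b, I2, R = y
    new_infections = (1 - f) * S * (beta1 * (I1a + I1b) + beta2 * I2) / N
    dS   = -new_infections
    dE   = new_infections - eps * E
    dI1a = (1 - p) * eps * E - gamma1 * I1a
    dI1b = p * eps * E - rho * I1b
    dI2  = rho * I1b - gamma2 * I2
    dR   = gamma1 * I1a + gamma2 * I2
    return [dS, dE, dI1a, dI1b, dI2, dR]

y0  = [N - 10, 10, 0, 0, 0, 0]   # initial condition used in the paper
sol = solve_ivp(seiiir, (0, 500), y0, t_eval=np.arange(0, 501), rtol=1e-8)
print("peak symptomatic prevalence:", int(sol.y[4].max()))
```

In the paper itself, the two transmission rates are not free choices but are derived from the split ℜ01 = 0.44ℜ0 and ℜ02 = 0.56ℜ0 with ℜ0 = 2.6, via the expressions referred to as Eq. 2.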
In this study, we assume that the intensity of the intervention, such as behavioral restrictions, can be divided into, at most, four levels. Hereafter, the feedback control which has J levels of behavioral restrictions is referred to as "J-level." We define f 1 as the mean degree of behavioral restrictions under the emergency state, which was executed in Tokyo from April 7 to May 25, 2020. In our simulation, let f 1 = 0.6 from the utilization ratio of major stations in the capital area [38]. We assume that the J-level has J + 1 situations and f(t) is discretely changed: 0, f 1 ∕J , 2 × f 1 ∕J , ⋯ , (J − 1) × f 1 ∕J , and f 1 . For example, the 1-level uses only two different situations: an emergency situation ( f (t) = f 1 ) and its release ( , and f 1 . The 1-level means the "bang-bang control" on the analogy of the control theory. The feedback control with J > 1 is collectively denoted by "multilevel." Examples of dynamics of H 2 (t) and f(t) different levels of feedback control are demonstrated in the Supplementary file. To mimic the actual transition, the maximum behavioral restriction is initially implemented. In the multilevel feedback algorithm, Δf , which is the increment and decrement of the degree of behavioral restrictions, is narrowed when f(t) is changed to execute the appropriate degree, while f is constant in the algorithm of the 1-level. For example, if the 4-level is adopted, f is 0.6 at first and changes to f = 0.3 , 0.15, 0.15, and ⋯ . The transition of f with different levels of feedback control is demonstrated in the Supplementary file. Loewenthal et al. [33] argued that it is important to shorten the response time for initiating physical distancing, rather than extending the period of lockdown. We introduce T 1 and T 2 as the response and execution times, respectively. T 1 is the period from the time when H 2 (t) reaches a criterion and Ḣ 2 (t) ≠ 0 to the time when f(t) is raised or lifted. We assume that T 1 = 7 days is a valid response time for administrative services in terms of feasibility and changeability. T 2 is the period from initiating the change in f(t) to restarting the monitoring of H 2 (t) . We assume that T 2 = 14 days is a valid execution time. H c (t) refers to the capacity of healthcare facilities or the number of beds for infected persons who can receive sufficient healthcare treatment. We also introduce two thresholds G up and G down as parameters determined by policymakers, and they satisfy 0 < G up ≤ 1 and 0 < G down ≤ 1 . Decreasing G up lowers the thresholds to raise the degree of behavioral restrictions f(t) and prevents H 2 (t) from exceeding H c (t) . In contrast, increasing G down loosens the criteria to lower f(t) and shortens their duration. Hereafter, we define c r (t) as the ratio of H 2 (t) to H c (t) , and let Then the condition that the behavioral restriction changes depends on c r (t) . This c r (t) means the occupied rate of healthcare facilities at time t. If c r (t) > 1 , the capacity of healthcare facilities is overwhelmed. When c r (t) exceeds G up and Ḣ 2 (t) > 0 , the state of emergency is initiated T 1 days later, and f (t) = f 1 . The state continues T 2 days after initiation, and then f(t) is lifted if c r (t) falls below G down and is raised or lifted discretely in response to c r (t) and Ḣ 2 (t) . The detailed algorithm of feedback control is described in Appendix B. 
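Before turning to the cost indicators, a schematic implementation may help fix ideas. The sketch below encodes one plausible reading of the J-level rule just described: c_r(t) = H_2(t)/H_c(t) is monitored, a change of f(t) is scheduled T_1 days after a threshold crossing, monitoring is suspended for T_2 days after the change takes effect, and the step size is f_1/J (so J = 1 reproduces the bang-bang control). The exact state machine, including how the step is narrowed in the multilevel case, is given in Appendix B of the paper; the details below are assumptions for illustration only.

```python
# Schematic sketch of a J-level feedback controller for the restriction level f(t),
# driven by the hospital-bed occupancy ratio c_r(t) = H2(t) / Hc(t).
# This is one plausible reading of the rule described in the text; the exact
# algorithm is in the paper's Appendix B, and the details here are assumptions.

class RestrictionController:
    def __init__(self, J=4, f1=0.6, G_up=0.5, G_down=0.3, T1=7, T2=14):
        self.J, self.f1 = J, f1
        self.G_up, self.G_down = G_up, G_down
        self.T1, self.T2 = T1, T2            # response / execution times (days)
        self.f = 0.0                         # current restriction level
        self.pending = None                  # (apply_day, new_f) scheduled change
        self.hold_until = -1                 # monitoring suspended until this day

    def step(self, day, H2, H2_prev, Hc):
        """Call once per day; returns the restriction level f(t) to apply."""
        if self.pending and day >= self.pending[0]:
            self.f = self.pending[1]         # change takes effect T1 days after trigger
            self.hold_until = day + self.T2  # then monitor only after T2 more days
            self.pending = None
        if self.pending is None and day >= self.hold_until:
            c_r, rising = H2 / Hc, H2 > H2_prev
            step = self.f1 / self.J
            if c_r > self.G_up and rising and self.f < self.f1:
                self.pending = (day + self.T1, min(self.f1, self.f + step))
            elif c_r < self.G_down and not rising and self.f > 0.0:
                self.pending = (day + self.T1, max(0.0, self.f - step))
        return self.f
```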
Calculation of indicators In 2020 (fiscal year), the Tokyo prefectural government budgeted about two trillion JPY for the measure against the novel coronavirus. The budget included four purposes: 1) to prevent the spread of the virus (1,174 billion JPY), 2) to reinforce a safety net to support economic activities and civic life (990 billion JPY), 3) to balance the prevention of spreading the virus and economic activities (20 billion JPY), and 4) to reform the social structure to adapt to the epidemic (55 billion JPY) [61]. The basis for calculation is not so clear, and the use is various. Thus, we established the following five indicators which seem essentially important: the cumulative number of infected deaths by COVID-19 of during the management period D(T), the total number of people isolated at home or in hotels C 1 , those who are hospitalized C 2 , those who undertake the reverse transcription-polymerase chain reaction (RT-PCR) or antigen tests C D , and the socio-economic cost caused by the behavioral restrictions C f . C 1 is calculated as the sum of isolated persons without any symptoms during the management period. Symptomatically infected persons are hospitalized if H c (t) ≥ H 2 (t) . However, if the capacity of healthcare facilities is overwhelmed ( H c (t) < H 2 (t) ), we assume that H 2 (t) − H c (t) persons are also isolated at home or in hotels. Then they are added to C 1 . C 2 is the sum of hospitalized persons during the management period and is calculated as follows: C D is the sum of the number of people who take the tests during the management period and is calculated as follows: In reality, the rate of positive results fluctuates daily and may increase with the identification of infection clusters. For simplicity, it is assumed that = 0.05 is based on the data obtained from [59]. C f indicates the intensity of implemented behavioral restrictions and is calculated as follows: This indicator is an abstract non-dimensional measure and satisfies 0 ≤ C f ≤ 1 . C f = 0 means that the usual state is maintained and C f = 1 does that the state of emergency is executed during the management period T. is the nonlinear effect. We assume that = 1 in the manuscript and discuss cases of ≠ 1 in the Supplementary file. As two supplementary indicators, c r,max and c N are introduced to indicate the status of healthcare capacities. c r,max is the maximum ratio of the number of occupied beds to the number of available beds for healthcare treatment during the management period, and is defined as follows: If c r,max < G up , then the state of emergency is not declared and there are no behavioral restrictions within the management period. Moreover, if c r,max > 1 , then the capacity of the healthcare facilities is overwhelmed at least once during this period. c N is defined as the number of days in which c r (t) > 1 is true. Table 1 shows the list of variables, indicators and parameters. Management parameter We conduct the simulation, assuming our policy is implemented in Tokyo, Japan. Let N = 13 942 856 , which supposes the population in Tokyo, Japan, on October 1, 2019, [55]. Parameters in Eq. 5 are determined as follows. Let 1∕ 2 = 13.4 days [5] and 1∕ h1 = 10 days [35]. h2 is calculated as the ratio of those discharged from the hospital to inpatients, including death, in one day based on the data by [36], and our simulation employs h2 = 0.07. We assume that 1 ≈ h1 = 0.1 and h3 ≈ h2 = 0.07 . Let −1 = 2 , and the sensitivity of D(T) and c N to −1 is discussed in the Supplementary file. 
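Because the display formulas for the indicators did not survive extraction, the sketch below records how they can be accumulated from daily simulation output as we read the surrounding text: C_1 and C_2 as sums of isolated and hospitalized person-days, C_D as detections divided by the assumed positive rate θ = 0.05, C_f as the time-averaged restriction level normalized by f_1 and raised to the power η, and c_r,max and c_N from the occupancy ratio. These expressions are our assumptions, not the paper's verbatim equations.

```python
# Sketch of the indicator calculations described above, applied to daily series
# produced by the SEIIIHHHR simulation.  The exact published formulas were lost
# in extraction, so these expressions are assumptions based on the prose.
import numpy as np

def indicators(t, H1a, H1b, H2, Hc, f, new_confirmed,
               alpha=0.011, theta=0.05, f1=0.6, eta=1.0):
    T = t[-1] - t[0]
    D_T    = alpha * new_confirmed.sum()                # cumulative COVID-19 deaths
    excess = np.maximum(H2 - Hc, 0.0)                   # symptomatic cases beyond capacity
    C1     = np.trapz(H1a + H1b + excess, t)            # person-days isolated at home/hotel
    C2     = np.trapz(np.minimum(H2, Hc), t)            # person-days hospitalized
    C_D    = new_confirmed.sum() / theta                # tests, given positive rate theta
    C_f    = np.trapz((f / f1) ** eta, t) / T           # normalized socio-economic cost
    c_r    = H2 / Hc                                    # occupancy ratio of available beds
    return {"D(T)": D_T, "C1": C1, "C2": C2, "C_D": C_D, "C_f": C_f,
            "c_r,max": c_r.max(), "c_N": int(np.sum(c_r > 1.0))}
```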
The asymptomatic ratio 1 − p has been estimated by proposing various methods and using different data (e.g., [4,21,42,45]). The estimated values range from 0.1 to 0.5; therefore, we assume that p ranges from 0.5 to 0.9 and let p = 0.7 . The latent period [51], and the incubation period was 5.1 days [31]. Thus, let −1 = 5.1 − 2.56 = 2.54 days. We assume that the management period is 500 days ( T = 500 ) from January 1, 2020, to May 14, 2021, as the vaccination for people over 64 years of age was issued in Japan on April 12, 2021, and the vaccine doses per capita have rapidly increased since the middle of May [46]. In reality, the fatality rate depends on symptoms, age, and access to appropriate medical care [64]. However, it is assumed to be a constant in this paper. According to the data [36], the number of confirmed cases is 77853 from June 1, 2020, to May 31, 2021, while that of fatalities in the same period is 875. Thus, we obtain = 875∕77853 = 0.011 ⋯. Some studies report that the estimated value of the basic reproduction number, defined as the average number of secondary cases generated by a typical primary case in an entirely susceptible population, varies widely from country to country [32,53]. The basic reproduction number for the epidemic in Japan was also estimated (e.g., [28,56]). Kuniya [28] reported that it was ℜ 0 = 2.6 whose 95% confidence interval was 2.4 to 2.8, and therefore, we adopt ℜ 0 = 2.6 in this paper. Table 2 shows the list of parameters. Table 3 shows the number of beds available for healthcare treatment in Tokyo, Japan. The number of beds available for healthcare treatment has increased [37]. Although data on the number of beds is missing from January 1, 2021, to April 30, 2021, we assume H c (t) = 3300 during this period. Table 4 shows combinations of G up and G down which achieved the goal of the three scenarios when = 0 and p = 0.7 . The strategic planning for achieving scenario A is to initiate behavioral restrictions early and maintain them until the occupied ratio of beds available for healthcare treatment is reduced. In contrast, behavioral restrictions in scenario B are reinforced when the number of hospitalized people increases while scenario C is an intermediate strategy. Using these arrangements, we investigated the level of feedback control that is more effective in reducing the indicators referred to in the previous subsection with different values of . Finally, we explored the performance of feedback control when is governed by a uniform distribution with different p values ranging from 0.5 to 0.9. = (t) varies on a daily basis and ranges from [0.0, 0.03]. Trials were carried out 1000 times and the statistical values were obtained. The simulation starts from S(0) = N − 10 , E(0) = 10 , and I 1 (0) = I 2 (0) = H 1 (0) = H 2 (0) = H 3 (0) = R(0) = 0 . The first deceased person due to COVID-19 was confirmed on February 26, 2020, [60]. In Tokyo, 2035 people died of the infection, and the total period of the state of emergency was 147 days by May 14, 2021, [36]. Figure 2 shows D(T) and c r,max with different detection rate for f = 0 during the management period. Both D(T) and c r,max are monotonically decreasing with , and they are larger as the symptomatic rate p becomes higher. For p = 0.7 as shown in Fig. 2(a), if ≥ 0.0195 throughout the management period, D(T) will be lower than the actual data even without any behavioral restrictions. Moreover, according to Fig. 2(b), the capacity of health care capacity will be overwhelmed if ≤ 0.034 for p = 0.9 . 
This figure suggests that the detection of infected persons should be strengthened to contain the epidemic when p is high. Figure 3 demonstrates the results of simulations with p = 0.7 and different combinations of G up and G down using three different indicators: D(T), C f , and c N . Since these simulations were conducted under = 0 , the total number of those who take the test to detect infected persons, C D , is zero for any combinations. Decreasing G up and G down contribute to reducing D(T), as shown in Fig. 3(a), (d), (g), and (j). When G up and G down are high, D(T) rises especially, in the 1-level. Results When it comes to C f shown in Figure 3(b), (e), (h), and (k), the same colored clusters radiate from the origin. The figures show a combination of high G up and G down is effective in reducing C f , especially in the lower level feedback control. When G up = 0.05 , C f is large in all the levels of the feedback control. As shown in Fig. 3(c), (f), (i), and (l), a low G up reduces the risk that the capacity of health care facilities is overwhelmed. In the case of p = 0.7 , G up < 0.7 is favorable for keeping the health care system with the exception of some combinations of the 1-level. In addition, the c N of the 1-level trends to be much longer than those in the other levels when G up ≥ 0.7. Combinations of G up and G down are selected so that the indicators can be reduced. Table 4 shows the best combinations of G up and G down for each scenario when = 0 . Fig. 2. The C 1 and C D are increasing as is raised. The behaviors of C 1 and C D are similar and C D is about three times larger than C 1 . In addition, the behavior of the total number of hospitalized persons, C 2 , is also similar to that of D(T). As becomes larger, the period of behavioral restrictions is shorter and its initiation is delayed. Thus, the D(T) and C 2 rise in 0.02 ≤ ≤ 0.0295 . c N = 0 is maintained regardless of in all the levels and the capacity of health care facilities is enough for scenario A. For scenario B shown in Fig. 5, the C f is under 0.27 in all the levels, and however, the other indicators are about 10 times larger than those of scenario A. In the 1-level, the C f is the lowest and the other indicators are the largest of all the levels. The c N of the 4-level is the same as those of the 2-and 3-level and overlaps with them in Fig. 5(f). When = 0 , c N is 83 days in the 1-level and is 31 days in the other levels. This implies that many symptomatically infected persons cannot be hospitalized and are isolated at home or in hotels. The c N is roughly decreasing with increasing , and however, ≥ 0.0225 should be maintained to achieve c N = 0 during For scenario B shown in Fig. 7(d), (e), and (f), means of D(T) in the 1-and 2-levels are increasing as p becomes larger unlike scenario A. On the other hand, means of C f are increasing as p becomes larger, like scenario A. Means of C D are also decreasing with increasing p for multilevel feedback controls. However, the mean of C D in the 1-level rises at p = 0.7. Fig. 7 show the result of scenario C. Means of D(T) and C D for the 4-level are the smallest with the exception of p = 0.5 . The C f is higher in the 4-level, and however, differences of the means between the 4-level and the other levels are decreasing with increasing p. According to D(T), C f , and C D , the 4-level is relatively effective when p is high. 
Discussion We established the SEIIIHHHR model as a mathematical epidemic model of the COVID-19 and calculated indicators such as the socio-economic cost caused by the behavioral restrictions C f , the total number of those who are isolated at home or in hotels C 1 , the total number of hospitalized persons C 2 , and the total number of those who take the test to detect infected persons C D as well as the cumulative number of infected deaths D(T). We conducted numerical simulations of implementing nonpharmaceutical interventions such as detecting infected persons in public spaces and restricting people's activities. The RT-PCR testing is not only a monitoring but also an intervention measure. As a result of simulations with different detection rate , D(T) and the burden on the health care system are reduced as becomes larger. To develop a measure against the virus with uncertain symptomatic rate, we proposed a feedback control One of the simplest feedback controls is the bang-bang control (1-level) which repeats the state of emergency and the usual state. We explored a better way and suggested the multilevel feedback control in which the band of changing f is narrowed. Three different scenarios were prepared for our simulations by exploring combinations of two parameters G up and G down . We came to some conclusions from the simulations. We found out that increasing G up and G down reduces C f , whereas decreasing G up and G down does D(T). The number of days in which the capacity of health care facilities is overwhelmed c N depends on G up regardless of the number of levels for feedback control. The result of scenario A implied that early initiating and maintaining behavioral restrictions can be reasonable to decrease indicators except for C f . Furthermore, the D(T) in scenario A does not rise so much if the proportion of infected persons who develop symptoms p is high. Gevertz et al. [19] investigated the best timing of initiating and canceling physical distancing and argued that it should start early and relax slowly. Our finding follows this research. According to Figs. 4 and 5, scenario A reduced D(T), C 1 , C 2 , and C D to about one tenth of those of scenario B. On the other hand, its C f is larger by 0.08 than that of scenario B. From these two scenarios, the bang-bang control seemed to be better to reduce C f . However, it must be noted that C f is an abstract measure and the cost to raise f is assumed to be linear. The cost to increase f includes the monetary compensation for businesses damaged by the governmental interventions. A multilevel feedback control is preferable to reduce D(T), C 1 , C 2 , and C D . In scenario C, the 4-level feedback control is effective when p is high. As p becomes higher, C f is increasing and C D is decreasing. The C 1 , C 2 , and C D can be converted into money by multiplying each cost per person. Depending on their unit costs, the favorable scenario may be changed. Our analysis has several limitations. This paper assumed the distribution of population is homogeneous while that in reality is heterogeneous. We did not consider other important factors such as the time delay for aggravation of symptoms, the age group of patients, the increase of the number of suicides caused by recession, and the influence of superspreading events reported in [44]. 
The Japanese government counts the number of deceased individuals who were positive for COVID-19 reported by jurisdictions, and defines it as infected deaths by COVID-19 without specifying the cause of death. However, we calculated the number of infected deaths in those who are newly confirmed cases in the management period. We do not consider how or to what extent we can increase . The number of beds in health care facilities for infected persons with symptoms H c (t) is assumed to be the same as the actual data during the management period in our simulation, but its increase may be also effective in reducing indicators [10,65]. The timing of reinforcing or relaxing the behavioral restrictions might be more effective by using other indicators, such as the reduction in individual consumption due to the restrictions, the estimated number of unconfirmed infections, the number of severely ill persons, the number of deaths, or the positive rate of the test. Their combinations can be effective because indicators were sometimes unstable as shown in in Figs. 4, 5, and 6. In addition, we assumed a time lag of one week because immediate executing or canceling behavioral restrictions may be impossible. If we could reduce the time lag of policy change, we would manage the situation more effectively. We ignored a possibility that a successive long strong behavioral restriction causes the bankruptcy of business for which remote work cannot be substituted. The COVID-19 cases resurged in Japan from November, 2020 to January, 2021, [1,27], and the number of infected deaths also increased in Tokyo [60]. Karako et al. [27] argued that this was because people seemed accustomed to the situation of this epidemic and their level of activity was not reduced during the period. In Japan, no legal penalties are imposed for violating behavioral restrictions called for by the government. In this study, we didn't consider such people's spontaneous behavior change and assumed the degree of behavioral restrictions changes discretely and keeps constant during a certain period in the feedback control. However, people may reduce their mobility restrictions by themselves even though some governmental interventions are being implemented [41,49]. From a point of view of behavioral science, Atkinson-Clement and Pigalle [2] argued that a lack of trust towards government measures reduces compliance. The management period of the simulation is from January 1, 2020 to May 14, 2021, but the outbreak of the SARS-CoV-2 Alpha variants, which has a higher transmissibility [57], was not considered. The framework in the present study can be applied to another infectious disease against which vaccines and specific medicines are not developed in the future. A feedback controller approach is an effective way even after vaccines and specific medicines are developed because of the resurgence of infection cases caused by the loss of immunity. However, the knowledge provided by these models can only be understood in terms of the dynamical system. The structure of the model and its parameters need to be validated and improved in response to the appearance of variants which have different properties and the development of pharmaceutical interventions. Moreover, it must be stressed that if the value of statistical life is not converted to economic loss, then there is no objective optimal solution and that evaluations made during the decision-making process are arbitrary. 
The Japanese government was late to start administering the COVID-19 vaccination, but the vaccine doses per capita have been rapidly increasing since the middle of May, 2021 [46]. We will consider a better measure against the epidemic under insufficient data and cost-effectiveness of a variety of anti-contagion measures including pharmaceutical interventions such as vaccination. The basic reproduction number is derived from Eq. 1 as follows: The linearized system at the disease-free steady state for Eq. 1 is where u, v, w, and x denote the linearized forms of E, I 1a , I 1b , and I 2 , respectively. And The next generation matrix with large domain K is calculated as The basic reproduction number ℜ 0 is equivalent to the spectral radius of K. (A1) Funding No funding was received for conducting this study. Data availability Not applicable Code availability The simulation code was available in the Supplementary material. Conflicts of interest The authors have no conflicts of interest to declare that are relevant to the content of this article.
The Effect of Node Speed on Mobile Ad Hoc Network Performance : Announcement between mobile nodes is a challenging network especially via Mobile Ad hoc Network (MANET), where mobile nodes are capable of moving on a continual basis, which is changeable and challenging to analyse the performance of its routing protocols and application. In this paper, we evaluate the performance of routing protocols over MANET for Ad Hoc on demand Distance Vector (AODV), Destination-Sequenced Distance Vector (DSDV), as well as Dynamic Source Routing (DSR). We use Network Simulator NS-allinone-2.31 tool for the practical testing and evaluation. We have tested the routing protocols AODV, DSDV, and DSR. So, different performance features are investigated including, Packet loss, Packet Delivery Fraction (PDF), End-to-End Delay. The results indicate that performance of routing protocols where the number of nodes start from 5 to 100 nodes during the entire scenario for node speeds equivalents to 10m/s, besides the delivery of packet size equivalents to 512kb. Introduction In Mobile Ad Hoc Network (MANET) the mobile nodes can work as router and characterized as random changing topology and dynamic in the last rapidly changing topology when mobile node communication with another node is very important to get routing protocol with multi hop paths. The new generation for mobile should be supported high frequency and high bandwidth for mobile area coverage so as to share wireless LAN in anticipated load situation when wireless mobile hosts collection from network without any base station (BS). It is very important for the enrolled mobile node to give support to any host even to send information or packet to another destination on this network however in MANET, the host act as route and all host consist of hops [1]. Mobile nodes that changes environment in MANET can result to failure of routing algorithms which is a big challenge in ad hoc network high number of node mobility which can be affected by rapid and unpredictable nodes [2]. As we have mentioned earlier in MANET, mobile host connection together can act as routing protocol. MANET consists of variant routing protocols which are: proactive, reactive and hybrid routing protocol as shown in Figure.1. Proactive routing protocols are called "table-driven" routing protocols where a mobile node that connects with another mobile node get routing table immediately and most proactive routing protocol inherited this properties from algorithms used with wired network which have modified the wired network protocols. It uses the updated information from time to time to get information about the topology due to increasing the number of nodes or changing the location of the mobile nodes. Reactive routing protocols are called "on demand" routing protocols, because these protocol are on demand, the basic is idea to create a route to another destination, when any mobile host demand a routing path to mail information, then all nodes for the routing path will be searched. However, the rout remains the important function of reactive routing table. The reactive routing protocol in scalability have more advantage than proactive and has several kinds of routing path such as the Dynamic Source (DSR) protocol and Ad Hoc on Demand Distance vector(AODV) [3]. Hybrid routing protocols combines the properties for both proactive routing protocols and reactive routing protocol. 
Proactive routing protocols maintain routing information towards all nodes even though the MANET topology changes randomly, so that topology changes have little effect on route availability. The second category is the reactive, or on-demand, routing protocols, which build a routing path only when a node needs to forward information to a destination host; they consist of two phases, route discovery and route maintenance [4]. The last category is the hybrid protocols, which combine proactive and reactive routing and have been exercised in inter-domain operation [5]; hybrid protocols aim to be efficient, adaptive, and simple, avoiding excessive control overhead [6]. In this paper, we evaluate the performance of routing protocols over MANET for Ad Hoc on Demand Distance Vector (AODV), Destination-Sequenced Distance Vector (DSDV), and Dynamic Source Routing (DSR). In our previous work [7], we measured the performance of these routing protocols with a node speed equal to zero. In this work, we investigate the effect of node speed: the number of nodes ranges from 5 to 100, the node speed is 10 m/s, and the delivered packet size is 512 kB instead of 256 kB. Moreover, the simulation area used in this work is 1 km × 1 km instead of the 500 m × 500 m used in our previous work, and the maximum number of packets in the transmission queue is 10,000, based on the UDP protocol. The remainder of this paper is organized as follows. In the next section, we describe the DSR, DSDV, and AODV protocols, which are used in our evaluation. In Section 3, we introduce our experimental work. Finally, in Section 4 we conclude the paper. Dynamic Source Routing Protocol (DSR) The DSR routing protocol is a reactive routing protocol designed for multi-hop communication among mobile nodes in wireless ad hoc networks. DSR permits a collection of mobile nodes to connect freely, without any infrastructure or central administration, in a self-organizing manner [8]. The two mechanisms at the core of DSR are route discovery and route maintenance: a mobile node discovers and maintains a source route to any destination host, and when it receives routing information from another node it updates its route cache and learns about the neighbours of its neighbours within the network topology. Because DSR uses source routing, the routes it constructs remain loop-free regardless of when a node's routing information was last updated. The Destination-Sequenced Distance Vector routing protocol (DSDV) The DSDV routing protocol is a proactive routing protocol. It was among the first protocols that let groups of mobile nodes communicate without infrastructure such as a base station while still exchanging information among themselves. When a mobile node is out of the coverage of such infrastructure and wants to exchange data, control messages generate a path between nodes so that information can be exchanged, thereby creating ad hoc routing behaviour.
Ad Hoc on demand Distance Vector (AODV)

AODV was designed for ad hoc networks to provide basic communication between nodes with minimal control overhead and minimal latency. In AODV there is no maintained route to every node in the network topology; a route is established only when a node requests it, and it is kept for a limited time depending on how long it is needed. When a link breaks, the node attempts to discover a new route. Loop freedom is achieved through sequence numbers, which are incremented from time to time to reflect changes in the network topology. AODV supports both unicast and multicast communication, and the route information obtained during route discovery can be reused to streamline both kinds of traffic. AODV assumes symmetric links between neighbouring nodes and is not intended for mixed wired and wireless media. The AODV routing table contains the IP address, the next hop, the destination, and the sequence number, and it maintains the list of all known nodes; this is the structure of the AODV routing protocol for ad hoc networking [9,10].

Figure 2 shows the packet loss. When the number of nodes is 20, AODV records better performance than DSR: AODV reached 0.1958 and DSR 0.8878, while DSDV was 11.6991. As the number of nodes increases, DSDV improves compared with its behaviour at small node counts, but it is still the worst routing protocol. AODV also remains better than DSR as the number of nodes grows; for instance, at 90 nodes DSR was 1.3718 and AODV was 0.1433, whereas DSDV reached 33.9013. AODV and DSR forward packets along routes with better metrics and do not need to advertise routing information when the network topology has not changed, which explains the large gap with DSDV.

Packet Delivery Fraction (PDF)

Figure 3 shows the packet delivery fraction. When the number of nodes is 5, AODV performs better than DSR: AODV reached 392 bytes, DSR 330 bytes, and DSDV 320. As the number of nodes increases, DSDV again improves compared with small node counts but remains the worst protocol. AODV also performs better than DSR as the number of nodes increases; at 90 nodes, however, DSR obtained better performance than both AODV and DSDV, and DSDV had the worst performance. This is because AODV and DSR construct their routing information on demand, when it is needed, and are therefore more adaptive.

End-to-End Delay

In Figure 4 we observe that when the number of nodes is 30, the recorded delays are 5.54842 ms for DSDV, 7.3481 ms for DSR, and 7.45089 ms for AODV. Based on these values, DSDV performs better than DSR, while AODV shows the worst performance. This is because DSDV is a proactive routing protocol and already holds information about every destination.
As the number of nodes increases, DSR still obtains roughly the same delay, while DSDV becomes better than before because it avoids extra traffic by using incremental updates instead of full-dump updates; AODV remains the worst of the three protocols. For instance, at 100 nodes AODV reached a maximum of 107.479 ms, because no route was available at the moment the nodes requested one.

Conclusion

In this paper we have carried out a simulation study of different routing protocols over MANET. We focused on the DSR, AODV, and DSDV routing protocols and investigated their performance by varying the number of nodes from 5 to 100 across all scenarios, with a node speed of 10 m/s and a packet size of 512 KB. We used the NS-allinone-2.31 tool to simulate and examine the routing protocol variants, and we assessed their performance over MANET by measuring packet loss, PDF, and end-to-end delay. We found that AODV gave the best performance for PDF and packet loss, whereas for end-to-end delay the best performance came from DSDV, with AODV also performing well. For future research work, it would be interesting to evaluate more protocols that have been used in ad hoc networks.
2019-01-23T16:04:46.716Z
2016-03-24T00:00:00.000
{ "year": 2016, "sha1": "7e14c1d14150015fe02d42e139d0f50ed6c4e3f8", "oa_license": "CCBYSA", "oa_url": "https://dergipark.org.tr/tr/download/article-file/259062", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "78d7cab56a15ae9c97922594d0915055abd0174a", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
218685999
pes2o/s2orc
v3-fos-license
Clinical Significances of Positive Postoperative Serum CEA and Post-preoperative CEA Increment in Stage II and III Colorectal Cancer: A Multicenter Retrospective Study Background: Carcinoembryonic antigen (CEA) is the most common serum tumor marker in colorectal cancer (CRC). Nevertheless, few previous studies demonstrated the impacts of postoperative CEA and post-preoperative CEA increment on prognosis of CRC. Methods: Patients with stage II and III CRC were included from January 2009 to December 2015. All clinical and follow-up data were collected. Patients were divided into four different groups according to the levels of postoperative serum CEA and post-preoperative CEA trends. Chi-square test was used to analyze the relationship between clinical variables and categorized postoperative CEA and CEA increment. Cox proportional hazard regression was used for univariate and multivariable analyses. The log-rank test was performed to compare PFS and OS among groups. Results: Patients, 1,008, who underwent radical surgery, were enrolled. Our results showed that positive postoperative CEA and CEA increment were related to clinical stage, T stage, N stage, tumor differentiation, and lymphatic invasion (p < 0.05). Univariate and multivariable analysis results suggested that positive postoperative CEA and CEA increment were independent prognostic factors for PFS (HR = 3.149, 95% CI, 2.426–4.088, p = 0.000 for postoperative CEA; HR = 2.708, 95% CI, 2.106–3.482, p = 0.000 for CEA increment) and OS (HR = 3.414, 95% CI, 2.549–4.574, p = 0.000 for postoperative CEA; HR = 2.373, 95% CI, 1.783–3.157, p = 0.000 for CEA increment). The survival analyses revealed positive postoperative CEA, and CEA increment predicted worse prognosis. Furthermore, our results indicated that the 3- and 5-year PFS rates were 86.6 and 78.4% in group A, but decreased to 25.3 and 7.2% in group D (p < 0.001). Similarly, the 3- and 5-year OS rates for group A were 92.5 and 83.9%, much higher than group D (p < 0.001). In other words, patients with both postoperative CEA elevation and CEA increment had the worst prognosis. Conclusions: Positive postoperative CEA and CEA increment were independent prognostic factors for stage II and III CRC. Additionally, postoperative CEA and CEA increment had significant impacts on PFS and OS of CRC. INTRODUCTION Colorectal cancer (CRC) is one of the most commonly diagnosed cancers worldwide with high morbidity and mortality rates (1). In recent years, although the treatment of colorectal cancer has been greatly developed, 5-year survival rate is only 67% for patients with rectal cancer, slightly higher than 64% with colon cancer (2). In China, incidence rate of CRC has been increasing year by year from 2000 to 2011 due to westernization of lifestyle (3). The carcinoembryonic antigen (CEA) is mainly secreted by solid tumors. In CRC, CEA has always been recommended as a reliable tumor marker by the National Comprehensive Cancer Network (NCCN) and the American Society of Clinical Oncology (4). CEA plays an important role in diagnosis, postoperative recurrence, and metastasis, and the effect of chemotherapy of CRC (5)(6)(7). High levels of preoperative serum CEA always indicate worse prognosis and shorter progressionfree survival time (PFS) in CRC (8,9). Besides, postoperative CEA level is an independent prognosis index for CRC and its positivity reflects the probability of liver metastasis after surgery (10,11). 
Increased postoperative CEA level at short intervals indicates the possibility of CRC recurrence and suggests that patients should be followed up more frequently (12). For metastasis CRC (mCRC), baseline level of CEA predicts the efficacy of some chemotherapy drugs and provides different information of overall survival time (13,14). Several studies have also elucidated the effect of post/preoperative CEA ratio on the treatment of CRC, and post/preoperative CEA ratio <1 reveals a better prognosis than CEA ratio >1 for CRC (15,16). Generally speaking, the value of CEA in prognosis of CRC has well been demonstrated. However, few studies have systematically analyzed the significances of postoperative CEA level and post-preoperative CEA increment for the prognosis of stage II and III CRC after radical resection. Therefore, we conducted this multicenter retrospective clinical trial to analyze the importance of postoperative CEA and post-preoperative CEA increment in survival of CRC patients. Data Collection This study was a multicenter retrospective clinical study, and it was registered in the Chinese Clinical Trial Registry (Approval No. ChiCTR1800016906). Our study was also approved by the ethics committee of Shanghai Jiao Tong University Affiliated Sixth People's Hospital (Approval No. 2018-KY-031K). All of the patients are from Shanghai Jiao Tong University Affiliated Sixth People's Hospital, the Second Affiliated Hospital of Harbin Medical University, and the Sixth Affiliated Hospital of Sun Yat-sen University. All patients were pathologically diagnosed as stage II and III CRC from January 2009 to December 2015. Written informed consents were obtained from all patients in this study. The criteria for exclusion were as follows: (1) without available postoperative CEA value within 12 weeks after surgery; (2) loss to follow-up; (3) unsuitable pathological type; (4) without available preoperative CEA value. The clinical characteristics of the patients, including gender, age, CEA value, and pathological reports, were all acquired from electronic patients' records and the departmental database. Pathological reports are detailed description of the surgical excised colorectal tissues, including T stage, N stage, pathological type, tumor differentiation, lymphatic invasion, and vascular invasion. Pathological stage was defined according to the 8th AJCC criterion for CRC. T stage meant the depth of primary tumor infiltration and N stage represented the number and extent of lymph node metastasis. Preoperative CEA value was tested within 1 week before surgery, and postoperative CEA value was gained within 12 weeks after surgery but before medical treatment. The value of CEA > 5 ng/ml is defined as positive (17). Patients were grouped as follows: (Group A) normal postoperative CEA (≤5 ng/ml) and without post-preoperative CEA increment; (Group B) normal postoperative CEA and with CEA increment; (Group C) positive postoperative CEA (>5 ng/ml) and without CEA increment; (Group D) positive postoperative CEA and with CEA increment. All patients were followed up according to current guidelines, including serum tumor markers, colonoscopy, chest X-ray, and CT (or MRI). Survival status and recurrence/metastasis status were updated by telephone, email, and medical history. Progression-free survival (PFS) was defined as the time from surgery to cancer metastasis or recurrence. Overall survival (OS) was defined as the time from surgery to death. 
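The four-group definition above maps onto two binary flags: whether postoperative CEA exceeds the 5 ng/ml cutoff, and whether there is a post-preoperative increment. The sketch below illustrates that assignment rule only; the field names are hypothetical, and treating "increment" as postoperative CEA strictly greater than preoperative CEA is an assumption, not the authors' code.

```python
# Illustrative sketch of the grouping rule described above (not the authors' code).
# Postoperative CEA > 5 ng/ml counts as positive; a post-preoperative increment
# is assumed to mean that postoperative CEA exceeds preoperative CEA.
CEA_CUTOFF = 5.0  # ng/ml

def cea_group(pre_cea: float, post_cea: float) -> str:
    positive_post = post_cea > CEA_CUTOFF
    increment = post_cea > pre_cea
    if not positive_post and not increment:
        return "A"   # normal postoperative CEA, no increment
    if not positive_post and increment:
        return "B"   # normal postoperative CEA, with increment
    if positive_post and not increment:
        return "C"   # positive postoperative CEA, no increment
    return "D"       # positive postoperative CEA, with increment

# Example: preoperative 8.2 ng/ml falling to 3.1 ng/ml after surgery -> group A;
# preoperative 2.0 ng/ml rising to 6.5 ng/ml -> group D.
assert cea_group(8.2, 3.1) == "A"
assert cea_group(2.0, 6.5) == "D"
```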
Statistical Analysis

All data in this study were analyzed with IBM SPSS Statistics 22.0 and GraphPad Prism 6. Categorical variables were compared using the Pearson chi-square test. Three- and five-year survival rates were assessed by the Kaplan-Meier method, and the log-rank test was used to compare the differences in survival rates among groups. Cox regression was used to test the effect of the various indicators on the prognosis of CRC and to estimate hazard ratios (HRs) and 95% CIs. P-values were two-sided, with differences considered statistically significant at P < 0.05. (A minimal code sketch of this survival-analysis workflow is given at the end of this article.)

Patients' Information

A total of 1,832 patients with stage II and III CRC were enrolled in our study. According to our inclusion and exclusion criteria, 1,008 patients with complete clinical and follow-up data were eventually included and, based on our grouping rules, each patient was assigned to one of the four groups. Our study included 605 males (60.0%) and 403 females (40.0%). The age of 671 patients (66.6%) was over 60. A total of 573 patients (56.8%) were diagnosed with stage II CRC, slightly more than with stage III CRC. Specifically, 546 patients (54.1%) had T3 stage and 423 patients (42.0%) had T4 stage, while only 39 patients had T1 or T2 (3.9%). Lymph node metastasis was present in 435 patients, including 280 cases (27.8%) with N1 and 155 cases (15.4%) with N2. The pathological type of 950 patients (94.2%) was adenocarcinoma. Of the CRC tissues, 794 (78.8%) were well or moderately differentiated, while 214 (21.2%) were poorly differentiated. According to our data, lymphatic invasion was observed in 298 patients (29.6%) and vascular invasion in 170 patients (16.9%). Only 186 patients (18.5%) had an elevated postoperative CEA value even after radical surgery. A post-preoperative CEA increment was found in 256 patients (25.4%). The median follow-up time was 46 months. According to our follow-up data, 292 patients (29.0%) had recurrence or metastasis, and 224 patients (23.0%) died (Table 1).

Positive Postoperative CEA and Post-preoperative CEA Increment Were Associated With Several Clinicopathologic Characteristics

To investigate the correlation of the postoperative CEA level and the CEA increment with clinical and pathological parameters, we performed chi-square tests. Our results showed that positive postoperative CEA was related to clinical stage, T stage, N stage, tumor differentiation, and lymphatic and vascular invasion (all p < 0.05) (Table 2). In contrast, there was no significant difference by gender, age, or pathological type (Table 2). Our data also showed that the post-preoperative CEA increment differed significantly by clinical stage, T stage, N stage, tumor differentiation, and lymphatic invasion (all p < 0.05) (Table 2), whereas there was no significant difference by gender, age, pathological type, or vascular invasion (Table 2). In addition, high levels of postoperative CEA and a CEA increment suggested recurrence or metastasis and a poor prognosis of CRC (p < 0.05).

PFS and OS Differences Among Four Groups

As described in the Experimental section, we divided the patients into four groups (A, B, C, and D). As shown in Figure 3, group A had the best prognosis, while group D had the worst. The 3- and 5-year PFS rates decreased from 86.6 and 78.4% in group A to 25.3 and 7.2% in group D (p < 0.001) (Figure 3A).
Consistent with the trend in PFS, the 3- and 5-year OS rates for group A were 92.5 and 83.9%, much higher than the rates in group D (only 38.7 and 20.0%, p < 0.001) (Figure 3B). These results showed that patients in group D, with positive postoperative CEA and a CEA increment, had the worst prognosis, while patients in group A, with normal postoperative CEA and without a CEA increment, had the highest PFS and OS rates. Groups B and C had similar PFS, but the OS of group C was worse. This suggests that an elevated postoperative CEA may have a more important effect on the prognosis of stage II and III CRC patients.

Subgroup Analyses to Test the Effect of CEA Levels on Prognosis of CRC

To investigate the effect of perioperative abnormal CEA on the prognosis of CRC, we performed subgroup analyses. The subgroups were as follows: (Subgroup A) patients with positive preoperative CEA but normal postoperative CEA after radical surgery, n = 252; (Subgroup B) patients with normal preoperative CEA but positive postoperative CEA after radical surgery, n = 41; (Subgroup C) patients with positive preoperative CEA and positive postoperative CEA after radical surgery, n = 145. Kaplan-Meier curves illustrated that sub-B and sub-C had worse PFS and OS than sub-A, and sub-B was the worst (p < 0.001) (Figure 4). Comparing sub-A and sub-C demonstrated that patients with positive postoperative CEA had a poor prognosis even after radical resection. Furthermore, the survival of sub-B was worse than that of sub-C, indicating that patients with normal preoperative CEA but positive postoperative CEA had the worst prognosis among these subgroups.

DISCUSSION

In our study, 1,008 patients with stage II and III CRC were enrolled. Our results suggested that positive postoperative CEA was associated with clinical stage, T stage, N stage, tumor differentiation, and lymphatic and vascular invasion, while the post-preoperative CEA increment was related to clinical stage, T stage, N stage, tumor differentiation, and lymphatic invasion. Our subgroup analyses revealed a high hazard of recurrence and poor survival in patients with perioperative CEA elevation, consistent with a recent study (18). Similar to our study, the early postoperative CEA percent drop may be a helpful factor for the prognosis of colon cancer, but the influence of preoperative and postoperative CEA trends on survival has not been well demonstrated (19). Huang et al. confirm that the CEA reduction ratio is a prognostic factor in rectal cancer patients who receive chemoradiotherapy and radical surgery (20). Serum CEA alone is not very sensitive for detecting CRC recurrence, even when the threshold is low (21). Nevertheless, CT and CEA testing each provide a reliable rate of recurrence detection with minimal follow-up after surgical treatment, and combining CEA and CT shows no advantage (22). Another study suggests a postoperative CEA limit of 15 ng/ml, above which there is a high chance of recurrence after resection of colorectal liver metastasis (11). A similar conclusion from a retrospective cohort analysis shows that patients with normal postoperative CEA have a 14.9% higher 3-year RFS than patients with elevated postoperative CEA (17). Serum CEA is also correlated with the RAS-mutant allele fraction (23). According to current guidelines, patients undergoing radical surgery for stage II and III CRC need serum CEA tests every 3-6 months (24)(25)(26)(27). However, these guidelines do not give individualized follow-up and adjuvant therapy advice.
Therefore, in clinical practice, should we refer to the levels of perioperative serum CEA when we give patients treatment recommendations? In addition, we also found that the number of preoperative elevated tumor markers also had important impact on the prognosis of CRC, including CEA, CA19-9, CA242, and CA125 (28). We believe that serum tumor markers have great value in CRC, but these markers have not been paid enough attention in the clinic. Thus, our study may provide some references for clinical workers in this field. Admittedly, there are some shortcomings in our research. First, this is a retrospective study, while prospective studies demonstrating the significance of CEA in CRC are more convincing. Second, our study included only one indicator, CEA. Other serum tumor markers were not analyzed. Finally, patients in our study are all Chinese. In general, the treatment of stage II and III CRC after radical surgery still has some controversial problems (29,30). Our study demonstrates the effects of postoperative CEA level and CEA increment on the prognosis of stage II and III CRC. Thus, our results will provide useful information for clinical references in the follow-up treatment of CRC patients. CONCLUSION Positive postoperative CEA and CEA increment are independent prognostic factors for stage II and III CRC. Patients with elevated postoperative CEA level and positive CEA increment have the worst PFS and OS compared to other groups. Our results may be helpful to the adjuvant treatment of stage II and III CRC after radical surgery. DATA AVAILABILITY STATEMENT The datasets generated for this study are available on request to the corresponding author. ETHICS STATEMENT The studies involving human participants were reviewed and approved by The ethics committee of Shanghai Jiao Tong University Affiliated Sixth People's Hospital. The patients/participants provided their written informed consent to participate in this study.
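For readers who want to reproduce the type of analysis described in the Statistical Analysis section (Kaplan-Meier curves, a log-rank comparison between CEA groups, and Cox regression), a minimal sketch using the Python lifelines package is shown below. The data frame, file name, and column names are hypothetical placeholders; the actual patient data are available from the corresponding author on request, as stated above.

```python
# Minimal sketch (not the authors' code) of the survival workflow described in
# the Statistical Analysis section: Kaplan-Meier estimation, a log-rank test
# between two CEA groups, and a Cox proportional-hazards model.
# Column names ("pfs_months", "progressed", "cea_group", ...) are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("cea_cohort.csv")          # hypothetical file, one row per patient

# Kaplan-Meier estimate for group A
kmf = KaplanMeierFitter()
group_a = df[df["cea_group"] == "A"]
kmf.fit(group_a["pfs_months"], event_observed=group_a["progressed"], label="Group A")
print(kmf.survival_function_.tail())

# Log-rank test: group A versus group D
group_d = df[df["cea_group"] == "D"]
result = logrank_test(group_a["pfs_months"], group_d["pfs_months"],
                      event_observed_A=group_a["progressed"],
                      event_observed_B=group_d["progressed"])
print("log-rank p-value:", result.p_value)

# Cox regression with postoperative CEA positivity and CEA increment as covariates
cph = CoxPHFitter()
cph.fit(df[["pfs_months", "progressed", "post_cea_positive", "cea_increment"]],
        duration_col="pfs_months", event_col="progressed")
cph.print_summary()   # hazard ratios (exp(coef)) and 95% CIs
```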
2020-05-20T13:08:10.944Z
2020-05-20T00:00:00.000
{ "year": 2020, "sha1": "27949f53e012666e231c9b1371c91129722e2d4b", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2020.00671/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "27949f53e012666e231c9b1371c91129722e2d4b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257119625
pes2o/s2orc
v3-fos-license
Impact of Cavitation Jet on the Structural, Emulsifying Features and Interfacial Features of Soluble Soybean Protein Oxidized Aggregates A cavitation jet can enhance food proteins’ functionalities by regulating solvable oxidized soybean protein accumulates (SOSPI). We investigated the impacts of cavitation jet treatment on the emulsifying, structural and interfacial features of soluble soybean protein oxidation accumulate. Findings have shown that radicals in an oxidative environment not only induce proteins to form insoluble oxidative aggregates with a large particle size and high molecular weight, but also attack the protein side chains to form soluble small molecular weight protein aggregates. Emulsion prepared by SOSPI shows worse interface properties than OSPI. A cavitation jet at a short treating time (<6 min) has been shown to break the core aggregation skeleton of soybean protein insoluble aggregates, and insoluble aggregates into soluble aggregates resulting in an increase of emulsion activity (EAI) and constancy (ESI), and a decrease of interfacial tension from 25.15 to 20.19 mN/m. However, a cavitation jet at a long treating time (>6 min) would cause soluble oxidized aggregates to reaggregate through an anti-parallel intermolecular β-sheet, which resulted in lower EAI and ESI, and a higher interfacial tension (22.44 mN/m). The results showed that suitable cavitation jet treatment could adjust the structural and functional features of SOSPI by targeted regulated transformation between the soluble and insoluble components. Introduction Soybean is an important crop with seeds that contain abundant protein of approximately 40% [1]. The soybean protein has different physiological impacts including dropping blood lipids, blood pressure, and inhibiting cardiovascular and cerebrovascular disease indirectly [2]. Therefore, soybean proteins have been extensively exploited in food and feed plants due to their superior nutritious rate, high functional features, and low price [3]. Studies have revealed that in 2019, global soy production reached 366.67 million tons [4], which caused huge storage and transportation pressures. In addition, soy protein is vulnerable to oxidative attack during storage and transportation. The parties within the molecule re-syndicate to create oligomers following disclosure, owing to the oxidative denaturation of soy protein, which further forms macromolecular aggregates due to hydrophobicity and electrostatic attraction [5]. It is challenging to use oxidized soy protein in food manufacturing, because the formation of insoluble aggregates in protein aggregates is a significant factor in the loss of some biological and functional features of proteins, for instance, protein solubility, emulsifying effects, and emulsifying stability [6]. The physical control of oxidized protein aggregates is currently the subject of extensive research. The degree of whey protein isolate (WPI) aggregation's cross-linking could be controlled, and its gel and emulsifying characteristics could be improved, expanding the use of WPI in food processing, according to [7]. A decrease in the aggregate concentration of According to a prior work, the soluble oxidized accumulates of soybean protein (SOSPI) were produced [14]. To generate a 10 mg mL −1 soybean protein mix, the soybean protein was liquified in a phosphate buffer solution (PBS) with a phosphate dose of 0.01 mol L −1 , a pH of 7.2, and 0.5 mg mL −1 of NaN 3 . 
The final concentration of AAPH was increased to 0.5 mmol L −1 by adding AAPH. The soybean protein oxidized accumulate mix with a 12 h oxidation period was prepared after oxidating treatment for 12 h at 37 • C and became murky. Dialysis was performed at 4 • C for 72 h using a 14,000 kDa dialysis bag, and the deionized water (dH 2 O) was replaced every 6 h. The samples were gathered and given the designation OSPI after spray drying. The soybean protein oxidized accumulate mix underwent a 12 h oxidation process before standing centrifugated at 4 • C for 20 min at a rapidity of 9000 rpm to discrete the soluble components from the insoluble components. The samples were gathered and given the name SOSPI after spray drying. Formulation of Samples for Cavitation Jet Treatment The 2 L OSPI (25 • C, 10 mg mL −1 ) was poured into the SL-2 cavitation jet machine (Zhongsen Huijia Technology Development Co., Ltd., Beijing, China) to treat at 80 MPa for six diverse times: 2, 4, 6, 8, 10, and 15 min. The SOSPI was liquified in 0.01 mol/L PBS (pH 7.2, comprising 0.5 mg mL −1 NaN 3 ). After treatment, the protein was immediately chilled in an ice bath for 15 min, and trailed by 20 h of centrifugal treatment at 4 • C and 9000 rpm to remove the insoluble parts. Spray drying was used to create all sample solutions, which were given the names SCOSPI-2 min, SCOSPI-4 min, SCOSPI-6 min, SCOSPI-8 min, SCOSPI-10 min, and SCOSPI-15 min. Three groups of parallel samples were taken. Measuring the Particle Size Dispersal Based on the technique labeled by Ma et al. (2019), the particle size dispersal (PSD) was estimated via a laser scattering Mastersizer S (Malvern, UK) and a 300 inverse Fourier lens with the relief of a He-Ne laser λ = 633 [15]. The protein's refractive index was 1.33 when the amount was made at room temperature (RT, 25 ± 2 • C). Before measurement, the samples were diluted with dH 2 O to 50 mg/mL, and the particle sizes ranged between 0-10,000 µm. Measurement of the Molecular Weight Circulation Following Ma et al. (2019) [15], examples of soybean protein were examined using an HPLC unit (Milford, MA, USA). Briefly, the molecular weight of the proteins at 280 nm was determined via a Waters 2175 UV finder (Milford, MA, USA). Measurement of the Fourier Transform Infrared Spectroscopy (FTIR) Spectroscopy A Bruker Vertex 70 was used to analyze the materials using Fourier transform infrared (FTIR) (Bruker Optics GmbH, Ettlingen, Germany). At 0.5 cm −1 tenacity and RT (25 ± 2 • C), a total of 64 scans were found between 4000 and 400 cm −1 . The secondary structure was determined using the FTIR spectra's secondary-derivation and deconvolution processes, and it was based on the amide I band (1600-1700 cm −1 ). According to Tang et al. (2009), the method involved the secondary structure of the proteins being examined using Peakfit Ver., 4.12 software, and the algorithm utilized was Gaussian peak fitting [16]. Measuring of the Fluorescence Emission Spectra According to the technique used by Jiang et al. (2014), the fluorescence emission spectra of the materials were found via a Hitachi F-7000 fluorescence spectrophotometer (Hitachi Inc., Tokyo, Japan) [17]. The soybean protein trials were thinned in 0.01 mol L −1 PBS to a protein dose of 0.2 mg mL −1 to produce emission spectra at an excitation wavelength of 295 nm and from 300 to 400 nm. By employing a fixed 5 nm for both the emission and excitation in triplicate, the bandwidths were attained. Measurement of the Sulfhydryl Content According to the Wu et al. 
(2019) approach, the amounts of disulfide bonds and free sulfhydryl (SH) assemblies were measured [18]. DTNP was used in a variation of Ellman's approach to ascertain the SH cluster insides in the trial. The molar extinction constant (13,600 M −1 cm −1 ) was utilized to represent the SH contents as a nmol mg −1 protein. Measuring of the Transmission Electron Microscopy (TEM) TEM was dedicated by utilizing a previously described technique [19]. After being diluted 350 times in dH2O, the sample was dispensed in 30 µL droplets and applied on a carbon net (200 mesh). The surplus was wiped away using permeable paper after 120 s. The net was air-dehydrated on sieve paper after the samples were dyed for 3 min with a 2% uranyl acetate solution. Benefitting a TEM-JEM-1230 (JEOL, Tokyo, Japan) with a hastening voltage of 80 kV, the morphology of the sample was examined. Measuring of the Emulsifying Activity Index (EAI) and Emulsion Solidity Index (ESI) The Kevin et al. (1978) approach was used to evaluate the EAI and ESI [20]. A highrapidity homogenizer (T-25 homogenizer, IKA, Staufen, Germany) was used to combine a 15 mL sample of a 0.1% (w/v) protein mix with 5 mL of maize oil at 7200× g for 10 min to create an emulsion. The emulsion was then detached from the lowest of the centrifuge tubes and normalized for 0 and 30 min before being diluted 100 times with 5 mL 0.1% sodium dodecyl sulfate. A spectrophotometer was used to test the absorbance at 500 nanometers (Beckman DU 500, Fullerton, CA, USA). The EAI and ESI were stated as: where A 0 is the absorbance at 0 min of the thinned emulsion, DF is the dilution aspect (×100), c is the model dose (g mL −1 ), φ is the pictorial path, θ is the portion of the oil (0.25), and A 30 is the absorbance after 30 min. Measurement of the Confocal Laser Scanning Microscope (CLSM) The Leica TCS SP2 CLSM was used to study the microstructure of emulsions. To create an emulsion, 15 mL of a 0.1% (w/v) protein mix was normalized with 5 mL of maize oil at 7200× g for 30 min. A 1 mL of emulsion was added to the dye (40 µL), which included 0.02% Nile red dye and 0.1% Nile blue dye. After that, a coverslip was put on top of the colored emulsion in the middle of the slide. To prevent the water from evaporating, silicone oil was sprayed to the superiority of the coverslip. The emphasis plane was originally changed following an inspection with a 100× impartial lens, while the slide was mounted on a laser confocal microscope phase. Pre-examining was performed with Ar ion at 488 nm and a He/Ne ion laser at 633 nm. A fluorescence figure was composed with a visualizing intensity of 1024 × 1024. Measuring of the Quantity of Adsorbed Proteins at Interface (AP%) According to Liang and Tang, the amount of adsorbed proteins at the interface (AP%) of these emulsion samples was calculated [21]. A 10,000 g centrifuge was used to spin each new emulsion (1 mL) for 45 min at RT. A cream coat (or concerted oil droplets) at the upper of the tube and the aqueous stage of the emulsion at the bottom were visible after centrifugation. A 0.22 µm filter was utilized to sieve the supernatant after the cream layer was delicately detached using a syringe (Millipore Corp.). The Lowry technique was utilized to estimate the filtrate's protein content, with a BSA serving as the reference. To estimate the protein intensity (C s ) in the upper phase, the initial protein mix was likewise centrifuged under identical circumstances. 
The AP (%) was expressed as: where C s is the content of preliminary protein solution in the supernatant (mg), C f is the content of protein in filtrate after centrifugation (mg), and C 0 is the preliminary protein intensity of the protein mixes concerned for the emulsion formulation (mg). Measurement of the Interfacial Tension Various materials' surface tension was estimated via an automated surface tensiometer (DCAT21, Data Physics Instruments GmbH, Filderstadt, Germany). A total of 20 mL of the sample mix was then put into a 25 mL cylinder after the protein model had first been dissolved in dH 2 O (1%, m/v). The apparatus's measuring variety was always between 1 and 100 mN m −1 , with a SD that never went beyond 0.03 mN m −1 . Measurement of the Viscoelastic Properties The Sun et al. (2012) approach is used to assess the viscoelastic characteristics of emulsions [22]. An RST-CPS rheometer was used to measure the sample emulsions' rheological characteristics (Brookfield, Middleboro, MA, USA). At a temperature of 40 • C, the samples were sandwiched between two parallel plates with 1 mm space among them. A strain examining the analysis performed at an incidence of 1 Hz was used to identify the linear viscoelastic area of each sample. Each protein sample's elastic and storage moduli were determined in the linear viscoelastic area. Measurement of the Apparent Viscosity Rendering to the technique delineated by Swa et al. (2020), rheological tests were carried out via an AR 1500 regulated stress rheometer (TA, West Sussex, UK) outfitted with cone and bowl geometries (40 mm, angle 1 • , and gap 0.100 mm) [23]. The same technique was used to create the sample emulsions. The sample emulsions were divided into 2.0 mL aliquots and placed on the stage for measurement at 25 ± 0.1 • C. After 5 min, the viscosity ranged from 0 to 200 s −1 . Using the program, the measuring was performed in triplicate. We matched the investigational flow curves to Sisko's pattern that provided the finest fit and was signified by: where η is the ostensible viscosity (Pa·s), η 0 is the vintage ostensible viscosity (Pa·s), K is the consistency index (Pa·s n ), γ is the shear ratio (s −1 ), and n is the performance index (dimensionless). Statistical Analysis Statistical assessment was accomplished via SPSS ver. 20.0. The outcomes were imperiled to Duncan's multiple series and ANOVA tests. All the rates gained are stated as the mean ± SD in triplicate. A p-value ≤ 0.05 was measured significantly. Particle Size Distribution and Molecular Weight Circulation The SEC-HPLC and particle size dispersal can characterize the molecular weight, size, and aggregation degree of the soluble components in soybean protein oxidized aggregates treated by cavitation jet. It can be seen from Figures 1 and 2 with SPI, the particle size of OSPI showed a unimodal particle size and lifted to the right, meaning the average particle size increased significantly. Furthermore, the elution time of the first molecular weight peak of OSPI diminished and the peak quantity increased. However, as a soluble component in OSPI, the particle size of SOSPI displayed a bimodal particle size, the initial particle size peak transferred to the left, and the average particle size decreased. The elution time of the first molecular weight peak of SOSPI increased and the peak area decreased. 
The results revealed that after the oxidation treatment, the oxidized aggregates with a large particle size and a high molecular weight were insoluble aggregates, while the soluble components were proteins with a small particle size and a low molecular weight. Radicals in the oxidative environment could induce proteins to form insoluble oxidative aggregates through covalent crosslinking, but they also attack protein side chains to form small-molecular-weight soluble proteins [24]. As the cavitation jet treatment time increased, the retention time of the initial elution peak and the peak area of the small-molecular-weight protein components of SCOSPI decreased, and the particle size peak of SCOSPI shifted to the right. When the cavitation jet treatment time was 8 min, the first particle size peak of SCOSPI was shifted furthest to the right and the average particle size reached its maximum. These results show that the molecular weight and particle size of SCOSPI increased with cavitation jet treatment, while the low-molecular-weight, small-particle-size protein components declined. The cavitation jet treatment could promote the depolymerization of the insoluble aggregates in OSPI and transform them into soluble oxidized aggregates through its high shear and cavitation effects, producing an increase in the particle size and molecular weight of the soluble oxidized aggregates [8]. Moreover, the cavitation jet would intensify collisions between the small soluble aggregates, which then polymerize into soluble protein molecules with a larger particle size and molecular weight, reducing the small-molecular-weight protein components [16,25]. When the cavitation jet treatment time exceeded 8 min, the first particle size peak of SCOSPI moved to the left, and the retention time of the initial elution peak and the peak area of the small-molecular-weight protein substances of SCOSPI increased, indicating that when the treatment time was too long the protein molecular weight of SCOSPI decreased and the small-molecular-weight protein components increased. On the one hand, the thermal and free-radical effects of the cavitation jet could promote further aggregation among proteins to form insoluble aggregates, which were removed by centrifugation. On the other hand, they would split some peptide chains, leaving soluble protein components dominated by molecules of small molecular weight and particle size [16,26]. Combined with the team's earlier research [27], cavitation jets can break disulfide bonds and the protein skeleton structure, which reduces the aggregate size and molecular weight of oxidized aggregates; however, how these components of the protein aggregates transform into one another remained unclear. From the particle size and molecular weight data in this study, we can conclude that the cavitation jet can also induce the insoluble aggregates to break down under high shear stress and transform into soluble aggregates, resulting in an increase in the particle size and molecular weight of the soluble aggregates. Consequently, a suitable cavitation jet treatment could adjust the structural and functional attributes of OSPI by inducing the cleavage of insoluble oxidized aggregates and transforming them into soluble aggregate components.
FTIR Spectroscopy

Fourier transform infrared spectroscopy can be used to elucidate the secondary structure changes of proteins during aggregation and disaggregation [28]. Figure 3 shows the FTIR spectra, and Table 2 lists the secondary structure of the oxidized aggregates and of the soluble oxidized aggregates after the cavitation jet treatment. The oxidation treatment raised the contents of β1, β-turn, and γ-random coil in OSPI and lowered the content of α-helix. Compared with OSPI, the contents of β1 and γ-random coil in SOSPI declined and the content of α-helix increased. The α-helix is an ordered secondary structure characterized by high rigidity and a repeating structure, while the γ-random coil is a disordered secondary structure characterized by plasticity and the lack of a repeating structure [29]. The marker structure of aggregation (β1) is created by molecular interactions during protein oxidation [30]. Changes in the stability of the hydrogen bonds between the amino groups and the carbonyl groups of the polypeptide chain are primarily responsible for changes in the amount of α-helix [31]. Since the hydrogen bond between the amino and carbonyl groups in the polypeptide chain is unstable, oxidation may attack the amino acid residues in the primary peptide chain, reducing the amount of α-helix present. The spatial structure of a protein heavily influences its functional activities, and proteins with a suitably organized and compact structure exhibit beneficial functional behaviors [32]. Compared with the other samples, the lowest α-helix content, found in OSPI, indicated that excessive oxidation seriously damages the ordered structure of the protein; this may be one of the important reasons for the decline in the functional activity of OSPI [33]. Comparing the results for OSPI and SOSPI, we find that oxidized protein rich in β1 exists in OSPI, while SOSPI has more rigid and ordered structures. With increasing cavitation jet treatment time, the β1 content of SCOSPI first increased, then decreased, and then increased again, and the other structures showed no obvious regular trend.
Combined with the outcomes of particle size and molecular weight, the superior pressure and superior shear strengths of the cavitation jet at a fleeting treatment time could lead to the cleavage of protein accumulates by weakening the protein-protein interactions and induce insoluble aggregates with high contents of β1 to transform into soluble aggregates resulting in the increase of the β1 contents [34]. Nevertheless, after the treatment time of the cavitation jet exceeded 6 min, the components of β1 of SCOSPI decreased first and then increased. The cavitation jet with long treatment time could induce the soluble aggregate in OSPI to aggregate further, due to the thermal impact and extra-speed instability and formed the insoluble aggregates with high β1 components which were centrifuged and removed, resulting in the decrease of the β1 content. When the cavitation jet treatment time was 8-15 min, the β1 content of SCOSPI increased. Combined with the previous research results of the team [27] during this timeframe, the β1 of the protein oxidized accumulates and soluble aggregates both increased, which showed that continuously extreme cavitation jet treatment can cause the formation of more β1 structures with the aggregation characteristics of soluble and insoluble components. Integrated with the particle size and molecular weight findings, we could find that the particle size and molecular weight of SCOSPI decreased with a long cavitation jet treatment time. This showed that the cavitation jet could depolymerize the soluble components and at the same time could induce the aggregation reaction between protein molecules, resulting in more β1 structures. The above results showed that the control of cavitating jet is an extremely complex process. The cavitation jet might dynamically govern the depolymerization and reaggregation of soluble soybean protein oxidized accumulates through the transformation of the protein spatial structure. Fluorescence Emission Spectra Fluorescence spectra can characterize the polarity changes of aromatic amino acids in the microenvironment, so as to predict alterations in the tertiary structure of soluble protein aggregates [35]. Figure 4 is the intrinsic fluorescence spectra of the soluble soybean protein oxidized aggregates. Compared with SPI, the fluorescence intensity of OSPI declined significantly and λmax was blue shifted. Compared with OSPI, the fluorescence intensity of SOSPI raised and λmax was red shifted. Free radicals in the oxidizing situation could induce the crosslinking, condensation, and nucleation of SPI, and then form the protein aggregation with a tighter structure [36]. Comparing the fluorescence spectrum of OSPI and SOSPI, we could find that the components with tighter structures in the OSPI components exist in insoluble oxidized aggregates. On the other hand, this showed that the change of the structural tightness degree could reflect the conformational change law of the transformation from a soluble protein to an insoluble protein. With the prolongation of the cavitation jet treatment time, SCOSPI λmax initially raised and then declined, and achieved the supreme when the treatment time was 6 min. This showed that the cavitation jet could alter the spatial structure of SCOSPI to regulate the functional activity. The cavitation jet could cleave and break some insoluble soybean protein oxidation aggregates and induce the creation of soluble oxidation accumulates with a loose structure and a larger particle size. 
This could increase the number and the degree of exposure of the aromatic amino acid residues of SCOSPI; as they become exposed to a more polar solvent environment, a red shift of the λmax of SCOSPI was observed. However, with a further extension of the treatment time, some soluble oxidized aggregates of the soybean protein re-aggregated and transformed into insoluble aggregates, which were removed by centrifugation, reducing the number of aromatic amino acid residues in SCOSPI [37]. In addition, the remaining soluble aggregates could form β1 structures and aggregate through covalent cross-linking and hydrophobic interactions during collisions, so that the aromatic amino acids of SCOSPI were buried inside the structure [38]. Consequently, excessive cavitation jet treatment, through this combined effect of aggregation and depolymerization, causes a blue shift of the λmax of SCOSPI.
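The red and blue shifts discussed above are read directly off the intrinsic emission spectra as movements of λmax. A small sketch of how such a shift could be quantified from two spectra is shown below; the wavelength grid and intensity arrays are placeholders, not measured data.

```python
# Illustrative sketch: locating lambda_max in an intrinsic fluorescence emission
# spectrum and reporting the shift between two samples (e.g., SPI vs. SCOSPI).
import numpy as np

def lambda_max(wavelengths_nm: np.ndarray, intensities: np.ndarray) -> float:
    """Wavelength (nm) at which the emission intensity is maximal."""
    return float(wavelengths_nm[np.argmax(intensities)])

wl = np.arange(300, 401, 1.0)                       # 300-400 nm scan, 295 nm excitation
reference = np.exp(-((wl - 332.0) / 12.0) ** 2)     # placeholder spectrum, peak at 332 nm
treated   = np.exp(-((wl - 337.0) / 12.0) ** 2)     # placeholder spectrum, peak at 337 nm

shift = lambda_max(wl, treated) - lambda_max(wl, reference)
print(f"lambda_max shift: {shift:+.1f} nm "
      f"({'red shift' if shift > 0 else 'blue shift' if shift < 0 else 'no shift'})")
```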
The oxidation treatment could promote the creation of disulfide bonds via the disulfide/sulfhydryl switch reaction. More disulfide bonds reflect the tighter spatial structure of soybean protein, so that the oxidation increased the tightness of the soybean protein molecular space structure [39]. This showed that oxidation could adjust the compactness of the protein structure by changing the disulfide bond, thus affecting its functional activity [40]. In addition, the total sulfhydryl content was also decreasing, and the decay of free sulfhydryl quantity was higher than the raise of disulfide bond content, signifying that oxidation also had a non-reversible oxidation reaction on the soybean protein, inducing the transformation of free sulfhydryl into sulfur compounds without a disulfide bond [41]. For soluble components in OSPI, namely SOSPI, the disulfide bond quantity decreased. Combined with the results of the particle size and fluorescence emission spectra, the oxidized treatment diminished the particle size and amplified the fluorescence intensity of SOSPI, indicating that the oxidized masses of soybean protein after the oxidation treatment were mainly insoluble aggregates, and most of the soluble components show a small particle size and a loose and unfolded structure [40]. With the addition of the cavitation jet treatment time, the contents of free sulfhydryl, total sulfhydryl, and disulfide bonds of SCOSPI amplified first and then declined, but the processing time, corresponding to a maximum value of the three, is inconsistent. This is because there was more than one conversion reaction between free sulfhydryl and disulfide bonds in the system. The cavitation jet treatment could break the core aggregation skeleton of SCOSPI, destroy the intermolecular force and spatial structure, and induce the conversion of disulfide bonds into free sulfhydryl groups, resulting in the increase of the free sulfhydryl content of the soluble oxidized aggregates. At the identical time, combining with the increase particle size findings of SOSPI after the cavitation jet, cavitation could also promote the transformation from insoluble aggregates with high disulfide bonds content into soluble aggregates, so the number of soluble aggregates increased [42], which instigated the expansion of disulfide bond content and free sulfhydryl content of soluble oxidized aggregates. Nevertheless, when the cavitation jet treatment time was too long, high intensity, long-time cavitation, turbulence, and thermal effects would cause the re-aggregation of soluble aggregates and also cause the cracking of all accumulates, which was an irreversible denaturation for protein [43]. To be more specific, when the treatment time was maximized, the cavitation jet would destroy the spatial structure and intermolecular force of the oxidized aggregates, resulting in the reduction of the disulfide bond content, which is well matched with the finding of a decreased particle size. However, the free sulfhydryl groups, formed by the disulfide bond breaking in OSPI, would aggregate with each other to form SCOSPI with a tighter spatial structure, ending in a reduction in the quantity of free sulfhydryl groups. This is matched with the findings of FTIR spectroscopy and fluorescence emission spectra. At the same time, cavitation jet also induced the irreversible reaction of protein sulfhydryl groupings to generate the sulfur-comprising components with non-disulfide bonds. 
Moreover, the conversion of soluble oxidized aggregates to insoluble oxidized aggregates will also lead to the fluctuation of the disulfide bond and free sulfhydryl quantity. These factors together caused the reduction of free sulfhydryl and disulfide bond quantity. The inconsistent processing time, corresponding to the maximum value of the free sulfhydryl, total sulfhydryl, and disulfide bond, showed that the process of aggregation and depolymerization and the conversion between soluble and non-soluble was a very complex process and needs further research. Transmission Electron Microscopy (TEM) To better understand SPI and compare the differences among SPI, OSPI, SOSPI, and SCOSPI, the apparent morphology was visualized by TEM, as shown in Figure 5. Compared with SPI, the aggregation degree of OSPI was increased, and OSPI formed a dense network structure with intense central part. The skeleton structure of SOSPI mainly presented short and small wormlike structures. Oxidation led to the conformational changes of SPI and exposed the side chain groups of hydrophobic aliphatic and aromatic amino acids entrenched within, inducing cross-linking aggregation through hydrophobic interaction. Furthermore, they can also attack the sulfhydryl groups of proteins and convert them into disulfide bonds, showing insoluble protein accumulates with a large particle size and highly cross-linked clusters in OSPI [44]. However, SOSPI showed short rod protein molecules with a small particle size; this is because the proteins with a high degree of cross-linking were transformed into insoluble aggregates and removed by centrifugation [25], as the soluble components of the protein are mainly in the shape of short and small rods. With the increase of the cavitation jet treatment time from 2 min to 8 min, the aggregation degree of SCOSPI increased. Most of the protein aggregates heavily bonded, which consisted of agglomerated smaller worm-like particles, and the skeleton structure became larger and more branches appeared. This is well matched with the outcomes of the particle size and molecular weight. On the one hand, it might be because the cavitation jet broke the disulfide bond of the aggregates, and the insoluble aggregation with large, clustered morphology, resembling those of compact reticulation, was cracked. Then, the insoluble aggregation transformed into soluble aggregates, which led to the amplification of the number of soluble aggregates in the supernatant and presented a cluster structure. On the other hand, it might be that under the cavitation treatment, the fragmentation of the skeleton structure increased. This result promoted the mutual collision between soluble protein molecules and the binding probability of free sulfhydryl clusters, resulting in the enlargement and additional branches of the originally short rod-shaped skeleton structure [26]. However, with the further conservation of the cavitating jet treatment time, the mesh skeleton structure was seriously broken and gradually transformed into a short bar structure. When the treatment time reached 15 min, the mesh structure disappeared, and the skeleton structure presented a slender bar. Combined with the above results, the cavitation jet has the dual effects of breaking and reassembling the protein skeleton structure. In addition, it also induces a mutual transformation between the soluble and insoluble aggregates. 
Therefore, the long-time cavitation jet can induce the soluble aggregates to transform into insoluble aggregates that are removed by centrifugation, decreasing the content of soluble aggregates. In addition, under the high temperature, high pressure, and shear force of the long-time cavitation jet, the skeleton structure of the protein was broken. These two effects together resulted in a decrease of the cemented, intercrossed network structure of the soluble aggregates and the formation of a small and slender skeleton structure.

Emulsion Capacity and Stability

Owing to their good emulsifying activity, proteins are widely used in food emulsions and artificial fats.
However, emulsifying properties depend both on the ability of the protein to adsorb at the oil droplet surface and on protein intermolecular binding, and they are related to the shape, size, and surface hydrophobicity of the protein molecules [45]. The EAI and ESI of the emulsions stabilized by the soluble soybean protein oxidized aggregates are shown in Table 4. The oxidation treatment decreased the EAI and ESI of the emulsions stabilized by OSPI and SOSPI. In addition, the EAI of SOSPI was higher than that of SPI, but its ESI was relatively lower. The oxidation treatment can form highly ordered intermolecular β-sheets between proteins, which poorly support a flexible conformation. The molecular flexibility of OSPI decreased and insoluble oxidized aggregates formed whose structure is difficult to relax, reducing the interfacial activity of the protein and its ability to bind oil, so the EAI and ESI of OSPI were lower than those of SPI [46]. However, compared with OSPI, SOSPI contained abundant free sulfhydryl groups and short worm-like skeleton structures. During emulsion formation, SOSPI moved to the interface as aggregates of smaller particle size, which increased the contact area between the protein and the oil-water interface and made SOSPI easier to adsorb and relax at the interface, so the EAI of SOSPI was higher than that of OSPI. Nevertheless, it is difficult for proteins with a very small particle size to adsorb stably at the interface for a long time, resulting in a lower ESI for SOSPI. With the extension of the cavitation jet treatment time, the EAI and ESI of SCOSPI first increased and then decreased, reaching their maximum at a treatment time of 6 min. The high pressure, shear, and cavitation effects produced by the cavitation jet could cleave oxidized aggregates and convert some insoluble aggregates into soluble ones, so the content of soluble protein aggregates increased. The cavitation jet could also disrupt the intermolecular binding of SCOSPI and alter its structure. Together, these effects increased the number of exposed hydrophobic and polar groups, the particle size, and the molecular flexibility of SCOSPI, improving its emulsifying activity and emulsifying stability [47]. Additionally, the cross-linked network structure formed by the protein was beneficial to emulsion formation [48]. Combined with the TEM results, it can be seen that after the cavitation jet treatment the mesh skeleton structure of SCOSPI became larger, which was beneficial to the formation and stability of the emulsion. However, when the cavitation jet treatment time exceeded 6 min, the prolonged high temperature, pressure, and shear force produced by the cavitation jet disrupted the hydrophilic and hydrophobic groups and the internal bonding of the protein molecules, which impaired the ability of SCOSPI to adsorb at the oil-water interface. In addition, during the long cavitation jet treatment, SCOSPI gradually formed small protein molecules with a more highly ordered β-sheet structure, poor relaxation, and low molecular flexibility, reducing the emulsifying activity and emulsifying stability of SCOSPI [49].
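As background for readers unfamiliar with the EAI and ESI values discussed above, a small sketch of how these indices are commonly computed is given here. It uses the widely cited Pearce-Kinsella turbidimetric formulation; this is an assumption about the general method, not a statement of this study's exact protocol (dilution factor, path length, and timing may differ), and the function name and example numbers are illustrative only.

```python
def emulsifying_indices(a0, a10, protein_conc_g_per_ml, oil_fraction,
                        dilution_factor=100, dt_min=10.0):
    """Pearce-Kinsella style estimates (assumed formulation, not this paper's protocol).
    a0, a10: absorbance at 500 nm at 0 min and 10 min after emulsification.
    Returns EAI in m^2/g and ESI in minutes."""
    eai = (2 * 2.303 * a0 * dilution_factor) / (protein_conc_g_per_ml * oil_fraction * 10000)
    esi = a0 * dt_min / (a0 - a10)
    return eai, esi

# illustrative numbers only, not measurements from this study
print(emulsifying_indices(a0=0.52, a10=0.41, protein_conc_g_per_ml=0.001, oil_fraction=0.25))
```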
Confocal Laser Scanning Microscopy (CLSM) CLSM was used to examine changes in the microstructure of the soybean protein emulsions after pretreatment with the cavitation jet, as displayed in Figure 6. In the CLSM micrographs, green fluorescence indicates the protein phase, red fluorescence represents the soybean oil, and bright yellow fluorescence indicates proteins adsorbed on the oil droplets. In the emulsion prepared from SPI, small spherical droplets were evenly distributed throughout the system. After the oxidation treatment, the emulsion droplets prepared from OSPI and SOSPI flocculated severely, and the red areas in the continuous phase increased significantly and gathered at the surface of the emulsion droplets, showing oil-water separation. The oil-water separation of OSPI was more severe than that of SOSPI. The oxidative treatment caused the protein molecules to aggregate and their solubility to decrease, making it difficult to form a stable interfacial film during emulsification, so large regions of accumulated red oil droplets appeared [50]. Nevertheless, compared with OSPI, the particle size and skeleton structure of SOSPI were smaller, and it had a better ability to adsorb at the oil-water interface. Most of the oil droplets were encapsulated in the emulsion droplets prepared by SCOSPI, with only small regions of accumulated red oil droplets. This shows that the macromolecular and insoluble protein oxidation aggregates formed during oxidation were the main cause of the marked decline in emulsifying activity; therefore, regulating the macromolecular and insoluble protein aggregates could improve the functional properties of OSPI. With increasing cavitation jet treatment time, the green area of the emulsion prepared with SCOSPI in the CLSM images increased gradually, while the red area gradually decreased and became enclosed within the green area, indicating that a stable protein interfacial film formed at the oil-water interface. The cavitation jet treatment could convert insoluble oxidized aggregates, which are difficult to relax at the interface, into soluble aggregates with smaller steric hindrance and a more flexible structure, and enhance the adsorption, relaxation, and rearrangement of SCOSPI at the interface, thereby improving the stability of the oil-water interface [51]. In addition, the number of surface hydrophobic and polar groups in SCOSPI increased, which promoted the adsorption of additional protein molecules at the oil-water interface to produce a more stable interfacial film and improved the interfacial activity and emulsifying properties [52,53]. Consequently, the proteins stained green were uniformly and firmly spread across the oil droplets, effectively preventing coalescence. However, with a further extension of the cavitation jet treatment time, the red oil area in the CLSM images of the emulsion prepared by SCOSPI gradually increased, and the green protein area became concentrated. Excessive cavitation jet treatment increased the content of anti-parallel intermolecular β-sheet and the surface hydrophobicity of SCOSPI, reduced its ζ-potential, and decreased its specific surface area, which was unfavorable for its diffusion to, and spreading at, the oil-water interface [54].
Finally, the oil-binding capacity of SCOSPI weakened, resulting in more free oil droplets and serious oil-water separation. Thus, the cavitation jet can affect the flocculation and stability of the emulsion by regulating the transformation between insoluble and soluble aggregates. Figure 6. CLSM micrographs of natural soybean protein, oxidized soybean protein, and soluble soybean protein oxidized aggregates treated with the cavitation jet for different times (2, 4, 6, 8, 10, and 15 min). Note: (a) represents the proteins, (b) represents the oil droplets, and (c) represents the proteins adsorbed on the oil droplets. Quantity of Adsorbed Proteins at the Interface (AP%) The quantity of proteins adsorbed at the interface has a key impact on the stability of the emulsion: the higher the interfacial protein content, the stronger the ability of the protein to adsorb at the oil-water interface [55]. As shown in Figure 7, the AP% of the emulsions made with OSPI and SOSPI decreased compared with SPI, and the AP% of the emulsion made with SOSPI was lower than that of OSPI. It is generally considered that proteins undergo a certain amount of structural extension and relaxation when dissolved in an aqueous solution [56].
The more flexible the protein structure, the easier the structural expansion and the more prone the protein is to bulk diffusion, adsorption at the interface, unfolding, and rearrangement, which increases the AP% [57]. During oxidation, the degree of protein aggregation increased, and the structural flexibility of the highly clustered OSPI decreased while its rigidity increased, which was unfavorable to expansion and adsorption at the interface and decreased the AP% [58]. In addition, the decrease of AP% might be largely ascribed to the formation of bridged emulsions, i.e., two separate oil droplets sharing the same protein particle layer, facilitated by the oxidation-induced aggregation of SPI [59]. During emulsion formation, large molecules are transported to the oil-water interface in preference to small molecules because of the convective mass-transport effect of high-pressure homogenization, i.e., large molecules adsorb more quickly than small molecules [60]. Compared with OSPI, SOSPI had a smaller particle size and lower molecular weight, which reduced the proportion of protein adsorbed at the oil-water interface, resulting in a lower AP% [61]. However, proteins with a small particle size, low molecular weight, and high solubility also improve the contact area with the oil-water interface and the affinity of the protein for the interface, which is why there was only a small region of accumulated red oil droplets in the emulsion prepared by SOSPI. Therefore, the AP% is not the only factor determining the characteristics of an emulsion. With the extension of the cavitation jet treatment time, the AP% of SCOSPI first increased and then decreased, reaching its maximum at a treatment time of 6 min. The high-velocity turbulent flow, high-speed shearing, and high pressure produced by the cavitation jet acted on the cross-linked aggregates, converting insoluble aggregates into soluble aggregates and directionally regulating the amorphous soluble aggregates, which increased the soluble protein content and the molecular flexibility of the soluble proteins. This allowed more proteins with relaxed, flexible structures to adsorb, unfold, and rearrange at the interface, which raised the AP% of the emulsion prepared by SCOSPI and improved its ability to form and stabilize oil-water systems [62]. Nevertheless, when the cavitation jet treatment time was too long, the soluble protein fraction was dominated by protein molecules of small molecular weight and particle size, and the β1 content of SCOSPI increased, indicating that the content of ordered structure increased and the structure became relatively tight and complex, which was unfavorable for the adsorption and unfolding of the protein at the oil-water interface and for the formation of a dense interfacial film. Together, these results led to the decrease of the AP% of the emulsion prepared by SCOSPI [63]. Interfacial Tension A key element in the investigation and analysis of emulsion stability is the interfacial tension at the liquid-liquid interface, which also describes the surface activity of proteins at the oil-water interface [64]. As shown in Figure 8, the interfacial tension of the emulsion prepared with natural soy protein was 21.29 mN/m.
After the oxidation treatment, the interfacial tensions of the emulsions made with OSPI and SOSPI were higher, and that of SOSPI was higher than that of OSPI. The oxidation treatment promotes protein aggregation into insoluble oxidized aggregates with a larger particle size, low molecular flexibility, and poor solubility [39]. In addition, the specific surface area of the protein molecules was reduced and the steric hindrance increased, which was unfavorable to the adsorption and rearrangement of the protein at the oil-water interface, resulting in an increase in the interfacial tension [65]. The soluble components of SOSPI dissolved more easily in the water phase, had a smaller particle size, and contained more ordered structure; their ability to form a stable interfacial film was significantly reduced, resulting in a higher interfacial tension and lower emulsion stability. After the cavitation jet treatment, the interfacial tension of SCOSPI first decreased and then increased, reaching its minimum at a treatment time of 6 min. Combined with the particle size distribution and TEM results, the particle size of SCOSPI increased and the protein skeleton widened, indicating that the cavitation jet could break the insoluble soybean protein oxidized aggregates and transform them into soluble oxidized aggregates, increasing the number of hydrophobic groups of the soluble protein components and producing a more complex conformational space [53]. Such protein molecules were not easily dissolved in the aqueous phase, which improved the expansion and rearrangement of the protein molecules at the interface and lowered the interfacial tension [66]. However, when the cavitation jet treatment time was too long, the amount of β1 increased and the γ-random coil content (Table 2) decreased. Additionally, the particle size (Table 1) was reduced and soluble oxidized aggregates with a more ordered structure and lower molecular flexibility were formed, so the adsorption energy barrier at the interface was higher and the adsorption efficiency decreased. This hindered the adsorption and unfolding of the protein at the oil-water interface and increased the interfacial tension [67]. Combined with the EAI, ESI, CLSM, and AP% results, the soluble soybean protein oxidized aggregates showed the best emulsification and interfacial characteristics when the cavitation jet treatment time was 6 min. This provides a simple and effective technology for the application of soybean protein in the food industry.
Viscoelastic Properties Rheological quantities provide information on the physical behavior and stability of emulsions [68]. The elastic modulus G' is a measure of elasticity and represents the storage modulus, i.e., the stress energy that can be recovered when the stress is released, while the viscous modulus G'' represents the loss modulus, reflecting the flow resistance of the sample [69,70]. G' and G'' of the emulsions are shown in Figure 9. Both the elastic modulus (G') and the viscous modulus (G'') increased gradually over the oscillation frequency range. In all samples, G' was higher than G'', showing an elastic character; this indicates that the protein at the interface formed a viscoelastic adsorbed film and suggests an elastic network structure of the emulsion. The G' and G'' of the emulsion formulated with OSPI were both higher than those of SPI, while SOSPI showed the opposite result. The rheological features of the interfacial layer are mainly governed by the hydrophobic interactions and disulfide bonds among the proteins adsorbed at the oil-water interface [71]. After the oxidation treatment, the number of disulfide bonds in the protein aggregates increased significantly, and, through hydrophobic interactions, these aggregates bound to the proteins already adsorbed in the interfacial layer. Therefore, the formation of the oxidized aggregates strengthened the binding between protein molecules at the interface, thereby increasing the interfacial elastic modulus of the emulsion prepared with OSPI. However, the soluble soybean protein aggregates were transformed into insoluble oxidized aggregates after the oxidation treatment; the remaining soluble components, consisting of smaller protein molecules, dissolved readily in the water phase and could not form a protein-bound film, resulting in a decline in the G' and G'' of the emulsion prepared with SOSPI [72]. With the cavitation jet treatment, the G' of the emulsion prepared with SCOSPI first increased and then declined, while G'' showed no obvious change. With increasing cavitation jet treatment time, the skeleton of the soluble oxidized aggregates became wider, the particle size became larger, the exposure of hydrophilic and lipophilic groups increased, the electrostatic repulsion among emulsion droplets was amplified, more protein was adsorbed in the oil-water interfacial layer, and the thickness of the interfacial film slowly increased, leading to the increase of the interfacial elastic modulus. When the cavitation jet treatment time was too long, the SCOSPI, with its increased β1 content, formed protein aggregates with a high degree of aggregation, small particle size, and weak reticular structure, which had a negative impact on the interfacial activity and resulted in the decrease of G' [73,74].
Figure 10 and Table 5 depict the rheological behavior of the emulsions. All the flow curves of the emulsions could be fitted with the Sisko model (a sketch of such a fit is given below). With flow behavior indices ranging from 0.059 to 0.271, all the emulsions displayed shear-thinning behavior. Intermolecular binding among aggregated molecules, which results in the creation of weak transient networks, may be the cause of the shear-thinning behavior of the stabilized emulsions [65]. Emulsions stabilized by OSPI exhibited a higher apparent viscosity and K than those stabilized by SPI, while those stabilized by SOSPI showed the opposite trend in apparent viscosity and K. The volume fraction of the dispersed phase and the size of the aggregates formed from the proteins determine the rheological parameters of the emulsion: the greater the number and size of these aggregates, the higher the viscosity [75,76]. This led to a higher initial apparent viscosity in the emulsion made from oxidized soy protein aggregates. After the oxidation process, the initially soluble oxidized aggregates eventually changed into insoluble oxidized aggregates; the remaining soluble components, made up primarily of small protein molecules that tend to dissolve in the aqueous phase and enclose oil droplets, gave a lower K and apparent viscosity [42,77,78]. The apparent viscosity and K of the emulsion produced with SCOSPI grew initially and subsequently dropped with increasing cavitation jet treatment time. The particle size and TEM results show that an appropriate cavitation jet treatment could promote the conversion of insoluble aggregates into soluble aggregates and increase the particle size and skeleton structure of the soluble soybean protein oxidized aggregates, ultimately increasing the apparent viscosity and K of the emulsion created with SCOSPI. However, a prolonged cavitation jet treatment might lead to an increase in insoluble aggregates that were unfavorable to the stability of the emulsion, lowering K and the apparent viscosity.
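The Sisko model itself is not written out in the text above; for readers who want to reproduce this kind of flow-curve fit, a minimal sketch follows. It assumes the standard Sisko form, apparent viscosity eta(shear_rate) = eta_inf + K * shear_rate**(n - 1), and uses synthetic illustrative data rather than the measured curves of this study; the function name fit_sisko and the example numbers are ours, not the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

def sisko(shear_rate, eta_inf, K, n):
    # Standard Sisko model: eta = eta_inf + K * shear_rate**(n - 1)
    return eta_inf + K * shear_rate ** (n - 1.0)

def fit_sisko(shear_rate, viscosity):
    # Fit eta_inf, K (consistency) and n (flow behavior index); n < 1 means shear thinning
    p0 = (viscosity.min(), 1.0, 0.2)
    params, _ = curve_fit(sisko, shear_rate, viscosity, p0=p0, maxfev=10000)
    return dict(zip(("eta_inf", "K", "n"), params))

# Illustrative use with synthetic shear-thinning data (not the paper's measurements)
gamma = np.logspace(-1, 2, 50)                    # shear rate, 1/s
eta = 0.01 + 2.0 * gamma ** (0.15 - 1.0)          # "true" curve with n = 0.15
print(fit_sisko(gamma, eta + 0.001 * np.random.randn(gamma.size)))
```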
By modifying the structure and content of the soluble soybean protein oxidized aggregates and controlling the mutual transformation of the protein components, the cavitation jet treatment can thus alter the rheological characteristics of the emulsion. Conclusions The oxidation treatment altered the structure of SOSPI, causing a decline in its emulsifying and interfacial properties. A short cavitation jet treatment can break the insoluble soybean protein oxidized aggregates and transform them into soluble oxidized aggregates, increasing the particle size, the protein skeleton, and the disulfide bond content. This also improved the emulsifying activity and stability of SOSPI, raised the quantity of proteins adsorbed at the interface, and decreased the interfacial tension of the emulsion. A long cavitation jet treatment time could induce the soluble oxidized aggregates to gradually form small-molecular-weight proteins with poor relaxation and low molecular flexibility, which were unfavorable to the stability of the emulsion, resulting in a decrease of EAI, ESI, apparent viscosity, and K, and an increase of interfacial tension.
2023-02-24T16:22:31.319Z
2023-02-21T00:00:00.000
{ "year": 2023, "sha1": "df12437c7abaae2017b8b0d95c8f7734d5a554e6", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2304-8158/12/5/909/pdf?version=1676968151", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5edb409f0ad4fb860ab5d5dfc7c5548eefc79400", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
251979486
pes2o/s2orc
v3-fos-license
Focus-Driven Contrastive Learning for Medical Question Summarization Automatic medical question summarization can significantly help systems understand consumer health questions and retrieve correct answers. The Seq2Seq model based on maximum likelihood estimation (MLE) has been applied to this task, but it faces two general problems: the model cannot capture the question focus well, and the traditional MLE strategy lacks the ability to understand sentence-level semantics. To alleviate these problems, we propose a novel question focus-driven contrastive learning framework (QFCL). Specifically, we propose an easy and effective approach to generate hard negative samples based on the question focus, and exploit contrastive learning at both the encoder and the decoder to obtain better sentence-level representations. On three medical benchmark datasets, our proposed model achieves new state-of-the-art results, and obtains a performance gain of 5.33, 12.85 and 3.81 points over the baseline BART model on the three datasets respectively. Further human judgement and detailed analysis prove that our QFCL model learns better sentence representations with the ability to distinguish different sentence meanings, and generates high-quality summaries by capturing the question focus. Introduction A growing number of health questions are raised by consumers on websites nowadays; they are usually written in natural language and include detailed, peripheral information not related to the answers. Summaries of such questions can greatly improve the performance of retrieving relevant answers (Ben Abacha and Demner-Fushman, 2019). Accordingly, the medical question summarization task is defined as summarizing consumer health questions (CHQ) into frequently asked questions (FAQ), which are shorter but retain the essential information of the original question needed to get correct answers. An example of medical question summarization is shown in Table 1. Input question: consumer health question (CHQ) subject: gender dysphoria message: no health care on my son suffering from gender dysphoria what can we do to help him he worked out of high school no problems now not working and about shutting himself in his room 24/7 theres nothing this condition in our area we live in [location].no help in area what can we do he has had bad thoughts already please help us with some sort of info thank yuo [name] [location] Golden summary: frequently asked question (FAQ): Where can I find information on treatment and resources for gender dysphoria? Summary by BART (baseline): What are the treatments for weight loss? Summary by our model: What are the treatments for gender dysphoria? Table 1: An example of medical question summarization from the MeqSum dataset, where the question focus is highlighted in green. Summaries generated by BART and our model are also listed. The Seq2Seq neural models have been widely used in abstractive summarization (Nallapati et al., 2016; Lewis et al., 2020) and show promising potential, and they have also been applied to medical question summarization, achieving the current state-of-the-art results. Ben Abacha and Demner-Fushman (2019) apply the pointer-generator model to this task. Yadav et al. (2021a) present a reinforcement learning framework with a question-type identification reward and a question-focus recognition reward. Mrini et al. (2021b) propose a multitask learning method that treats recognizing question entailment as an auxiliary task.
In the medical question summarization task, the input CHQ is always lengthy and contains redundant information, and some salient medical entities and the semantic focus of the question are vital to understanding the user's intention. It still remains a challenging task for existing methods to capture the question focus. As described in the example in Table 1, the focus "gender dysphoria" is mis-replaced by "weight loss" in the summary generated by the fine-tuned BART, resulting in a completely different meaning from the original sentence. For the medical question summarization task, the generated question summary is required to be semantically close to the reference question. However, most current pre-trained models such as BART (Lewis et al., 2020) adopt maximum likelihood estimation (MLE) and mainly focus on the accuracy of the prediction of masked tokens, but do not guarantee the semantic similarity or dissimilarity of whole sentences. To address this issue, some previous works adopt reinforcement learning (RL) in the text summarization task (Li et al., 2019; Paulus et al., 2018), but RL suffers from the noisy gradient estimation problem (Greensmith et al., 2004), which makes the training process unstable and sensitive to hyper-parameters. To alleviate these problems, we propose a novel question focus-driven contrastive learning (QFCL) framework for medical question summarization, as illustrated in Figure 1. Figure 1: Sketch of our proposed contrastive learning framework. M_s and M_h denote the memory banks that contain simple negative samples and hard negative samples, respectively. R_f, R_c, and R_g denote the sentence representations of the FAQ, the CHQ, and the generated summary. L_ctrS and L_ctrH are the contrastive learning losses on simple negative samples and hard negative samples, respectively. + indicates the positive sample, and − indicates the negative sample. In our model, we introduce a "double anchors" strategy for contrastive learning, utilizing the sentence representation of the CHQ as one anchor and that of the generated summary as another anchor, and regarding the golden reference FAQ as the positive sample. In addition, we present a "focus-driven hard negatives generator" to construct hard negative samples by replacing the focus phrases with other phrases sharing the same attribute. Through contrastive learning, we minimize the distance between the CHQ/generated summary and the golden reference, and maximize the distance between the CHQ/generated summary and other negative samples. By using the double anchors, our model is able to extract sentence-level semantic features to alleviate the problem of MLE. With the help of the hard negatives generator, the model learns to pay more attention to the question focus and thus produces high-quality summaries. We conduct extensive experiments on three medical question summarization datasets: MeqSum (Ben Abacha and Demner-Fushman, 2019), HealthCareMagic and iCliniq (Zeng et al., 2020). Our proposed model outperforms the previous best results by a wide margin, achieving new state-of-the-art results on all three datasets. Compared with the baseline BART, our model brings a relative performance gain of 12.2%, 28.7% and 9.6% on MeqSum, iCliniq and HealthcareMagic respectively. Through analysis, we show that our model significantly gains the ability to distinguish the semantics of generated summaries from those of negative samples, and that it generates high-quality summaries capturing more question focuses.
Medical Question Summarization The medical question summarization task is defined by Ben Abacha and Demner-Fushman (2019). They construct the benchmark dataset MeqSum, and apply a pointer-generator model to generate question summaries. At the question summarization campaign of MEDIQA-21 organized by Ben Abacha et al. (2021), almost all approaches rely on the fine-tuning of pre-trained transformer models. Transfer learning, knowledge bases, and ensemble methods are widely utilized by participating teams to achieve better performance (He et al., 2021; Yadav et al., 2021b; Mrini et al., 2021c; Sänger et al., 2021). In this paper, we also base our method on the strong pre-trained BART model. Recently, Yadav et al. (2021a) propose an RL framework with two question-aware semantic rewards: a question-type identification reward (QTR) and a question-focus recognition reward (QFR). QTR identifies whether the question types are consistent with the gold question, and QFR is designed to capture the question focus. But in their work, the question types and question focuses in the dataset must be manually labeled, which is both time-consuming and labor-intensive for large-scale datasets such as HealthcareMagic and iCliniq. Moreover, the RL training process is unstable. Mrini et al. (2021b) claim an equivalence between medical question summarization and recognizing question entailment (RQE), and employ multitask learning to train the model to not only perform next-word prediction but also carry out question entailment recognition. These two studies demonstrate that pre-trained models achieve better performance after capturing the underlying sentence semantics of generated questions. Different from these works, we exploit contrastive learning to obtain focus-aware question representations. Contrastive Learning Different from traditional methods which learn pixel-level representations for computer vision tasks, contrastive learning encodes high-level features to distinguish different objects and has achieved great success (Henaff, 2020; Misra and van der Maaten, 2020; He et al., 2020), and it has also been applied in several NLP tasks such as machine translation (Pan et al., 2021), pre-training (Chi et al., 2021) and question answering (Yang et al., 2021). In the field of summarization, Liu and Liu (2021) present a contrastive framework to bridge the gap between the learning objective and evaluation metrics, and Cao and Wang (2021) design several negative sample construction strategies to solve the factual inconsistency problem. In contrast, we use the MoCo structure to handle the large volume of negative samples, and propose a new negative sample construction method. Previous work has shown that a large number of negative samples can improve the performance of contrastive learning, but it also brings a heavy computational burden. To address this issue, He et al. (2020) propose MoCo, which maintains a queue as the memory bank to store negative samples. MoCo adopts two encoders with the same structure, a key encoder and a query encoder, where the key encoder is momentum-updated from the query encoder. Model Given an input question CHQ, which is written by consumers and contains lengthy and complex information, the medical question summarization task aims to automatically generate a question summary, i.e., a frequently asked question (FAQ), that captures the essential information to help efficiently retrieve correct answers. A more detailed structure of our proposed QFCL model is presented in Figure 2.
Contrastive Learning Architecture We employ the pre-trained BART (Lewis et al., 2020) as our basic model to generate question summaries. For contrastive learning, we adopt the MoCo architecture (He et al., 2020), which contains a key encoder E_k with the same structure as the BART encoder E_q, and a queue to store a large volume of simple negative samples. The simple negative samples in the queue are progressively replaced by the representations of the current mini-batch extracted from the key encoder, and all samples in the queue are used as negative samples in the next batch. In addition, QFCL employs a hard negatives generator to generate hard negative samples. In our model, the BART encoder E_q and the decoder are updated via back propagation by combining three types of loss functions, as described in the subsequent sections. The parameters of E_k are frozen with respect to back propagation and updated slowly towards those of E_q as θ_k ← m·θ_k + (1 − m)·θ_q, where m is a momentum coefficient. At inference time, only the BART encoder and decoder are retained; other parts such as the key encoder, the queue, and the hard negatives generator are all discarded. Simple Negative Samples In the medical question summarization task, the input question CHQ should be semantically close to its reference summary FAQ but different from other question summaries. Therefore, we regard the CHQ c_i in the i-th pair as the anchor, the FAQ f_i in the same pair as the positive sample, and randomly select f_j from other pairs to serve as simple negative samples. Let R_s denote the averaged output representation of an arbitrary sentence s; the objective function of the simple contrastive learning is defined as L_ctrCS = −log [ exp(sim(R_ci, R_fi)/τ) / Σ_{R ∈ M_s} exp(sim(R_ci, R)/τ) ], where R_ci indicates the sentence representation of the i-th CHQ extracted from E_q, and R_fi and R_fj are extracted from the key encoder E_k for the i-th and j-th FAQ respectively. The operation sim computes the cosine similarity, and τ is a temperature hyper-parameter. M_s is the memory bank which contains the one positive sample and the K simple negative samples in the queue with respect to an anchor. (A minimal code sketch of the momentum update and this loss is given below.) The above simple negative samples are randomly selected. As claimed by Kalantidis et al. (2020), hard negative samples that are more similar to positive samples can help the model achieve better performance. Inspired by this, we build a bridge between hard sample generation and question focus prediction. Question Focus Identification As mentioned before, the question focus is essential to understanding a consumer health question. If some focus phrases are missing in the generated summary, the semantics will drift far away from the original user's intention. So we construct difficult negative samples based on the question focus to enhance contrastive learning. Specifically, we replace the focus phrases with other phrases of the same attribute, and keep the other words of the sentence unchanged. An example of hard negative sample generation is shown in Figure 3. One issue for our method is how to automatically annotate the question focus. Yadav et al. (2021a) manually labeled the question focus in the MeqSum dataset. However, this is quite time-consuming and labor-intensive, driving us to find a method which can automatically mark the question focus in larger datasets, such as HealthcareMagic and iCliniq. We analyzed the manually labeled MeqSum dataset, and found that in 340 of the total 500 records (up to 68%), the question focuses are overlap phrases between the CHQ and the FAQ.
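As a reading aid, here is a minimal PyTorch-style sketch of the two pieces just described: the momentum update of the key encoder and an InfoNCE-style contrastive loss with one positive and a bank of negatives. It is a simplified illustration, not the authors' released code; the function names (momentum_update, info_nce) and the assumption that sentence representations are already mean-pooled vectors are ours.

```python
import torch
import torch.nn.functional as F

def momentum_update(key_encoder, query_encoder, m=0.999):
    # theta_k <- m * theta_k + (1 - m) * theta_q : key encoder slowly follows the query encoder
    with torch.no_grad():
        for p_k, p_q in zip(key_encoder.parameters(), query_encoder.parameters()):
            p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)

def info_nce(anchor, positive, negatives, tau=0.07):
    # anchor, positive: (B, d); negatives: (K, d) sentence vectors (e.g. mean-pooled encoder states)
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    l_pos = (anchor * positive).sum(dim=-1, keepdim=True)   # cosine similarity with the positive, (B, 1)
    l_neg = anchor @ negatives.t()                           # cosine similarities with the queue, (B, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)  # positive sits at index 0
    return F.cross_entropy(logits, labels)
```

Minimizing this loss pulls the anchor towards its positive and pushes it away from every entry of the negative bank, which is exactly the behavior the simple contrastive objective above is meant to induce.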
Accordingly, we hypothesize that the same phrases appearing both in the source question and the golden summary have a high probability of being key-phrases. This idea has also proved effective in (Li et al., 2020). Since the question focus is usually a phrase rather than a single word, we need to split each sentence into phrases. We apply the chunker of (Akbik et al., 2018) to the CHQ and FAQ text, and record the chunk label of each phrase. The consistent phrases appearing both in the CHQ and the FAQ are then labeled as the question focuses. Hard Negative Sample Generation We construct a dictionary by collecting all phrases of the FAQ sentences in the training set. To generate hard negative samples, the question focuses are randomly replaced by other phrases with the same chunk label from the dictionary. As shown in Figure 2, "breast cancer" is replaced by "diabetes" since they share the same label "NP". We repeat this process N_h times to construct N_h different hard negative samples for each CHQ-FAQ pair. (A small sketch of this focus identification and replacement procedure is given at the end of this section.) Contrastive Learning on Hard Negative Samples The sentence representation R_h of a hard sample is extracted from the key encoder E_k. We define the hard contrastive learning loss as L_ctrCH = −log [ exp(sim(R_ci, R_fi)/τ) / Σ_{R ∈ M_h} exp(sim(R_ci, R)/τ) ], where M_h denotes the memory bank containing the one positive sample and the N_h hard negative samples. This loss function forces the model not only to shorten the distance between the CHQ and the FAQ, but also to widen the gap between the CHQ and the hard negative samples. In this way, the model is made to pay more attention to the question focus and obtains a focus-aware representation. Contrastive Learning at the Decoder An imbalance in the above method is that contrastive learning is only applied at the encoder. We fine-tuned BART on the iCliniq dataset, and found that the decoder lacks the ability to distinguish the representations of the generated summary from the positive samples and unrelated negative samples, as shown by s+_g_faq, s−_g_sim, and s−_g_hard in Figure 4. Therefore, we try to improve the similarity between the generated summary and its reference FAQ, and at the same time enlarge the dissimilarity between the generated summary and other unrelated questions. Specifically, we regard the generated summary as an extra anchor, and denote its representation as g_i. Since the output summary should be semantically consistent with the corresponding FAQ, we consider the representation of the FAQ f_i in the same pair as the positive sample, select the simple negative samples randomly from the queue, and generate hard negative samples using the hard negatives generator. The contrastive loss functions L_ctrGS and L_ctrGH at the decoder end are defined in the same style as Equations 2 and 3, except that the anchor c_i is replaced by the anchor g_i. Overall Objective Function For predicting the next tokens of the generated summary, we use the cross entropy loss L_ce = −Σ_t log p(y_t | y_<t, x), where x is the input CHQ and y_t is the t-th token of the reference summary. In our model, the overall loss function consists of five parts: the cross entropy loss L_ce and four contrastive learning losses, L_ctrCS and L_ctrCH for the anchor at the encoder end, and L_ctrGS and L_ctrGH for the anchor at the decoder end. We define the contrastive learning losses with respect to these two anchors as L_ctrC = α·L_ctrCS + β·L_ctrCH and L_ctrG = α·L_ctrGS + β·L_ctrGH, where α and β are hyper-parameters that control the balance between simple negatives and hard ones. The weights of the contrastive learning losses at the encoder and decoder are considered equal, and the overall loss is defined as L = L_ce + L_ctrC + L_ctrG.
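The following is a rough, self-contained sketch of the focus identification and hard-negative generation procedure described above (overlap phrases between CHQ and FAQ as focuses; replacement by random phrases with the same chunk label). It assumes the phrases and chunk labels have already been produced by a chunker; the helper names (identify_focus, build_phrase_dict, make_hard_negatives) and the plain string handling are illustrative choices, not the authors' implementation.

```python
import random
from collections import defaultdict

def identify_focus(chq_chunks, faq_chunks):
    """chq_chunks / faq_chunks: lists of (phrase, chunk_label) pairs.
    Focus phrases are those appearing in both the CHQ and the FAQ."""
    chq_phrases = {p.lower() for p, _ in chq_chunks}
    return [p for p, _ in faq_chunks if p.lower() in chq_phrases]

def build_phrase_dict(all_faq_chunks):
    """Group every FAQ phrase in the training set by its chunk label."""
    by_label = defaultdict(list)
    for phrase, label in all_faq_chunks:
        by_label[label].append(phrase)
    return by_label

def make_hard_negatives(faq_chunks, focuses, phrase_dict, n_h=64):
    """Replace each focus phrase with a random same-label phrase, n_h times."""
    focus_set = {f.lower() for f in focuses}
    negatives = []
    for _ in range(n_h):
        pieces = []
        for phrase, label in faq_chunks:
            if phrase.lower() in focus_set and phrase_dict.get(label):
                pieces.append(random.choice(phrase_dict[label]))
            else:
                pieces.append(phrase)
        negatives.append(" ".join(pieces))
    return negatives
```

For example, with faq_chunks = [("what are", "VP"), ("the treatments", "NP"), ("for", "PP"), ("breast cancer", "NP")] and "breast cancer" identified as the focus, each hard negative keeps the sentence frame but swaps in a different NP such as "diabetes", mirroring the example in Figure 2.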
4 Experiments Datasets We conduct experiments on three English benchmark medical question summarization datasets: MeqSum, HealthcareMagic and iCliniq. MeqSum is a high-quality dataset from NIH, constructed by Ben Abacha and Demner-Fushman (2019). Mrini et al. (2021a) extracted the HealthCareMagic and iCliniq datasets from MedDialog (Zeng et al., 2020), which are collected automatically from online healthcare service platforms. MeqSum's and HealthcareMagic's summaries are written by medical experts in a formal style, while iCliniq's are patient-written. We list some statistics of these datasets in Table 2. Following previous works, we adopt ROUGE (Lin, 2004) as the evaluation metric. Training Details We utilize BART-large (Lewis et al., 2020) from HuggingFace as our pre-trained model. The learning rate of the BART baseline is set to 3e-5, the same as Mrini et al. (2021b). For contrastive learning in QFCL, the learning rate is optimized to 1e-5. The betas of the Adam optimizer are set to 0.9 and 0.999. The batch size is set to 16. The number of hard negative samples N_h is set to 64. For MoCo, the queue size K is set to 4096, the temperature τ is 0.07, and the momentum coefficient m is 0.999. In Equation 5, α and β are set to 1 and 0.5 respectively through grid search on the MeqSum development set. Experiments were all performed on a single NVIDIA RTX 3090 GPU. The average runtimes of each epoch for MeqSum, iCliniq and HealthcareMagic are 4.2 h, 0.6 h and 0.1 h respectively. Overall Performance We report our experimental results in Table 3. Our model achieves new state-of-the-art results on all three datasets. Compared with the previous best results, we obtain an improvement of 0.99 ROUGE-L on MeqSum, 8.44 on iCliniq, and 0.51 on HealthcareMagic. MTL+Data augmentation (Mrini et al., 2021b), which utilizes question entailment data to augment the summarization data, obtains the previous state-of-the-art results on iCliniq and HealthcareMagic. In contrast, our method does not need other classification models or external data. ProphetNet+QTR+QFR (Yadav et al., 2021a), a reinforcement learning-based framework with question-aware rewards, obtains the previous best result on MeqSum. Compared with this competitive model, our method obtains consistently better performance on all metrics, with improvements of 2.28 on R1, 4.66 on R2 and 0.89 on RL. We did not compare with the results of (Yadav et al., 2021a) on the other two datasets, since their method requires manually labeled question focuses and question types. Ablation Study We perform an ablation study to evaluate the impact of the different components of QFCL, and report the results in Table 3. In particular, for the MeqSum dataset, whose small size may make training unstable, we conducted five separate experiments and computed the average ROUGE score of these five checkpoints as the final result. Compared with the base BART model, we obtain an absolute improvement of 5.33 points on average. A t-test on these five ROUGE scores gives a p-value of less than 1e-2, validating that this improvement is significant. On iCliniq the absolute improvement is 12.85 points and on HealthcareMagic 3.81 points. In comparison to BART, the relative improvements of our model are 12.2%, 28.7% and 9.6% on MeqSum, iCliniq and HealthcareMagic respectively.
The results demonstrate that each component of our model is helpful. On MeqSum, there is an increase of 3.15 points for BART+S compared to the baseline, indicating that contrastive learning on simple negative samples largely improves model performance. There is a further increase of 0.77 points for BART+S+H, and the highest ROUGE-L score is obtained when all three parts are implemented in our model. This suggests that each component of QFCL contributes positively, and that metrics like ROUGE, which evaluate the similarity between whole sentences, benefit from our contrastive learning strategy. Human Evaluation To quantitatively assess the results, we compare our method with the baseline BART through human judgement. We randomly selected 50 samples from each of the three datasets, and hired 3 graduate students to categorize each generated summary into one of the following categories: 'Incorrect', 'Acceptable', and 'Perfect'. We compute the average number of each category, and report the result in Table 4. The average Spearman correlation coefficient between the annotators was also computed to check their agreement. Case Study To clearly show the output question summaries, we list two samples comparing our model with BART in Table 5. In Case 1, BART captures the question focus "Ampicillin" but misses "drink alcohol", and in Case 2 it misses the question focus "breast milk". In contrast, our model successfully extracts multiple question focuses from the lengthy CHQ, and generates summaries which conform better to the meaning of the original questions. Correlation of Sentence Representations Since the auxiliary structures are discarded at the inference stage, we make a further analysis to check whether the retained model has the ability to distinguish different sentence-level semantics when facing unknown data. We train QFCL and BART on the training set for 20 epochs and save each checkpoint, and evaluate these checkpoints on the development set. Four types of sentence representations are extracted from these checkpoints: the CHQ's representation R_c, the FAQ's representation R_f, the hard negatives' representation R_h, and the generated summary's representation at the decoder end, R_g. We then calculate the cosine similarities between them, and plot the relationship between these similarity scores and the epoch numbers, as shown in Figure 4. Regarding the anchor CHQ in the iCliniq curves, s+_c_faq, s−_c_sim and s−_c_hard are very close to each other at epoch 0, suggesting that the initial encoder lacks the ability to capture different semantics. With the increase of training steps, s+_c_faq changes smoothly, while s−_c_sim decreases sharply to near zero and s−_c_hard decreases gradually and converges at a middle level between s+_c_faq and s−_c_sim. Figure 4: Correlation between sentence representation similarities and epoch numbers on the dev set. The red lines concern the anchor CHQ: s+_c_faq is the average cosine similarity between the CHQ and its related FAQ, s−_c_sim is between the CHQ and simple negative samples (other FAQs), and s−_c_hard is between the CHQ and hard negative samples. The green lines concern the anchor of the generated summary: s+_g_faq is the average cosine similarity between the generated summary and the FAQ, s−_g_sim is between the generated summary and simple negatives, and s−_g_hard is between the generated summary and hard negatives. An epoch number of 0 denotes the initial pre-trained model.
This suggests that, powered by contrastive learning, our model has learned to distinguish sentences of different meanings at the encoder end. With the generated summary as the other anchor, we find that s+_g_faq, s−_g_sim and s−_g_hard are all near 0 initially, which shows that the decoder is also weak at representing sentence-level semantics. After training, s+_g_faq increases significantly, s−_g_hard converges between s+_g_faq and s−_g_sim, and s−_g_sim stays very low all the time. This suggests that the decoder has strengthened its ability to distinguish different semantics, just as the encoder has. Another chart is drawn in Figure 4 to show this relationship for the BART baseline. The similarities between the anchor and the positive and negative samples are very close, and never improve significantly as training progresses. This suggests that the BART baseline is relatively weaker at distinguishing sentences of different meanings at both the encoder and the decoder, since it only focuses on the prediction of next tokens. We also draw these correlation curves on MeqSum and HealthcareMagic. The curves of HealthcareMagic are similar to those of iCliniq. On MeqSum, our model can still distinguish sentences with different semantics better than the baseline, but the signal is not as significant as on iCliniq or HealthcareMagic due to the limited size of the training set. Capturing Question Focus To study whether our model pays more attention to the question focus, we evaluate the accuracy of question focuses in the generated summaries. We use the sequence labeling model trained by Yadav et al. (2021a) to predict question focuses on the MeqSum dataset, and regard the 812 predicted question focuses in the test set as the gold standard. For QFCL and BART, we train five checkpoints each, generate summaries with these checkpoints, and compute the accuracy of question focuses on the test set. As shown in Table 6 (Table 6: Accuracy of question focuses in generated summaries; C1-C5 denote the 5 different checkpoints trained by each model), the average accuracy is 37.09% for BART and 45.44% for QFCL. Our model exceeds the baseline by 8.35 points for question focus generation. The p-value of a t-test on these two sets of results is 1.04e-3, indicating that this improvement is statistically significant. Conclusion In this paper, we introduce a novel question focus-based contrastive learning framework, QFCL, for medical question summarization. In the proposed model, we adopt a "double anchor" strategy, considering both the input question CHQ and the generated summary as comparison anchors, and we exploit a "hard negatives generator" to generate hard negative samples based on the question focus. Our model significantly improves the performance on three medical question summarization datasets, and achieves new state-of-the-art results. In the future, we would like to find a more effective way to do question focus recognition.
2022-09-02T06:42:24.908Z
2022-09-01T00:00:00.000
{ "year": 2022, "sha1": "249d35ef4e5e017095975a3fb9e7dca5e365a1e7", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "c12e05afdc4db6d59f0458e130c94ceff2d94f6c", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
11334101
pes2o/s2orc
v3-fos-license
Photomapping Using an Aerial Vehicle Creating a photomap plays a critical role in navigation. Therefore, flying vehicles are usually used to create top-down maps of the environment. In this report we used two different aerial vehicles to create a map in a simulated environment. Introduction to USARSim (Unified System for Automation and Robot Simulation) USARSim is a high fidelity simulation of urban search and rescue (USAR) robots and environments intended as a research tool for the study of human-robot interaction (HRI) and multirobot coordination. USARSim is designed as a simulation companion to the National Institute of Standards and Technology (NIST) reference test facility for autonomous mobile robots for urban search and rescue [1]. As time passed, USARSim has been used for a broader range of robot simulations, and the acronym was converted to Unified System for Automation and Robot Simulation; but it is still USARSim. Aerial vehicles in this environment play a very important role [2,6]. They act as the eyes of a bird, flying in the sky and informing the other robots about the environment. The better the pictures they take, the better the other robots will cooperate. On the other hand, two main questions arise. The first is how to merge these pictures into a general map. The second is what to do with this map. In the following parts we will talk about these issues. There are two aerial vehicles implemented in USARSim, a blimp named Passarola and a quadcopter named Airobot, as shown in Figure 1. Both of these vehicles are able to carry a camera to take pictures. The best and easiest way of using a sequence of pictures is to create a photomap from them. This photomap would be very useful for localization purposes. Creating such a map is not easy, and it takes a long time to develop an algorithm to handle it. There are several image processing algorithms used for registering a sequence of images in order to create a photomap. One of these algorithms, which uses the Fourier-Mellin Transform and has been developed by Dr. Bulow, is used in our system. Even if there is such an algorithm, we still have some issues to address. One of the challenging problems in building a photomap is having a decent sequence of images that an image processing algorithm can handle [2,5]. One of these problems is the movement of the camera. Depending on the movements of the camera, the image processing algorithm has to do some scaling, rotation or translation. The smaller the changes, the better the final map will be. Intrinsically, Passarola has a large amount of inertia that makes its movements smoother. That makes it move without abrupt changes in altitude or position; in other words, its movements along the X, Y, and Z axes will be smoother. Thus we chose Passarola for carrying the camera. In the following parts of this report we will explain what we have done in detail. What we have done with the source code There is an implementation of FMI in Matlab and another in C++ [6,7]. The C++ one was in the experiments folder of the Jacobotics source code and was basically developed to work with the Philips webcam in order to test some parts of the FMI algorithm. Since that camera was a real camera and there are some issues regarding calibration, they had to be considered during photomapping. When we are working in a simulated area, no calibration is required [8]. The first task we did was to change the FMI source code in C++. So we added a new parameter to its constructor under the name of calibration.
If it is set in the constructor, calibration is performed; otherwise it is skipped. The second task was to move the FMI source code from the experiments folder to the library folder and to set up the paths and links, so that everybody can use it easily. At the moment it is in the lib/mapping/FMI folder. Then we configured a blimp to have a down-looking camera and some other things like propellers. We added two important sections to the Jacobotics source code. One of them is an actuator to drive our blimp, and the other is an image grabber to take pictures and create a photomap from them. The actuator is inside /lib/virtual/actuators/PassarolaDrive. To use this package, one creates an object of this type and calls the setMotorSpeeds method. It receives three parameters and returns nothing. This method sets the motor speeds of the propellers. The parameters are xZAngle, thrustPropeller and tailPropeller. xZAngle (float) is the rotation angle of the support bars of the thrust motors, which makes it possible to change the altitude of the robot (i.e., up/down); the value is the absolute rotation angle, in radians. thrustPropeller (float) is the magnitude of the velocity vector to be applied by the front thrusters, to move the robot in the XOZ plane (i.e., forward/backward and up/down, depending on the value of xZAngle); the value is the absolute linear velocity, in meters per second. tailPropeller (float) is the rotational velocity (i.e., left/right); the value is the absolute rotational velocity, in radians per second. The other section that we added is PassarolaImageGrabber. It is placed in lib/virtual/autonomy/PassarolaImageGrabber. This part is responsible for getting pictures from the image server and storing them in files. These files can be used for testing the FMI algorithm in offline mode, and this feature can easily be disabled; since we are in the phase of testing the algorithm, we preferred to keep it enabled. The next important part of this file applies FMI to the sequence of images captured from the camera. Our FMI implementation works only on pictures in a specific format. Since the image server delivers only color, rectangular images, we have to convert them to grayscale and crop them to a square shape. For this task we used the GraphicsMagick library, so to use this part the user has to install that package first. After cropping and converting the picture sequence, we applied the FMI algorithm to it. The output of the FMI algorithm is X, Y, Rotation and Scale. Together with the new image, these variables are enough to build up the whole photomap. In order to create the photomap, we put the new picture in the appropriate place in our GUI. We developed our own GUI using OpenCV to present the output of the algorithm in online mode. One of the outputs is shown in Figure 2, and the real environment to which we applied our algorithm is shown in Figure 3. One of the problems of using OpenCV is that it needs a fixed size for the photomap; since we are creating the map over time, it keeps growing and cannot have a fixed size. There are several ways of handling this, but they would all have to be implemented from scratch, so it would be easier to use the original GUI. The idea was to send these variables and images to the original GUI over the Wireless Simulation Server, to be captured by that GUI. We talked with that team, and Ravi said they are going to change the Image message, so it is currently not possible to send our data to their GUI.
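To make the compositing step described above concrete, the following is an illustrative sketch in Python with OpenCV rather than the project's actual C++ code; the canvas size, file name, and the particular x, y, rotation and scale values are made-up placeholders standing in for the output of the FMI registration.

```python
import cv2
import numpy as np

def paste_frame(photomap, frame, x, y, angle_deg, scale):
    """Warp a registered grayscale frame by (rotation, scale) and blend it
    into the photomap at offset (x, y); all four values come from registration."""
    h, w = frame.shape[:2]
    # Rotate and scale around the frame centre, then shift to the map position.
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, scale)
    M[0, 2] += x
    M[1, 2] += y
    warped = cv2.warpAffine(frame, M, (photomap.shape[1], photomap.shape[0]))
    # Keep existing map pixels wherever the new frame contributes nothing.
    mask = warped > 0
    photomap[mask] = warped[mask]
    return photomap

# Fixed-size canvas (the OpenCV limitation discussed above), grayscale.
photomap = np.zeros((2000, 2000), dtype=np.uint8)
frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
if frame is not None:
    # x, y, angle_deg and scale would come from the FMI registration of this frame.
    photomap = paste_frame(photomap, frame, x=950, y=980, angle_deg=3.5, scale=1.02)
    cv2.imwrite("photomap.png", photomap)
```

A full implementation would additionally blend overlapping regions and grow the canvas when a frame falls outside it, which is exactly the fixed-size limitation discussed above.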
The other option for the GUI integration is not to use the WSS server, but to reuse some parts of their source code and develop our own QT object in order to integrate it with their GUI. However, many things would have to be changed, because their GUI slows down as the number of images increases, so it is not usable as-is either, and this problem of the GUI would have to be fixed first. By default, QT repaints everything on its canvas; it does not care which part has recently been changed. This forces a high workload onto the CPU and memory of the computer and slows down the overall progress. The idea is to render and repaint only the newly added part of the photomap. Since this was suggested very late, only a few days ago, we have not had enough time to do it yet, but we are trying to do it as soon as possible. Conclusion Having a general map of a disaster situation is very useful in a cooperative multirobot system. In this report we explained our approach for integrating the FMI algorithm into the Jacobotics source code, in order to obtain a growing map of the environment in a disaster situation. We showed that the job was done pretty well, but there are still some tasks to be done in the near future. The most important one is to move from the OpenCV GUI to a QT GUI.
2014-11-10T11:45:39.000Z
2014-10-01T00:00:00.000
{ "year": 2014, "sha1": "e9b54a9db36f039f6e0c347ada66709622791518", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "e9b54a9db36f039f6e0c347ada66709622791518", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science", "Computer Science" ] }
235303014
pes2o/s2orc
v3-fos-license
Association between Interferon-Lambda-3 rs12979860, TLL1 rs17047200 and DDR1 rs4618569 Variant Polymorphisms with the Course and Outcome of SARS-CoV-2 Patients Background: Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection provides a critical host-immunological challenge. Aim: We explore the effect of host-genetic variation in interferon-lambda-3 rs12979860, Tolloid Like–1 (TLL1) rs17047200 and Discoidin domain receptor 1(DDR1) rs4618569 on host response to respiratory viral infections and disease severity that may probe the mechanistic approach of allelic variation in virus-induced inflammatory responses. Methods: 141 COVID-19 positive patients and 100 healthy controls were tested for interferon-lambda-3 rs12979860, TLL1 rs17047200 and DDR1 rs4618569 polymorphism by TaqMan probe-based genotyping. Different genotypes were assessed regarding the COVID-19 severity and prognosis. Results: There were statistically significant differences between the studied cases and control group with regard to the presence of comorbidities, total leucocytic count, lymphocytic count, CRP, serum LDH, ferritin and D-dimer (p < 0.01). The CC genotype of rs12979860 cytokine, the AA genotype of TLL1 rs17047200 and the AA genotype of the rs4618569 variant of DDR1 showed a higher incidence of COVID-19 compared to the others. There were significant differences between the rs4618569 variant of DDR and the outcome of the disease, with the highest mortality in AG genotype 29 (60.4%) in comparison to 16 (33.3%) and 3 (6.2%) in the AA and GG genotypes, respectively (p = 0.007*), suggesting that the A allele is associated with a poor outcome in the disease. Conclusion: Among people who carry C and A alleles of SNPs IFN-λ rs12979860 and TLL1 rs17047200, respectively, the AG genotype of the DDR1 rs4618569 variant is correlated with a COVID-19 poor outcome. In those patients, the use of anti-IFN-λ 3, TLL1 and DDR1 therapy may be promising for personalized translational clinical practice. Introduction Most cases with severe acute respiratory syndrome coronavirus 2 (The SARS-CoV-2) infection demonstrate mild symptoms or are asymptomatic, but some cases may develop complications such as interstitial pneumonia and acute respiratory distress syndrome (ARDS), as seen in patients with advanced age and associated morbidities [1]. The relation between SNPs and the immune response is called "immunogenetic profiling" [2]. Genetic polymorphism plays an important role in cytokine function, which is crucial in the host inflammatory response [3]. Cytokine polymorphisms affect gene transcription and expression, and subsequently the amount of cytokine secreted according to the genotype of the cytokine [4,5]. The cytokine polymorphisms could be associated with rheumatoid arthritis, ankylosing spondylitis and systemic lupus erythematosus [6]. Knowing that cytokines are crucial regulators of the individual response to infections [7], it is becoming clear that the immune system has an evident role in the cytokine storm as dangerous events in the disease known as ARDS [8,9]. Understanding the variables between subjects that could lead to this outcome is the cornerstone for identifying targeted therapeutic strategies [10]. TLL1 rs17047200 SNP may produce catalytically highly active short isoform (TLL1 isoform 2) [16]. Of note, we evaluated a SNP of Tolloid Like-1 (TLL-1) as a complementary activating protease and also as potentially able to stimulate the spike protein of SARS-CoV-2 [17]. 
Discoidin domain receptor 1 (DDR1) is a tyrosine kinase receptor that plays a role in b1 integrins signaling [18]. DDR1 activation by collagen affects human leukocytes' functions as cell differentiation and cytokine production [19]. DDR1 modulates E-cadherin and integrin CCN3-dependent cell adhesion to collagen type IV [20]. We focused on cytokine polymorphisms at rs12979860 and rs17047200 loci, the allele frequencies reported in previous literatures and DDR1 gene polymorphism in COVID-19 positive patients to explore their relation to coronavirus infection severity and mortality. The goal of the study is to assess the association between rs12979860, TLL1 rs17047200 and DDR1 rs4618569 polymorphism and coronavirus disease of 2019 (COVID-19) outcomes, because the earlier prediction of the host genotype and the severity of the disease will alleviate the economic burden or mortality rate of COVID-19. Study Design and Demographic Data This is a case-control study that was performed on blood samples from patients that were admitted to the pediatric, ENT, chest and internal medicine department at Ain Shams university hospitals. We recruited 141 COVID-19 positive patients, including 23 (16%) from pediatric patients, 87 (61.7%) from the chest department and 31 (22%) from the geriatric department, and 100 healthy controls coming for a routine check-up visit during the period from May 2020 to March 2021. COVID-19 diagnosis depends on the characteristic clinical and computed tomography (CT) scan of the chest and confirmed by the positive qRT-PCR for COVID-19 in a nasopharyngeal swab. COVID-19 was classified as 82 mild cases (58%), 11 severe cases (7.8%) and 48 critical cases (34%). Classification of COVID-19 patients' severity was according to the Egyptian MOH protocol, version 1.4 [21] into mild, moderate, severe or critical groups. Patients with mild cases are either asymptomatic or display leucopenia or lymphopenia and without pneumonia in a CT image. Severe cases had a respiratory rate > 30 breaths/min or a PaO 2 /FiO 2 ratio < 300, SpO 2 ≤ 92% in room air or a CT showing higher than 50% progressive lesion within 1-2 days. Critical cases had a PaO 2 /FiO 2 ratio < 300 despite O2 therapy or SpO 2 ≤ 92% in room air or a respiratory rate >30 breaths/min. A complete history was taken from each patient focusing on (i) history of comorbidities as diabetes, hypertension, asthma or combined; (ii) severity of the disease (whether ventilated or not). A complete blood count-D-dimer, serum ferritin, C-reactive protein (CRP), lactate dehydrogenase (LDH)-was performed on all participants at the time of diagnosis of SARS-CoV-2 infection. Patient nasopharyngeal swabs were collected for viral RNA isolation followed by purification using the QIAamp Viral RNA Mini kit (Cat no. 52906; Qiagen, Tokyo, Japan) based on the manufacturer's guidelines. Extraction of Total RNA from Whole Blood Samples SNPs genotyping was carried out on the extracted RNA. We used the QIAamp RNA Blood Mini kit (Qiagen, Valencia, CA, USA) to purify total RNA from the whole blood samples based on the manufacturer's guidelines. We assessed the RNA concentration and integrity using the Qubit 3.0fluorometer (Invitrogen, serial no. 2321609092). Then, the extracted total RNA was reverse-transcribed into cDNA by the high-capacity cDNA Reverse Transcription kit (A. 
B., Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's protocol on the Thermo Hybaid polymerase chain reaction PCR express (Thermo Scientific, Missouri City, TX, USA). Endogenous control genes were used to normalize the raw data of the samples followed by comparing the results to a reference sample. In this study, MIQE guidelines were followed to avoid any experimental error during extraction and RNA processing. Statistical Analysis Statistical social packages of SPSS software version 20 were used. Qualitative data were described using the number and percent. Quantitative data were analyzed using the median for non-parametric data and the mean ± SD for parametric data. The statistical significance was considered at the (0.05) level. Qualitative data were analyzed using the Chi-Square test. The statistical relevance of the relationship between genotypes and risk of COVID-19 was expressed by an odds ratio (OR) with its 95% confidence interval (95% CI). The Hardy-Weinberg equilibrium was applied to compare the genotype distribution among the studied groups. Spearman Rank Correlation tests were assessed between different genotypes. Patient Sociodemographic and Clinicopathological Features No statistically significant differences were found between COVID-19 positive patients and controls regarding age or sex with p = 0.252 and 0.201, respectively. The mean age of the patients was 37.42 ± 20.51, and females represent 43.2% of cases. Comorbidities (including diabetes mellitus, hypertension, chronic pulmonary disease, renal disease and cardiac disease) were found in 57.4% of cases and 18% of controls. There was a significant difference between the COVID-19 patients and the healthy control group with regard to the presence of comorbidities, total leucocytic count, lymphocytic count, CRP, serum LDH, ferritin and D-dimer (p < 0.01) ( Table 1). With regard to TLL1 rs17047200, 94 (66.7%) patients were AA, 37 (26.2%) were AT and 10 (7.1%) were TT. The healthy control group was represented as 66 (66%) AA, 26 (26%) AT and 8 (8%) TT (Table 2). From these data, it appears that the AA genotype of the TLL1 rs17047200 cytokine showed a higher incidence of the disease in comparison to the control group, with p = 0.012. With regard to DDR1 rs4618569, 66 (46.8%) patients were AA, 56 (39.7%) were AG and 19 (13.5%) were GG. With regard to the control group, 62 (62%) of them were AA, 24 (24%) were AG and 14 (14%) were GG ( Table 2). From these data, it appears that the AA genotype of the rs4618569 variant of the DDR1 gene had a higher incidence of COVID-19 compared to the healthy control group (p = 0.026) ( Table 2). We applied the Hardy-Weinberg equation to determine whether that the frequency of each genotype obtained agrees with expected values, as calculated from allele frequencies. Its value was 0.45 for cases and 0.83 for controls with regard to the first SNP, and 0.034 for cases and 0.035 for controls with regard to the TLL1 rs17047200SNP and 0.25 for cases and 0.00039 for controls with regard to the DDR1 rs4618569SNP. Haplotype analysis was done with a global haplotype association, p-value = 0.56 (Table 2). Of note, there were no statistically significant differences between the different genotypes regarding the demographic and laboratory features. With regard to the IFN-λ rs12979860SNP, the TC genotype was associated with the presence of comorbidities and an increased mortality rate compared to the other genotypes. 
With regard to the TLL1 rs17047200 SNP, the AA genotype was associated with an increased risk for comorbidities and an increased risk for ventilation, and a decreased total leucocytic count was associated with disease severity with increased mortality rate; a significant difference was found between the different genotypes of SNP 2 (rs17047200) and CRP and the severity of the disease (p = 0.01, p = 0.02, respectively). With regard to the DDR1 rs4618569SNP, the AG genotype was associated with an increased risk of comorbidities and an increased risk of ventilation, high CRP, Ferritin, D-dimer, and disease severity with increased mortality rate (Supplementary Table S1). Interestingly, regarding the IFN-λ rs12979860 SNP (rs12979860), 27 (45.8%) cases of the TC genotypes were classified as a severe disease compared to 22 (34.9%) and 10 (52.6%) cases in the CC and TT genotype, respectively, with p = 0.283. With regard to the TLL1 rs17047200 SNP, in 36 (38.3%) cases of the AA genotype, the disease was severe in comparison to 16 (43.2%) cases in the AT genotype and 7 (70%) cases in the TT genotype, with p = 0.152. With regard to the DDR1 rs4618569SNP, in 34 (60.7%) cases of the AG genotype, the disease was severe in comparison to 20 (30.3%) cases in the AA genotype and 5 (26.3%) cases in the GG genotype, with p < 0.01* (Supplementary Table S1). Genotypes and Outcome of the Disease There were significant differences between the DDR1 rs4618569SNP and the outcome of the disease, with the highest mortality in the AG genotype: 29 (60.4%) in comparison to 16 (33.3%) and 3 (6.2%) in the AA and GG genotypes, respectively (p = 0.007*), suggesting that the A allele is associated with a poor outcome in the disease. With regard to the IFN-λ rs12979860 SNP, the poor outcome is associated with the C allele and with the A allele in the TLL1 rs17047200 SNP (Table 3). Discussion The COVID-19 crisis represents a worldwide health problem. By now, the global number of confirmed cases of COVID-19 reached 108.2 million cases with a mortality of 2.3 million cases. In Egypt, they reached 177,543 confirmed cases and 10,298 deaths [22]. Developing countries face a critical financial problem with their limited resources that highlights the urgent need for the identification of high-risk patients who develop complications to receive timely and effective therapeutic regimens [23]. A study by Ellinghaus et al. has tested for association between >8 million single nucleotide polymorphisms (SNPs) and the development of respiratory failure in COVID-19 patients [24]. Therefore, understanding crucial players in the cytokine storm-related mortality in COVİD-19 is very important [25]. The work aimed to study the association of rs12979860, rs17047200 cytokines polymorphism and DDR1 gene variant rs4618569 with the progression and outcome in SARS-CoV-2 patients. TaqMan based genotyping revealed that the frequency of the IFN-λ 3 SNP (rs12979860) showed that the CC genotype is more expressed in COVID-19 patients versus healthy controls (p = 0.011). Nineteen (13.5%) patients were TT, 59 (41.8%) were TC and 63 (44.7%) were CC versus 12 (12%), 44 (44%) and 44 (44%) in the healthy control group, respectively. These data suggest that persons who had the C allele (TC and CC) are at a higher risk to develop COVID-19. While the AA genotype of the TLL1 rs17047200 had higher expression in patients versus healthy controls, with p = 0.012. 
Ninety-four (66.7%) patients were AA, 37 (26.2%) were AT and 10 (7.1%) were TT versus 66 (66%), 26 (26%) and 8 (8%) in the control group, respectively. These results demonstrate that persons carrying the A allele (AA and AT) are at higher risk of the disease. Concerning the AA genotype of the DDR1 rs4618569 variant, it is more likely to be found in COVID-19 patients versus healthy controls, with p = 5.422. Sixty-six (46.8%) patients were AA, 56 (39.7%) were AG and 19 (13.5%) were GG versus 62 (62%), 24 (24%) and 14 (14%) in the healthy control group, respectively. These results show that persons with the A allele, either AA or AG, are more susceptible to the disease. In this COVID-19 global pandemic, we aim to identify host genomic factors that increase susceptibility to and the complications of such a viral infection, and to translate these results in a timely manner to enhance patient care [26]. In our study, all patients showed higher levels of TLC, CRP, D-dimer, serum ferritin and LDH, as well as lymphopenia, in agreement with Zhu et al., who found higher levels of these parameters in patients with COVID-19 infection [27]. Comparing the different genotypes with regard to disease severity and prognosis, the TC genotype of the IFN-λ 3 SNP (rs12979860), the AA genotype of TLL1 rs17047200 and the AG genotype of the DDR1 rs4618569 variant are associated with more severe symptoms and less favorable outcomes compared to the other genotypes. In the TC and CC genotypes of the IFN-λ 3 SNP (rs12979860), the disease was severe in 27 (45.8%) and 22 (34.9%) of cases, respectively, in comparison to 10 (52.6%) cases in the TT genotype, p = 0.283. Mechanical ventilation was used in 24 cases of the TC genotype and 21 cases of the CC genotype compared to 9 cases in the TT genotype. In the AA genotype of TLL1 rs17047200, COVID-19 was severe in 36 (38.3%) cases compared to 16 (43.2%) cases in the TA genotype and 7 (70%) cases in the TT genotype, with p = 0.152. Mechanical ventilation was required in 31 cases of the AA genotype compared to 16 cases in the TA genotype and 7 cases in the TT genotype (p = 0.056). In the AG genotype of the DDR1 rs4618569 variant, the disease was severe in 34 (60.7%) cases compared to 20 (30.3%) cases in the AA genotype and 5 (26.3%) cases in the GG genotype (p = 0.001*). Mechanical ventilation was required in 32 cases of the AG genotype in comparison to 17 cases in the AA genotype and 5 cases in the GG genotype (p = 0.001*). These data revealed that the C allele of the IFN-λ 3 SNP (rs12979860), the A allele of TLL1 rs17047200 and the A allele of the DDR1 rs4618569 variant are associated with more aggressive disease, and this may be correlated with changes in the blood levels of the products of these loci. The rs12979860 SNP lies in the gene region encoding the IFN-λ family of cytokines [28], which share in the regulation of the host immune response against viral infections [29]. Many associations have been found between the interferon lambda 3 rs12979860 polymorphism and various clinical outcomes, such as control of hepatitis C virus infection [30], myeloproliferative neoplasms, dengue virus infection in children and systemic lupus erythematosus [31]. In a GWAS, a research group reported an association between the TLL1 rs17047200 SNP and hepatocellular carcinoma pathogenesis and found elevated TLL1 mRNA in animal models of liver injury and in human liver tissues with fibrosis, in comparison with controls [32].
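The kind of case-control arithmetic summarized above can be reproduced in a few lines. The sketch below is illustrative only (the study itself used SPSS version 20, not Python): it runs the genotype-level chi-square association test, an allele-level odds ratio with a 95% confidence interval, and a Hardy-Weinberg goodness-of-fit check, using the DDR1 rs4618569 counts reported above as input.

```python
import numpy as np
from scipy.stats import chi2_contingency, chisquare

# DDR1 rs4618569 genotype counts reported above: cases (AA, AG, GG) vs controls.
cases    = np.array([66, 56, 19])
controls = np.array([62, 24, 14])

# Genotype-level association (2x3 contingency table, chi-square test).
chi2, p_geno, dof, _ = chi2_contingency(np.vstack([cases, controls]))
print(f"genotype association: chi2={chi2:.2f}, dof={dof}, p={p_geno:.4f}")

# Allele-level odds ratio (A vs G) with a 95% CI via the log-OR standard error.
a_case, g_case = 2 * cases[0] + cases[1], 2 * cases[2] + cases[1]
a_ctrl, g_ctrl = 2 * controls[0] + controls[1], 2 * controls[2] + controls[1]
or_ = (a_case * g_ctrl) / (g_case * a_ctrl)
se = np.sqrt(1 / a_case + 1 / g_case + 1 / a_ctrl + 1 / g_ctrl)
lo, hi = np.exp(np.log(or_) - 1.96 * se), np.exp(np.log(or_) + 1.96 * se)
print(f"allele OR (A vs G) = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")

# Hardy-Weinberg check in controls: expected genotype counts from the A-allele frequency.
n = controls.sum()
p_a = (2 * controls[0] + controls[1]) / (2 * n)
expected = n * np.array([p_a**2, 2 * p_a * (1 - p_a), (1 - p_a)**2])
hwe_chi2, hwe_p = chisquare(controls, expected, ddof=1)  # 1 df for a biallelic SNP
print(f"HWE in controls: chi2={hwe_chi2:.2f}, p={hwe_p:.4f}")
```

The exact values will differ from the published ones depending on the test variant and rounding, so the snippet should be read as a template rather than a re-analysis.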
A previous study highlighted the association between rs4618569 of DDR1 and vitiligo development [33]; the DDR1 gene seems to be an important player in immune responses, which depend on the effective migration of activated leukocytes into infectious or inflammatory tissue sites [34]. Of interest, regarding the prognosis and survival of COVID-19, the TC, AA and AG genotypes of IFN-λ 3 rs12979860, TLL1 rs17047200 and the DDR1 rs4618569 variant, respectively, showed a poor prognosis, with 23, 29 and 29 deaths, respectively, compared with fewer deaths in the other genotypes. Our data showed an increased risk of death from COVID-19 in patients with advanced age, in agreement with Wang et al., who found that elderly males showed a poor prognosis and an increased risk of COVID-19 severity [35]; this could be explained by a decline in immune function with aging and a decrease in the production of CD3+ T cells and B lymphocytes, along with elevated regulatory T cells [36,37]. Another study confirmed that the poor outcome in elderly patients with SARS-CoV-2 was due to immunosenescence [38]. We suggest larger multicenter studies to explore the validity of these polymorphisms in COVID-19 patients. The study may be limited by the number of patients and by being a single-center study. Conclusions The C and A alleles of the SNPs IFN-λ rs12979860 and TLL1 rs17047200 correlate with the severity of COVID-19, while the AG genotype of the DDR1 rs4618569 variant is associated with severe COVID-19 cases and poor outcomes. These data suggest that innate immunity is closely linked with the outcome of SARS-CoV-2 infection. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/genes12060830/s1, Table S1: Demographic and laboratory characters of COVID-19 cases according to genotype distribution, Table S2: Correlation between the frequencies of polymorphisms, severity and mortality rates of COVID-19 cases (Spearman's rho). Funding: This study was funded by Ain Shams University, School of Medicine, 2020-1. Institutional Review Board Statement: The study was conducted in accordance with the guidelines of the Declaration of Helsinki and received approval from the Research Ethics Committee, Faculty of Medicine, Ain Shams University, Egypt, dated 13/5/2020, FWA 000016584. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The data presented are available on request from the corresponding authors. Conflicts of Interest: The authors have no conflicts of interest to disclose.
2021-06-03T06:17:21.983Z
2021-05-28T00:00:00.000
{ "year": 2021, "sha1": "9efd609a0814508d4328cf108eeac739fd5c2b9d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4425/12/6/830/pdf", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "922a1ece5addad0401962afde7a7f752c477e4ed", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
247189796
pes2o/s2orc
v3-fos-license
Cardiac arrest in spontaneous subarachnoid hemorrhage and associated outcomes OBJECTIVE The authors sought to analyze a large, publicly available, nationwide hospital database to further elucidate the impact of cardiopulmonary arrest (CA) in association with subarachnoid hemorrhage (SAH) on the short-term outcomes of mortality and discharge disposition. METHODS This retrospective cohort study was conducted by analyzing de-identified data from the National (Nationwide) Inpatient Sample (NIS). The publicly available NIS database represents a 20% stratified sample of all discharges and is powered to estimate 95% of all inpatient care delivered across hospitals in the US. A total of 170,869 patients were identified as having been hospitalized due to nontraumatic SAH from 2008 to 2014. RESULTS A total of 5415 patients (3.2%) were hospitalized with an admission diagnosis of CA in association with SAH. Independent risk factors for CA included a higher Charlson Comorbidity Index score, hospitalization in a small or non-teaching hospital, and a Medicaid or self-pay payor status. Compared with patients with SAH but not CA, patients with CA-SAH had a higher mean NIS Subarachnoid Severity Score (SSS) ± SD (1.67 ± 0.03 vs 1.13 ± 0.01, p < 0.0001) and a vastly higher mortality rate (82.1% vs 18.4%, p < 0.0001). In a multivariable model, age, NIS-SSS, drug use, and CA itself were predictors of mortality. Patients who experience CA in association with SAH have a poor prognosis, with studies finding that 0% to 9.1% of such patients survive to hospital discharge.8,10,11 SAH is well recognized as a cause of cardiac arrhythmia and neurogenic myocardial injury, which can manifest as sudden circulatory collapse.12 A massive surge in intracranial pressure (ICP), a resultant reduction in cerebral perfusion pressure (CPP), and primary injury to the hypothalamus and brainstem vasomotor centers are thought to trigger the cascade of events leading to CA.13-16 The catecholamine surge that results from catastrophic SAH is thought to contribute as well to myocardial dysfunction in the form of neurogenic stunned myocardium, which is known to affect 10% to 30% of patients with SAH.17,18 By contrast, CA due to ventricular fibrillation, or a shockable rhythm, appears to be rare in the acute stage of SAH.7,13,19 In those patients with underlying cardiac disease as a contributing factor to CA, survival seems more likely.7,20 There is a lack of large-scale population data on the impact of CA after SAH on mortality and on the discharge dispositions of those who do survive. To date, our understanding of this clinical syndrome is primarily based on smaller cohort and database studies.6,21,22 We sought to utilize a nationwide hospital database, conducting the largest investigation of its kind, to gain a better understanding of clinical outcomes in patients with CA-SAH. Study Design We obtained de-identified data from the National (Nationwide) Inpatient Sample (NIS), a component of the Healthcare Cost and Utilization Project (HCUP), from 2008 to 2014. This publicly available database represents a 20% stratified sample of all discharges from US community hospitals, involving more than 97% of the US population.23,24 The IRB at our institution approved this study and waived the requirement for patient consent given that the data are de-identified.
Patient Population We identified patients meeting inclusion criteria using previously validated International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) codes. Patients were included if they were diagnosed with nontraumatic SAH (ICD-9-CM 430), CA with the cause unspecified (ICD-9-CM 427.5), and CA due to an underlying condition (ICD-9-CM 427.5). Patients were excluded if they were younger than 18 years or had missing demographic data. Patients were also excluded if they had a diagnosis of traumatic SAH (ICD-9-CM 852.1-852.9). Measurements The primary outcomes of this study were survival and discharge disposition in patients with SAH and CA. Secondary outcomes included risk factors for the development of CA and mortality. For patient dispositions, "other" accounts for discharges to places other than home, including discharge to a short-term rehabilitation facility, skilled nursing facility, intermediate care facility, another type of acute care facility, home healthcare, and against medical advice. We collected data on patient demographic and clinical variables; past medical history variables, including seizure disorder, alcohol use, drug use, smoking history, and hypertension; hospital complication variables, including intracerebral hemorrhage (ICH), cerebral edema, cardiogenic shock, acute respiratory distress syndrome, gastric stress ulcer, platelet and coagulation disorders, pulmonary embolism, deep venous thrombosis (DVT), meningitis or ventriculitis, pneumonia, and electrolyte disorders; and treatment variables, including decompressive hemicraniectomy and aneurysm treatment (surgical, endovascular, or none). These clinical variables were used to evaluate patient trajectories and were analyzed for their association with CA and as potential outcome predictors. SAH severity was evaluated using the NIS Subarachnoid Severity Score (NIS-SSS).25 This validated score uses clinical data to approximate disease severity. The CCI is a scoring system that quantifies the burden of comorbid conditions, with scores ranging from 0 to 33. In addition to patient-level variables, we also analyzed hospital size and teaching hospital status. Statistical Analysis The software used for performing statistical analysis was SAS 9.4 (SAS Institute Inc.). The chi-square test for categorical variables and the Student t-test were used to perform a univariate analysis on a variety of patient and hospital characteristics such as patient demographics (age, sex, race, and payor status), treatment management (clipping and coiling), teaching hospital status, hospital size, and hospital location. Because of the large number of statistical tests performed, p values < 0.05 were considered significant. Demographics and Clinical Course Using the NIS database, 170,869 patients were identified as having been hospitalized due to nontraumatic SAH from 2008 to 2014. Of those patients, 5415 (3.2%) were hospitalized for CA in association with SAH. Patients with CA-SAH were younger (p = 0.003) and less often hypertensive than those with non-CA-SAH (p < 0.0001), although they had more medical comorbidities, as captured by the CCI (p < 0.0001), and higher rates of illicit drug use and smoking (p = 0.01 and p = 0.001, respectively) (Table 1).
Hospital and Insurance Characteristics Patients with CA-SAH were significantly more likely to be hospitalized in smaller, nonteaching hospitals than patients with SAH alone (p = 0.0004 and p < 0.0001, respectively) (Table 1), and hospital region did not differ between the two groups. Patients with CA-SAH were also more likely to have Medicaid or be classified as uninsured/self-pay than patients with SAH alone (p < 0.0001). Multivariable Predictors of CA Negative predictors of CA in patients with SAH included hypertension (OR 0.733, p < 0.0001), a history of smoking or drug use (OR 0.851, p = 0.0443), and Medicare or private insurance status (OR 0.657, p < 0.0001). Positive predictors of CA in patients with SAH included a higher CCI score (OR 1.07, p = 0.0068) and small and/or nonteaching hospitals (OR 1.862, p < 0.0001) (Table 2). Complications, Treatment, and Hospital Course Patients with CA-SAH had significantly higher NIS-SSSs than patients with SAH alone (p < 0.0001), with substantially greater rates of mechanical ventilation (89% vs 36%, p < 0.0001) and coma (21% vs 7%, p < 0.0001) driving the difference in scores. Of note, some components of the NIS-SSS were less frequent in patients with CA-SAH, including hydrocephalus, ventriculostomy, aphasia, and cranial nerve deficits, although these differences were small (Table 3). Healthcare Burden and Outcomes Patients with CA-SAH had a 48% shorter hospital length of stay (LOS) compared with patients with SAH alone (mean 7.4 days vs 11.9 days, p < 0.0001), and significantly lower hospital costs (mean $41,889 vs $52,855, p < 0.0001) (Table 1). Patients with CA-SAH were significantly less likely to be discharged home (3.4% vs 44.3%, p < 0.0001) or to an acute rehabilitation facility (14.3% vs 36.7%, p < 0.0001), and were more than four times more likely to die in the hospital (82.1% vs 18.4%, p < 0.0001). Predictors of mortality in the overall SAH population included older age, higher NIS-SSS, drug use, and cardiac arrest (p < 0.0001) (Table 5). A history of hypertension, smoking, and treatment at a large or teaching hospital were protective against mortality (p < 0.0001) (Table 5). Among patients who experienced CA after SAH, the NIS-SSS was the only significant predictor of mortality (p = 0.0075). Discussion Overall, 3.2% of the SAH patient population experienced CA at onset, an incidence rate that is at the lower end of prior reports ranging from 3% to 11%.8,9 The mortality rate in our patient population among those who experienced CA due to SAH was 82.1%, more than four times higher than the 18.4% mortality rate of patients with SAH without CA. The 18% survival rate among patients with SAH who experienced CA is similar to previously published reports of high mortality in this group.7,10,22 On the higher end, Shapiro reported a 23% survival rate in patients with CA-SAH, with only 4% of all patients remaining in a vegetative state after treatment.22 Suzuki et al. found that of 66 patients with CA-SAH, only 9.1% survived to discharge, and the majority were in severe condition or a vegetative state.11 On the lower end, Skrifvars and Parr reported that only 0% to 2% of patients with SAH who presented with out-of-hospital CA survived to discharge.8 It is important to note that previous studies of CA-SAH analyzed much smaller sample sizes than our study. The high observed mortality in our CA-SAH population is consistent with the overall mortality rate among hospitalized patients with CA in the US.28 Although most of the patients with CA-SAH who survived to hospital discharge went to a facility, a substantial proportion did relatively well; approximately 18% were discharged from the hospital. The neurological condition of these patients was presumably good, yet it was not recorded in the NIS database. This underscores the fact that CA at the onset of severe SAH does not always imply a no-hope situation. Viewed another way, of the 5415 patients with CA-SAH whom we analyzed, 3% had a surprisingly good outcome and went home from the hospital, whereas another 15% were given a fighting chance at recovery after discharge. The hospital disposition of the non-CA-SAH patient population was consistent with prior large descriptive cohort studies. We found that larger hospital volume and teaching status were predictors of better outcome in patients with CA-SAH. This is consistent with previous literature that has shown a benefit for critically ill patients being treated in such hospital systems. Interestingly, development of ventriculitis was shown to be the single largest predictor of good outcomes in patients with CA-SAH. It has been shown that the highest mortality rates in patients with CA-SAH occurred at the onset of their SAH bleeding. Increased LOS is, for this reason, correlated with increased rates of survival, and may explain the association of survival with the development of this chronic process.10,11 We found a variety of statistically significant patient and hospital characteristics that were associated with CA at SAH onset. In univariate analysis, patients who experienced CA-SAH were significantly younger and less often hypertensive. Females have been reported to have a greater risk of CA with SAH in smaller series;5,6,17 however, in this larger analysis we did not confirm this finding. In previous studies it has been noted that a lack of comorbidities was associated with CA-SAH,6 which was not confirmed by our finding of marginally higher CCI scores in the CA-SAH population. The NIS is an administrative database that does not record the most commonly used disease severity scores for SAH: the Hunt and Hess and the World Federation of Neurosurgical Societies grading systems. As a result, the NIS-SSS, which captures various ICD-9 and ICD-10 codes for common SAH comorbidities, was developed as a disease severity measure. NIS-SSSs were significantly higher in the CA cohort, which was largely driven by more frequent coding for mechanical ventilation and coma. The fact that codes for hydrocephalus, ventriculostomy, and aphasia were applied less frequently in the CA cohort exposes a potential weakness in the NIS-SSS, since these codes might be applied less frequently in comatose patients with global cerebral edema who are quickly converted to comfort measures. We found that patients with SAH and private insurance or Medicare were less likely to experience CA than those with Medicaid or self-pay status. This parallels studies that have reported that patients with private insurance tend to have better outcomes after CA.29 Why uninsured or underinsured patients would be at risk of experiencing CA at the onset of SAH remains unclear. Jacobs et al. have hypothesized that patients with private insurance may be moved through the system to medical attention more quickly.29 Similarly, our findings regarding patients with SAH presenting to small- or medium-sized hospitals might reflect difficulties with emergency medical service response times in more distant and rural locales.
Illicit drug use was higher in patients with CA-SAH in our study, a finding that may not be surprising, as drug overdose is the leading cause of nontraumatic CA in younger adults.30 Interestingly, a prior smoking history was found to be protective against CA at SAH onset. The reason for this observed protective effect is unclear. Patients were analyzed for treatments and complications during their hospital course, many of which are predictable in this patient population. Electrolyte disorders, most commonly hypo- and hypernatremia, were substantially more common in the CA-SAH population. Our results suggest that CA may contribute to impaired hypothalamic function and osmotic regulation after SAH. Cerebral edema and ICH were also significantly more common in the CA-SAH cohort. ICH results from focal bleeding into brain parenchyma at the onset of SAH and may contribute to tissue shifts that trigger CA. Global cerebral edema, thought to be the result of transient intracranial circulatory arrest followed by reperfusion, is a common feature of poor-grade SAH that predicts poor outcome. Its presence and severity are very likely increased by coexisting CA. An increase in cardiac-related pathology was also observed in the CA-SAH cohort. Takotsubo cardiomyopathy was coded more often in patients with CA-SAH, as was cardiogenic shock. A possible explanation for this is that the catecholamine surge and subsequent cardiac injury that occur in the setting of intracranial crisis can trigger CA, a theory that has been widely accepted.1,3 The findings of less-frequent DVT, hydrocephalus, meningitis, and ventriculitis in patients with CA-SAH were likely due to a shorter LOS, fewer interventions, and decreased survival. Certain limitations are inherent when using the NIS database. Only information on in-hospital mortality was provided; therefore, findings on out-of-hospital mortality or emergency department mortality were not included in this study, which may result in an underestimate of mortality. Similarly, whether patients experienced CA during the prehospital or acute phases, as opposed to later during their intensive care unit course, was not recorded in the database and thus was not classified in our study. We presume that the majority of patients who were coded as having CA experienced it during the acute stage of the hemorrhage, although this is not known for certain. We have no data regarding the use of therapeutic temperature modulation as neuroprotective therapy. The ICD-9-CM codes that we relied on have been previously validated; however, coding errors are always a concern with any study based on NIS data.23 Errors due to sampling bias should also be considered. Moreover, we lacked many specific clinical markers that may indicate the severity of a patient's condition on hospital admission, such as Glasgow Coma Scale and modified Fisher Scale scores as well as aneurysm size and location, which may have informed the relationship between the variables observed. Conclusions CA in SAH is a devastating clinical event with a substantial risk of mortality. Using data from a nationally representative database, this study demonstrated that patients who experienced CA due to SAH had a significantly higher mortality rate than patients with SAH alone. Further study is needed to develop improved treatment strategies for patients with SAH who experience CA.
TABLE 1. Patient demographics, hospital characteristics, and outcomes. Pts = patients. Values represent the percentage of patients or mean ± SD unless indicated otherwise. Boldface type indicates statistical significance.
TABLE 2. Multivariable predictors of CA in patients with SAH. LL = lower limit; UL = upper limit. Boldface type indicates statistical significance.
TABLE 3. NIS-SSS and component model variables. Values represent the percentage of patients or mean ± SD unless indicated otherwise. Boldface type indicates statistical significance.
TABLE 5. Predictors of mortality in the entire study population. LL = lower limit; UL = upper limit. Boldface type indicates statistical significance.
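For readers who want to see how odds ratios with lower and upper limits of the kind reported in Tables 2 and 5 are typically obtained, the sketch below fits a multivariable logistic regression in Python. The study itself used SAS 9.4; the synthetic data and column names here are hypothetical placeholders, not NIS variable names.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for an NIS-derived analysis table (hypothetical columns).
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "age": rng.normal(60, 15, n),
    "nis_sss": rng.gamma(2.0, 0.6, n),
    "drug_use": rng.binomial(1, 0.1, n),
    "cardiac_arrest": rng.binomial(1, 0.03, n),
})
logit = (-6 + 0.04 * df["age"] + 1.2 * df["nis_sss"]
         + 0.4 * df["drug_use"] + 2.0 * df["cardiac_arrest"])
p_death = 1 / (1 + np.exp(-logit))
df["died"] = rng.binomial(1, p_death.to_numpy())

# Fit the multivariable model and express coefficients as odds ratios with 95% CIs,
# i.e., the OR / LL / UL layout used in Tables 2 and 5.
X = sm.add_constant(df[["age", "nis_sss", "drug_use", "cardiac_arrest"]])
fit = sm.Logit(df["died"], X).fit(disp=False)
ci = fit.conf_int()
summary = pd.DataFrame({"OR": np.exp(fit.params),
                        "LL": np.exp(ci[0]),
                        "UL": np.exp(ci[1]),
                        "p": fit.pvalues})
print(summary.round(3))
```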
2022-03-03T06:23:52.342Z
2022-03-01T00:00:00.000
{ "year": 2022, "sha1": "e5a570caf03afeac2f38f9e38274ebfc54e1d931", "oa_license": null, "oa_url": "https://thejns.org/downloadpdf/journals/neurosurg-focus/52/3/article-pE6.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "8bade22a6385956d0c3a09ab0b7def7c0082025e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
252700381
pes2o/s2orc
v3-fos-license
Functional Ink Formulation for Printing and Coating of Graphene and Other 2D Materials: Challenges and Solutions The properties of 2D materials are unparalleled when compared to their 3D counterparts; many of these properties are a consequence of their size reduction to only a couple of atomic layers. Metallic, semiconducting, and insulating types can be found and form a platform for a new generation of devices. Among the possible methods to utilize 2D materials, functional printing has emerged as a strong contender because inks can be directly formulated from dispersions obtained by liquid‐phase exfoliation. Printed graphene‐based devices are shifting from laboratory applications toward real‐world and mass‐producible systems going hand in hand with a good understanding of suitable exfoliation methods for the targeted type of ink. Such a clear picture does not yet exist for hexagonal boron nitride (h‐BN), the transition metal dichalcogenides (TMDs), and black phosphorous (BP). Rather, reports of applications of these 2D materials in printed devices are scattered throughout the literature, not yet adding to a comprehensive and full understanding of the relevant parameters. This perspective starts with a summary of the most important features of inks from exfoliated graphene. For h‐BN, the TMDs, and BP, the characteristic properties when exfoliated from solution and strategies to formulate inks are summarized. Introduction In recent years, 2D materials such as graphene, black phosphorus (BP), hexagonal boron nitride (h-BN), and transition metal dichalcogenides (TMDs) with their diverse and exceptional properties have attracted enormous attention in various fields of science and technology, from electronics to sensing, catalysis, and energy storage/conversion. There are several methods for integrating 2D materials in different applications, among which solution processing as a simple, scalable, and cost-efficient material processing method is arguably the most widely used and studied technique. This is because most 2D materials can be produced either via liquid-phase exfoliation (LPE) of their parent layered crystals (usually naturally abundant) or solution-based synthesis routes. [1] As a result, highly scalable and industrial manufacturing techniques such as printing and wet coating can be used for their incorporation in commercial applications and device fabrication. [2] Despite the significant advancements in the past two decades, the processing of 2D materials into functional inks and their efficient printing and coating still face numerous challenges. The low yield of the exfoliation processes and the difficulty of producing stable high-concentration dispersions of 2D materials are two of the main obstacles to the formulation of functional inks. Low concentration inks necessitate multiple overlayer printings, [3] compromising the print resolution and prolonging the manufacturing time. Exfoliation processes usually yield products with a broad particle size distribution. [4] The lateral size of the flakes, the number of layers in each flake, and their aspect ratio strongly determine the electrical properties of the films [5] (number of the intersheet junctions and bandgap in semiconductor 2D materials), the flow behavior of the inks, and their printability using different printing methods. [6] Therefore, controlling the 2D nanosheet morphological parameters is very important for obtaining the best performance and reproducible results. 
The low compaction level (high porosity) of the printed/coated 2D nanosheet-based films is another issue [7] that should be considered when fabricating devices using printing and coating methods. Fulfilling the rheological requirements of the different printing methods (e.g., high viscosity inks for screen- and extrusion-printing) is another challenge that limits the application of low-concentration inks to a few methods such as inkjet printing (IJP) and aerosol jet printing (AJP), which rely on low-viscosity inks. [8] The surface tension of the dispersing solvents (which determines the wettability and adhesion of the printed films) and their chemical compatibility (with substrate and other components of the device) are two different issues that make the functional ink formulation even more difficult. [6] Several attempts have been made to address these challenges, most of which are inspired by the conventional techniques used in nonfunctional printing and coating. [9] Except for a few instances, these traditional solutions are usually not suitable for functional printing, necessitating the development of innovative approaches. In this contribution, some of the most important works on printing and wet coating of 2D materials are reviewed, and the most effective solutions for the aforementioned challenges are discussed. Since all common components of an electronic circuit (both active and passive) are composed of either conductor, semiconductor, or insulator materials, and considering the wide range of electronic properties that 2D materials can offer, it is possible to realize fully printed electronics by printing and coating of 2D materials. In this respect, the reviewed 2D materials are classified and ordered based on their electronic properties. Basics of the solution processing, the exfoliation/synthesis of 2D materials and the introduction of different printing and coating methods have been extensively reviewed elsewhere [8,10] and will not be covered here in detail. We hope this perspective will serve as a guideline for those who intend to integrate 2D materials in their applications using printing and wet coating methods.
Exfoliation and Preprocessing Graphene, the single layer of sp 2 -bonded carbon atoms, is the most extensively studied member of the 2D materials family. Pristine graphene is an excellent electrical and thermal conductor, and its properties can be easily tuned via chemical functionalization or doping. [11] Since its parent crystal is cheap and naturally abundant, LPE is the most preferred technique for its commercial production. [2] In general, graphite's most widely used exfoliation techniques can be classified into two groups: chemical and mechanical methods. In the chemical methods, the main goal is to weaken the van der Waals attractions between the graphene layers in the graphite's crystal by increasing the interlayer distances either by adding functional groups to the graphene layers or intercalating chemical species to their interlayer galleries. [12] These methods offer very high exfoliation yields and are usually used for the commercial production of "graphene derivatives". The term "derivative" is used here since the graphene products obtained via these routes are usually chemically functionalized to a different degree, depending on the exfoliation method. While the functionalization can improve the graphene's solution processability and performance in some applications (e.g., sensing, energy storage), it severely affects its electronic properties and may not be desirable in some other applications (e.g., when it is used as a current conductor). Considering the vast diversity of the functional groups of graphene and the huge number of techniques that have been developed for the synthesis and processing of the functionalized derivatives of graphene, it is not possible to include such materials (e.g., graphene oxide and reduced graphene oxide) in this review as well. In the mechanical exfoliation methods, pristine graphene is produced by applying shear forces to graphite powder in organic solvents (or solvent mixture) with matching surface energy such as N-methyl-2-pyrrolidone (NMP), which minimizes the energetic cost of the exfoliation process. Unfortunately, the exfoliation yield of the mechanical methods and consequently the concentration of the resultant dispersions are often considerably lower than their chemical counterparts. Although by increasing the exfoliation time, and the power of the applied mechanical forces, the concentration of the dispersions can be increased, the obtained 2D nanosheets are usually more defective, and have smaller flake sizes, [13] which yields films with inferior electrical properties. While its origin is not clearly known, [14] it has been shown by zeta potential measurements that the pristine graphene (unfunctionalized) possesses surface charges (both positive and negative, depending on the solvent), which can be the main reason for the stability of the obtained graphene dispersions. [15] Based on a correlation between the donor number of the dispersion solvent and the sign of the zeta potential for the dispersed graphene nanosheets, it has been suggested that the surface charge is a result of electron transfer between the graphene and the dispersing solvent. [15] Dissociation of oxygen-containing functional groups, which are usually present in the starting raw graphite (as impurities or in defect sites/edges), may also contribute to the charging of nonaqueous graphene dispersions. 
[14] The limited dispersibility of pristine 2D materials in pure solvents is mainly due to the electrostatic stabilization of their dispersion, where increasing the concentration above a certain threshold would lead to the destabilization of the suspension. [16] Suspensions with higher concentrations can be obtained by stabilizing the graphene nanosheets using surfactants and polymers. However, the maximum concentration can hardly reach 1-2 mg mL−1, which may be acceptable for some deposition techniques (e.g., spray coating or IJP) but is still very low for most other printing methods. Furthermore, as will be discussed later, addition of surfactants and polymers can affect the electronic properties of the printed materials and should be avoided wherever possible. Therefore, to formulate inks for efficient printing and coating, it is often necessary to increase the concentration of the exfoliated 2D nanosheets using additional processes. It is worth noting that since the exfoliation products have a very wide particle size distribution, it is usually beneficial to perform flake size screening after the exfoliation step and before making any attempt to process the dispersion into an ink. This is important from several aspects; first, some printing methods can only handle particles in a specific size range (e.g., inkjet printers); second, the rheological properties of the inks will be more consistent from one exfoliation batch to another; and third, the performance of the printed devices will be more reproducible and reliable since the electronic properties of the 2D materials heavily depend on the size and the thickness of their flakes (will be discussed later). Size screening can be done by various techniques, most of which are done using centrifugation of the dispersions at different speeds and conditions. [10] The simplest yet most practical approach is based on a cascade centrifugation where the suspension is subsequently centrifuged at different speeds (in an increasing order; Figure 1a). [17] Separating the exfoliated graphene from the dispersion medium using ultrahigh-speed centrifugation and redispersing it in a smaller quantity of solvent is one of the simplest methods for producing high-concentration dispersions (increasing the concentration up to 6 mg mL−1). [3] Despite its simplicity, as this method is based on ultrahigh-speed centrifugation, the production time is very long, and scaling up the process is challenging. Solvent exchange is another widely used technique in which graphene is first exfoliated in a low boiling point solvent and then transferred to a smaller amount of another solvent with a higher boiling point (by evaporation). [20] Similarly, since evaporation is a time- and energy-consuming process, this method is also suitable mainly for lab-scale investigations. Considering the high stability of the graphene dispersions, separating the graphene from the dispersion medium is not easily possible. Phase-separation-based methods can be used to facilitate this process. The earliest works on this approach are based on dispersing the graphene in an organic solvent (e.g., ethanol) using ethyl cellulose (EC) and then flocculating the obtained dispersions. Flocculation can be initiated using different methods; for instance, by first adding terpineol to the graphene/EC/ethanol dispersion and then adding water to form a hydrophilic ethanol solution and pushing the hydrophobic EC-covered graphene flakes to terpineol, which is not miscible in water (Figure 1b).
Another technique for flocculating the graphene/EC/ethanol dispersion is the addition of a small amount of aqueous sodium chloride solution. [21] In both methods, the graphene/EC composite is collected and dried after some purification steps. The obtained dried powder can be readily redispersed in a wide range of solvents at the desired concentration, addressing both the concentration and the surface-energy challenges of graphene ink formulation. It should be noted that, in addition to the flocculation step, EC plays a vital role in other stages of this process; it first stabilizes the graphene and significantly improves the exfoliation yield, and later protects it from restacking during the flocculation and drying steps. These methods are fast, efficient, and useful in many cases but cannot be used for applications where thermal treatment is not possible: EC degrades the electronic properties of graphene and should be removed after printing/coating by high-temperature (250-350°C) thermal treatment. [21] Another recently developed phase-separation-based technique, which is as fast and efficient as the previously discussed methods, does not require solid additives and consequently no high-temperature thermal treatment. In this technique, a water-immiscible solvent such as xylene is first added to the graphene dispersion (in an organic solvent such as NMP or dimethylformamide (DMF)). The phase separation is then triggered by the addition of water to the mixture. As a result, graphene is forced to leave the dispersion medium to cover the surface of the xylene microdroplets and minimize the large interfacial energy created by the emulsion formation. A Pickering emulsion thus forms in the middle of the three-phase system (water-rich phase at the bottom, the Pickering emulsion in the middle, and the xylene-rich phase on top; Figure 1c), which can be easily collected (e.g., using a separation funnel) and used for ink production after some purification steps. [6] Pristine graphene can easily restack in water, [22] and the formation of the metastable Pickering emulsion protects the graphene from restacking during the separation stage. This method can also be used for graphene dispersions with a water-based medium (e.g., a water/ethanol mixture). In this case, by gradually adding a water-immiscible solvent (e.g., xylene), the Pickering emulsion forms after the solubility threshold is passed (initially, some amount of the immiscible solvent dissolves in the dispersion medium because of the organic solvent present).

Figure 1. a) Schematic demonstration of the cascade centrifugation method. Reproduced with permission. [17] Copyright 2016, American Chemical Society. b) Separation of graphene/ethyl cellulose from a stable dispersion by addition of water. Reproduced with permission. [18] Copyright 2010, American Chemical Society. c) Interface-assisted extraction of graphene from NMP by the formation of a Pickering emulsion. Reproduced under the terms of the CC-BY Creative Commons Attribution 4.0 International license (https://creativecommons.org/licenses/by/4.0). [6] Copyright 2022, The Authors, published by Wiley-VCH. d) Volumetric changes (compaction level) of the graphene film after photonic annealing and compression rolling. Reproduced with permission. [19] Copyright 2016, Wiley-VCH.
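As a rough, order-of-magnitude guide to choosing the speeds for the cascade centrifugation step described above, a Stokes-type sedimentation estimate is often used. The Python sketch below illustrates the idea only: it treats the nanosheets as equivalent spheres and assumes illustrative values for the rotor geometry, particle density, and solvent viscosity (loosely, graphite in NMP), so the printed cut-off sizes should not be read as calibrated numbers for any real protocol.

```python
import math

def stokes_cutoff_diameter(rpm, t_min, r_top_cm, r_bottom_cm,
                           rho_particle=2200.0, rho_solvent=1030.0,
                           eta=1.7e-3):
    """Largest equivalent-spherical diameter (nm) expected to remain in the
    supernatant after spinning for t_min minutes.

    Idealized Stokes sedimentation in a centrifugal field:
        ln(r_bottom / r_top) = d^2 * (rho_p - rho_s) * w^2 * t / (18 * eta)
    Densities in kg m^-3, viscosity in Pa s; all values here are assumed,
    illustrative figures, not measured data.
    """
    w = 2.0 * math.pi * rpm / 60.0                      # angular velocity, rad s^-1
    t = t_min * 60.0                                    # spin time, s
    r_top, r_bottom = r_top_cm / 100.0, r_bottom_cm / 100.0
    d2 = 18.0 * eta * math.log(r_bottom / r_top) / (
        (rho_particle - rho_solvent) * w ** 2 * t)
    return math.sqrt(d2) * 1e9                          # diameter in nm

# Cascade example: the supernatant of each step feeds the next, faster step.
for rpm in (1000, 2000, 5000, 10000):
    print(f"{rpm:>6} rpm -> cut-off ≈ {stokes_cutoff_diameter(rpm, 60, 7.0, 11.0):.0f} nm")
```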
Ink Formulation and Printing

The ink formulation strategies, the ink compositions, and the specific properties that inks must possess to be printable/coatable on various substrates using different printing/coating methods vary considerably. Therefore, the first step before processing a functional material into an ink is the determination of the most suitable deposition technique, considering the restrictions and requirements of each specific device or application. Some of the most important considerations in this respect are the properties the functional material is expected to exhibit, the scale or cost at which the devices or films should be produced, the printing resolution, and the geometry of the substrate or device (2D or 3D). For instance, fabrication of a graphene-based transparent conductive electrode requires a totally different deposition technique and ink than an interdigitated microsupercapacitor electrode. Since a compact, uniform, and high-quality thin film (made with large and defect-free graphene flakes) is required for the transparent electrode, methods such as IJP, aerosol-jet printing, spin coating, blade coating, and slot-die coating are potential candidates. However, when considering the expected properties, IJP does not seem to be a good choice, as it can only handle small flakes (particle diameter <10% of the nozzle diameter). Aerosol-jet printing also has some limitations on the size of the processable particles (depending on the atomization mechanism), and the alignment and compaction level of the nanosheets in the deposited films may not be as good as in the three other options (spin, blade, and slot-die coating). Ultimately, three clear choices can be made by considering the production scale: small scale, spin coating; medium to large scale, blade coating; and large scale, slot-die coating. On the other hand, for the fabrication of a microsupercapacitor, the deposition of a thick film (with the least number of printing/coating passes) is usually of greater importance, meaning that methods such as screen and extrusion printing are better choices. Figure 2, which is adapted from ref. [8], summarizes the major factors determining the selection of the deposition technique for different applications and expected properties. Further information about the working principles of each deposition technique can be found in other dedicated literature. [8] Once the most suitable deposition technique is identified, the graphene obtained from the exfoliation step, or its "concentrate", can be processed into a printable/coatable ink. The main consideration here is the rheological requirements of that specific deposition method (Figure 2). For instance, inkjet-printable inks should possess a viscosity and a surface tension in a certain range to be jettable and to form spherical, satellite-free droplets, whereas screen-printable inks should exhibit shear-thinning behavior and a high enough viscosity to stay on the mesh openings after flooding and before being transferred to the substrate. While viscosity is an important characteristic of an ink, optimized printing results can only be obtained when all the rheological properties of the ink, such as its storage and loss moduli and its thixotropy, are fine-tuned for that specific printing or coating method. Detailed discussions of the rheological requirements of every printing or coating method are far beyond the scope of this perspective.
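For drop-on-demand inkjet printing specifically, jettability is commonly screened with the inverse Ohnesorge number, Z = (γρd)^(1/2)/η, with 1 < Z < 14 usually quoted as the stable-jetting window (the same criterion reappears in the BP and WS₂ ink examples later in this section). The sketch below is a minimal illustration of that check; the fluid properties are assumed, representative values for an alcohol-based 2D-material ink, not data for any specific formulation.

```python
import math

def inverse_ohnesorge(surface_tension, density, viscosity, nozzle_diameter):
    """Z = 1/Oh = sqrt(gamma * rho * d) / eta, all inputs in SI units.
    Roughly 1 < Z < 14 is the range usually quoted for stable drop-on-demand jetting."""
    return math.sqrt(surface_tension * density * nozzle_diameter) / viscosity

# Illustrative (assumed) figures for an IPA-based ink and a 22 um nozzle.
gamma = 23e-3      # surface tension, N m^-1
rho   = 790.0      # density, kg m^-3
eta   = 2.0e-3     # viscosity, Pa s
d     = 22e-6      # nozzle diameter, m

z = inverse_ohnesorge(gamma, rho, eta, d)
print(f"Z ≈ {z:.1f} ->", "likely jettable" if 1.0 < z < 14.0 else "outside typical window")
```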
Wettability (of the ink on the substrate), film formation behavior (e.g., drying speed, crack formation, and dried film profile), adhesion of the film, cohesion of the particles, thermal stability of the utilized materials (e.g., the substrate), and chemical compatibility of the carrier solvents with other components of the device (or substrate) are other important factors determining the ink formulation and the selection of its composition. In conventional, nonfunctional ink formulations, large quantities of various types of additives, such as binders, rheology modifiers, defoamers, and surfactants, are readily used to adjust the properties of the inks and fulfill these requirements. In functional printing, however, the use of additives is usually minimized, since they drastically degrade the electronic properties of the materials. It is worth mentioning here that, due to their unique 2D morphology and variable aspect ratio, the formulation of 2D-material-based inks requires special considerations. This is mainly related to the interactions of the nanosheets in their dispersions, especially when they are single- or few-layered and have a high aspect ratio. [2] These interactions, which are considerable even at low concentrations (and become even more substantial at higher concentrations), can both pose certain restrictions for adjusting the ink composition (especially its solid content) and open up new possibilities for tuning the rheological properties without using additives.

Figure 2. Relation between the ink viscosity and the best achievable throughput, resolution (only applies to printing methods), film conductivity, film thickness, and film transparency for major printing/coating methods. Reproduced with permission. [8] Copyright 2021, Elsevier Ltd.

Binders (e.g., macromolecules and polymers) are one of the major components in most of the previously reported graphene inks, especially those that require high viscosities. In these cases, binders act as stabilizing agents for increasing the graphene concentration in the inks and for adjusting their rheological properties. [23,24] It should be mentioned that the use of binders is unavoidable in some applications where ultrahigh wear resistance is required or where adhesion to a specific substrate cannot be achieved simply by adjusting the surface tension of the inks. [8] Cellulosic binders such as EC and sodium carboxymethyl cellulose are among the most widely used binders for graphene ink formulation and require thermal post-treatments between 250 and 350°C. Considerable effort has been devoted to developing binders with a low decomposition temperature or minimal impact on the electronic properties of graphene. [25] For instance, nitrocellulose-based graphene inks (films) can offer high conductivities (≈10 000 S m⁻¹) even when treated at 200°C. Since the decomposition of binders is usually an exothermic reaction (especially for nitrocellulose, which also has very fast kinetics), localized and rapid heating methods such as photonic annealing can be used to trigger a self-propagating combustion reaction that enables the low-temperature removal of the additives throughout the entire thickness of the printed films. [26]
It is worth mentioning that, despite their detrimental effect on the electronic properties of the functional materials, some cellulosic binders can actually improve the conductivity of graphene films after the thermal treatment process, because their decomposition residues, which are usually aromatic species capable of π-π stacking with the graphene flakes, can establish relatively efficient charge-transport pathways. [21,25] Recently, it has been shown that well-exfoliated graphene nanosheets can form efficient interlocks and densely packed films (compared to other types of nanomaterials) with mechanical properties acceptable for many applications, meaning that in most cases the addition of a binder to 2D-material inks is not necessary. It has also been found that a graphene (or any other pristine 2D material) dispersion with a high enough concentration can easily be processed into a gel. At such high concentrations, the dispersion consists of aggregated particles that form a continuous 3D network in which the solvent is dispersed. Since the structure and stability of the dispersion are mainly based on the van der Waals (vdW) interactions of the particles, it is possible to use a wider range of solvents, even ones incapable of dispersing graphene, as the suspension medium. As a result, the surface tension of the inks, their wettability, and the adhesion of the resultant films to most substrates can be easily controlled by changing the solvent type. These structural features also enable fine-tuning of the rheological properties simply by adjusting the graphene concentration, for all printing methods that require high-viscosity inks. Formulation of inks based on capillary suspensions is another creative approach for fulfilling the rheological requirements of different printing methods without using solid additives. To form a capillary suspension, a small amount of a secondary fluid, immiscible with the continuous phase of the suspension, is added at specific liquid/liquid and liquid/particle ratios. [27] The resultant capillary forces lead to the formation of liquid bridges and the creation of particle networks. As a result, the bulk rheological behavior of the suspension changes significantly, from predominantly viscous or weakly elastic to highly elastic or gel-like. This phenomenon can be observed in various particle/liquid systems, and despite being used frequently for functional ink formulation, [28,29] only one report is available on extrusion printing of a graphene capillary-suspension ink. [30] Extrusion-printable inks are among the most challenging types of inks and usually require considerable amounts of additives (>25 wt%). It has been shown that the addition of only 2 vol% octanol, as the immiscible solvent, to a 16.67 wt% aqueous graphene suspension is sufficient to form a gel that exhibits shear-thinning behavior with a high enough yield strength for extrusion printing. Sinter-free nanoparticle-based inks (including those of 2D materials), even when formulated without additives, form films with a relatively low compaction level, [31] which results in poor electrical properties. To improve the interparticle charge transfer and the overall conductivity of the network, mechanical compression or rolling techniques can be used.
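The electrical figures quoted below for compression-rolled graphene laminates are linked by the standard sheet-resistance relation R_s = 1/(σt). The following minimal sketch is only a unit-bookkeeping aid for converting between the conductivity, thickness, and sheet-resistance values reported in such studies; the numbers are taken from the example discussed next.

```python
def sheet_resistance(conductivity_S_per_m: float, thickness_m: float) -> float:
    """Sheet resistance (ohm per square) of a uniform film: R_s = 1 / (sigma * t)."""
    return 1.0 / (conductivity_S_per_m * thickness_m)

# Consistency check for the compression-rolled laminate discussed below:
# sigma ≈ 4.3e4 S m^-1 at t = 6 um gives R_s ≈ 3.9 ohm per square.
print(f"{sheet_resistance(4.3e4, 6e-6):.1f} ohm per square")
```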
It has been shown that the conductivity of a graphene laminate can be increased by more than 50 times after one pass of compression rolling (conductivity of 4.3 × 10⁴ S m⁻¹ and sheet resistance of 3.8 Ω sq⁻¹ at a thickness of 6 μm). [7] In additive-containing inks, mechanical compression is even more beneficial, since decomposition of the additives generates gases that can further increase the porosity of the films, especially in post-processing methods with very high heating rates such as photonic annealing (Figure 1d). [19,26] The drying and film formation behavior of the inks, especially in industrial-scale printing/coating, is a very important consideration in ink formulation. The profile of the printed tracks and structures can have a significant impact on the performance of the printed devices. A rectangular profile is considered the ideal case, but it is very difficult to realize in practice, and a semielliptical profile is often acceptable for most applications. However, due to a very common phenomenon in drying particle-based inks, namely the coffee-ring effect, even realizing a semielliptical profile can be challenging. As shown in Figure 3a, during the drying of a well-wetting ink (with pinned edges), the higher evaporation rate of the solvent at the edges (due to the higher surface-area-to-volume ratio there) generates an outward flow from the center of the printed structure toward its edges to replenish the evaporated solvent. Such a flow can carry the particles to the edge of the droplet, where they accumulate and form a ring-like structure. This problem can be addressed by using multicomponent carrier solvents (cosolvent systems) for ink formulation. It has recently been shown that a binary mixture of IPA and t-butanol (90-10 vol%) can be used for uniform deposition of different 2D crystals and their derivatives. [32] The disparity in the evaporation rates of the different ink components creates surface tension and compositional gradients from the center to the edges, which give rise to inward Marangoni flows and a more uniform redistribution of the particles (Figure 3a). The coffee-ring effect is mostly observed in low-viscosity inks, since the movement of particles in high-viscosity solvents takes much longer (and eventually the film dries). The coffee-ring effect can even be fully avoided in gel-type inks, since the solvent is dispersed within the network of particles and hence cannot carry them to the edges (Figure 3b). One of the necessities in the electronic device industry is an appropriate process for fabricating transistors, the backbone of electric circuits, and a broad range of research addresses the design and fabrication of fully printed transistors instead of hybrid versions. For the fabrication of electrolyte-gated transistors (EGTs), there are electrical and chemical compatibility, sintering, and flexible-substrate issues. Compared with metal inks, such as silver-based inks, which might react with strong oxidizing chemicals, graphene is therefore a very suitable printable material, with an acceptable range of conductivity, good chemical stability, and the possibility of processing at lower temperatures. For fully inkjet-printed electrolyte-gated transistors, graphene flakes stabilized by EC in cyclohexanone and terpineol are used as the gate ink. The composite solid polymer electrolyte is prepared by adding poly(vinyl alcohol) to dimethyl sulfoxide solvent.
Then the mixture of these two solutions is used as the ink. [33] A very recent attempt to fabricate a touch-screen sensor based on a coating of liquid-exfoliated graphene ink showed that the main and most crucial properties of transparent conductive electrodes are electrical performance, visual performance, and mechanical flexibility. These requirements were met by spray coating graphene onto a flexible substrate, yielding an optical transmittance of 78%, a sheet resistance of 290 Ω sq⁻¹, and no significant change in sheet resistance when bent down to a radius of 28 mm. Furthermore, the low level of attenuation and the high signal-to-noise ratio (14 dB) allow for a multitouch operating mode. In the same study, different solvents and methods were tested for graphene exfoliation, with significant differences in the results, most obviously in environmental impact (choice of solvent), yield, and size distribution of the nanosheets. [34]

Figure 3. a) i) On the left, the coffee-ring effect is schematically depicted and, on the right, digital images of dried droplets of several inks with different carrier solvents are shown; ii) Using a proper cosolvent system, the coffee-ring effect is avoided and inks of different 2D materials are printed on various substrates (scale bar: 500 μm). a) Reproduced under the terms of the CC-BY Creative Commons Attribution 4.0 International license (https://creativecommons.org/licenses/by/4.0). [32] Copyright 2020, The Authors, published by American Association for the Advancement of Science. b) In high-viscosity gel-type inks printed with different printing methods on various substrates, the coffee-ring effect is not observed, and the profiles of the printed structures are usually semielliptical. Scale bars are 3, 3, 1.5, and 2 mm from left to right, respectively. Reproduced under the terms of the CC-BY Creative Commons Attribution 4.0 International license (https://creativecommons.org/licenses/by/4.0). [6] Copyright 2020, The Authors, published by Wiley-VCH.

Another sensing application involves label-free, highly sensitive graphene inks used to obtain a quantitative response for detecting biological species such as SARS-CoV-2. For this, graphite flakes were exfoliated with a high-shear mixer and stabilized with EC. Several washing, redispersing, and filtration steps were applied to disperse the graphene in terpineol and formulate an aerosol-jet-printable ink. The fabricated sensors were heat treated at 350°C for 30 min in a furnace to eliminate residual solvent. Aerosol-jet-printed (AJP) graphene-based dipstick electrodes were functionalized by incubation with SARS-CoV-2 spike protein, and the electrode surface was then blocked with buffer until further testing. [35] In Table 1, some of the most promising graphene ink formulations for different printing and coating methods are summarized.

Black Phosphorus (BP)

BP has attracted a great deal of attention since 2014, after being exfoliated in the liquid phase and used in an efficient field-effect transistor (FET). [53-55] Phosphorus (P) atoms are bound to three neighboring atoms through sp³-hybridized orbitals, but in a puckered honeycomb structure, since the atoms do not lie in the same plane. BP monolayers are held together by weak vdW forces, with an interlayer distance of 0.53 nm in the bulk form. [53,55,56] Depending on the number of layers, BP also has a tunable direct bandgap (ranging between 0.3 and 2.0 eV), which places it between graphene (zero bandgap) and the TMDs (1.0-2.0 eV). [57]
This unique property leads to a much higher carrier mobility (≈1000 cm² V⁻¹ s⁻¹) than in TMDs. [53] Overall, besides the abovementioned properties, characteristics such as high surface area, good biocompatibility, and low cytotoxicity make BP a very attractive material in a wide variety of fields such as biosensors, [58] energy storage devices, [59] optoelectronics, FETs, [60] and catalysis. [61] BP can be synthesized in various forms, including bulk crystals, nanosheets, and quantum dots. Delamination of high-quality BP nanosheets has been reported using almost all of the common techniques for the exfoliation of 2D materials, such as micromechanical, [62,63] electrochemical, [64] and LPE methods. [65,66] Table 2 summarizes various LPE methods for producing BP monolayers. Most studies have recommended using organic solvents to impede the BP degradation induced by O₂ and H₂O. However, due to environmental and health-related considerations, several attempts have been made to obtain stable dispersions of BP in aqueous solutions, and some of them have been very successful. [67,68] Despite its exciting properties, BP is not chemically stable in the ambient atmosphere. [69,70] Exposure of BP to O₂ and H₂O leads to the formation of PxOy and phosphoric acid, respectively. Therefore, many studies have aimed to improve the chemical stability of BP by controlling its physicochemical properties, mainly using doping techniques. [71] Comprehensive discussions of doping BP with different methods and dopants have been published elsewhere. [72-75] BP can be successfully exfoliated in numerous solvents, including N-cyclohexyl-2-pyrrolidone (CHP), NMP, acetonitrile (ACN), isopropyl alcohol (IPA), and 2-methoxyethanol (2-ME). However, it has been found that the exfoliation yield is considerably higher in high-boiling-point solvents. [89] Simulation results have shown that NMP molecules prefer to intercalate between BP layers, facilitating the delamination of bulk BP. [90] In addition to forming stable suspensions, CHP can provide a solvation shell around BP and improve its oxidation resistance; CHP is therefore an appealing solvent for ink formulation. Although the viscosity of a CHP-based ink (10 cP) can be tuned to meet the requirements of IJP by adjusting its concentration, [91] neither CHP nor NMP is a good choice for IJP, as their high boiling points can cause numerous problems, including a severe coffee-ring effect. Therefore, considering the lower exfoliation yield of the low-boiling-point solvents, exfoliation in NMP (or CHP) followed by transfer of the exfoliated nanosheets to a low-boiling-point solvent appears to be a practical solution. Following this approach, Jun et al. [89] first separated the exfoliated nanosheets from NMP by filtration and, after drying, formulated an inkjet-printable ink by redispersing the dried powder in 2-ME (using sonication). To avoid the coffee-ring effect, a common problem in single-solvent inks, they optimized the concentration of the ink: printing with inks at a concentration of 1 mg mL⁻¹ led to a uniform distribution of BP nanosheets across the printed patterns without noticeable coffee-ring formation. Formulating inks using binary cosolvent systems has proven to be an efficient strategy for addressing the coffee-ring effect.
When a secondary solvent is added to an ink, the concentration of the higher-boiling-point solvent becomes, upon drying, higher at the edges and lower in the inner sections of the film. This can result in a temperature gradient in the droplet, due to the latent heat of vaporization, and hence a surface tension gradient, inducing a recirculating Marangoni flow that helps to redistribute the particles more evenly across the film. In this regard, Hasan and his group formulated a BP inkjet-printable ink by first transferring the exfoliated sheets (from NMP) to IPA and then adding 2-butanol (10 vol%) to suppress the coffee-ring effect. This solvent system has several other advantages over NMP-based inks, especially for the formulation of inkjet-printable inks. The NMP-based inks, due to their relatively high surface tension (e.g., compared to IPA) and low viscosity, usually have a high inverse Ohnesorge number (Z > 14), which is not suitable for IJP. Using the IPA/2-butanol system, however, inks with Z ≈ 10 (D = 22 μm) have been formulated, which is well within the optimal Z range for stable jetting. [92] Although BP suffers from instability under ambient conditions, it can still be used for device fabrication. Various modifications of the solution-processing steps make it possible to exfoliate BP nanosheets and, depending on the selected printing method, the inks can be reliably transferred and incorporated in optoelectronic and/or photonic devices. Suppressing the coffee-ring problem and using binder-free inks without any substrate pretreatment can increase the consistency and uniformity of the process and of the final products. As mentioned, BP is an ideal material for visible and near-infrared optoelectronics, including photodetectors. Using an inkjet-printed BP layer, such a photodetector not only shows a more than tenfold enhancement in detection performance at 450 nm but also extends the detection range up to 1550 nm. Besides these promising results, encapsulation with parylene-C makes the printed BP more stable against long-term (>30 days) oxidation. [92]

Transition Metal Dichalcogenides (TMDs)

One of the most widely studied groups of 2D materials with naturally occurring layered crystals is the TMD family. These materials are characterized by the general formula MX₂, where M⁴⁺ is a metal cation (such as Mo, W, V, Nb, Ti, or Zr) and X²⁻ is a chalcogenide anion. A TMD monolayer consists of a positively charged layer of metal cations (M) sandwiched between two negatively charged layers of chalcogenide (X) anions. [93,94] Although the M-X intralayer bonds are very strong (ionic covalent), the MX₂ nanosheets are stacked and held together by weak vdW attractions. As a result, their parent layered crystals can be exfoliated into single or few layers using methods similar to those used for the exfoliation of graphene. [95-99] In contrast to the single layer of graphene, TMD monolayers show novel physical and chemical properties that strongly differ from those of their bulk forms.
The transformation from the bulk to the monolayer structure causes a transition from an indirect to a direct bandgap due to thickness-induced quantum confinement (Figure 4a). [100-105] Moreover, TMD nanosheets show polymorphism related to the different symmetries of the metal-ion coordination. The most common phases are 1T, 2H, and 3R, based on trigonal, hexagonal, and rhombohedral symmetries, respectively. [106] Decreasing the thickness and the number of layers provides not only a high surface area and abundant surface-active sites but also the possibility of bandgap modulation over a wide range, enabling novel applications in numerous fields such as catalysis, [107] optoelectronics, [108] sensing, [109] energy storage, [110] and biomedicine. [111] As mentioned earlier, thanks to their structural similarities, most of the techniques for the exfoliation/synthesis of graphene can also be used for TMDs, for example, mechanical exfoliation, [115] LPE, [116] chemical vapor deposition, [117] magnetron sputtering, [118] and hydrothermal synthesis. [119] However, due to their significantly higher exfoliation energy (surface energy), [98] delamination of the TMDs is more difficult than that of graphene, which is a major challenge for the production of high-concentration suspensions and for ink formulation. Since the exfoliation method (e.g., the dispersion medium) can drastically affect the chemical/structural properties of the TMDs (discussed later), [120] and hence their performance, the exfoliation/synthesis of 2D TMDs requires particular attention. Therefore, before discussing the printing/coating of this group of 2D materials, their LPE, which is usually the starting point for ink formulation, will be briefly reviewed. LPE of TMDs can be broadly categorized into direct exfoliation (Figure 4b,c) and lithium-intercalation-assisted exfoliation (Figure 4d,e). [105,112-114,121,122] For direct exfoliation, the bulk material is dispersed in a suitable solvent, usually a nitrogen-containing solvent such as NMP, which enables long-term stable and relatively high-concentration suspensions. [123] Direct exfoliation in solvents does not involve the formation of an intermediate intercalated compound. The solvent molecules may enter the interlayer galleries of the layered solids, but their interactions are not strong enough to form a stable intercalated phase. [124,125] The shear force necessary for LPE can be applied by different methods. A comparison between magnetic stirring, shear mixing, and probe sonication showed that probe sonication produces the best dispersions. [126] So far, most studies have focused mainly on developing dispersion media for an optimal exfoliation process, while the sonication process itself has received much less attention. Generally, simple ultrasonic cleaning baths are used for treating appreciable amounts of material; since these devices do not adequately control the transmitted power and temperature, nanosheet disintegration or aggregation cannot be avoided. [127] However, new studies have emerged in recent years that emphasize the effects of ultrasound physics and sonication parameters on the efficacy of the exfoliation of layered dichalcogenides and on the sonochemical transformations of the solvents used. Sonication parameters may critically affect the thickness, size, and other properties (e.g., chemical structure) of the products formed from bulk TMDs or other layered materials. [128]
High local temperatures and pressures accompanying the cavitation phenomenon have been reported as the driving force for nanosheet exfoliation. When cavitation bubbles implode close to the surface of the particles, high-speed liquid jets or shock waves may develop and exert a physicochemical effect on the surface. [129,130] Indeed, the two main consequences of MX₂ sonication are: 1) exfoliation, i.e., the separation of the vdW-bonded layers; and 2) fragmentation, i.e., chopping of the bigger particles into smaller ones by breaking covalent bonds. [131,132] Parameters such as the sonotrode shape, the immersion depth of the sonotrode in the liquid, the ultrasound intensity, the pressure and temperature of the reaction, the material concentration, and the solvent density have substantial effects on the process and the final results. [127]

Figure 4. a) Reproduced with permission. [105] Copyright 2016, Elsevier Ltd. b) Schematic of the direct exfoliation procedure for WS₂ with DMF. Reproduced with permission. [112] Copyright 2019, Elsevier Ltd. c) Exfoliation and dispersion of some important TMD nanosheets in water. Reproduced with permission. [113] Copyright 2021, Wiley-VCH. d) Sonication-assisted exfoliation using organic lithium-based solutions such as methyllithium (Me-Li), n-butyllithium (n-Bu-Li), and tert-butyllithium (t-Bu-Li). Reproduced with permission. [114] Copyright 2015, Elsevier Ltd. e) Schematic representation of lithium-halide-assisted synthesis of few-layered WS₂; the process starts with sonication in hexane, followed by exchanging the hexane for DMF. Reproduced with permission. [65] Copyright 2016, Royal Society of Chemistry.

As mentioned before, the choice of solvent plays a crucial role in the LPE process. The quality and quantity of the exfoliated flakes can be controlled through the physicochemical properties of the solvents used, such as solubility, surface energy, and boiling point. The roles of the solvent(s) during exfoliation are as follows: they transmit the acoustic power of the sonotrode, decrease the mixing energy between the nanosheets and the liquid, and stabilize the delaminated sheets by providing a steric barrier against reaggregation. [125] Nonetheless, due to the high surface energy of the TMDs, the chance of reaggregation is higher in pure solvents, which necessitates the use of stabilizing agents. [98] Sodium cholate is one such stabilizing agent and has shown great promise [98] for the production of highly concentrated dispersions of various 2D materials such as MoS₂, WS₂, MoSe₂, MoTe₂, NbSe₂, and NiTe₂. Green and eco-friendly solvents are particularly desirable for printing and large-area coating methods, but exfoliation of TMDs in such solvents (e.g., water or ethanol) is difficult and yields low-concentration suspensions. Using cosolvent systems (e.g., water/ethanol mixtures) is a promising alternative to stabilizing-agent-based strategies and offers the possibility of formulating additive-free inks. [133] It is worth mentioning that, when choosing the dispersion system for LPE, it should be considered that chemical reactions can occur upon exfoliation of the TMDs in certain solvents. For instance, it has been shown that exposure of MoS₂ to methanol (e.g., in a water/methanol system) can lead to the formation of sulfur vacancies. [120]
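Whatever dispersion system is chosen, the nanosheet concentration it ultimately delivers is usually estimated optically through the Beer-Lambert law, A = αcl, using an extinction coefficient calibrated for the specific material, wavelength, and flake-size range. The sketch below shows the arithmetic only; the extinction coefficient and absorbance values are placeholders and must be taken from, or calibrated against, the literature for the material at hand.

```python
def concentration_from_extinction(absorbance, path_length_m, alpha_L_per_g_m):
    """Beer-Lambert estimate of dispersion concentration (g/L):
    A = alpha * c * l  ->  c = A / (alpha * l)."""
    return absorbance / (alpha_L_per_g_m * path_length_m)

# Placeholder numbers only: alpha must come from a calibration for the
# specific 2D material, wavelength, and flake-size distribution.
alpha = 3400.0            # L g^-1 m^-1 (assumed placeholder)
A, l = 0.85, 0.01         # measured absorbance, 1 cm cuvette
c = concentration_from_extinction(A, l, alpha)
print(f"c ≈ {c * 1000:.1f} mg/L")
```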
The Li-intercalation exfoliation is the other main type of exfoliation method for the production of TMDs, in which the parent crystals are immersed in a solution containing lithium compounds such as hexyllithium or n-butyllithium under inert and dry conditions for a few days. During this process, Li intercalates into the interlayer spaces of the TMDs and reacts to form compounds such as LixMX₂. This compound is then sonicated in a water bath, where the Li reacts with water, leading to the evolution of hydrogen gas. These reactions delaminate the TMDs into individual layers. [106] It should be noted that during the intercalation-assisted exfoliation of TMDs, their bulk 2H structure is usually distorted, and a phase transition from semiconducting 2H to metallic 1T may occur. Since the ion-intercalation methods are time consuming and involve harmful chemicals such as lithium compounds, electrochemical exfoliation methods have recently been adopted to overcome these limitations. By applying and controlling an external potential, radicals/ions are generated in the electrolyte and accumulate between the layers of the TMDs. Ion accumulation and the generation of gases then cause the expansion/detachment of the TMD layers. [134] One big challenge in the electrochemical exfoliation of TMDs is the low conductivity of the bulk material, as most of the applied potential is spent overcoming its huge resistance. To address this problem, conductive additives are usually added to the powders of the TMD parent crystals, and conductive monoliths are made and used for their electrochemical exfoliation. [135]

Tungsten Disulfide (WS₂)

Tungsten disulfide (WS₂) is one of the most extensively studied semiconducting TMDs and has been used in numerous applications such as FETs, phototransistors, photovoltaics, and gas sensors. [136-138] The LPE of WS₂ has so far been reported in many different dispersion media (Table 3), but because of cost and environmental considerations, aqueous suspensions and inks are highly preferable to their organic counterparts. Nevertheless, water-based ink formulation is very challenging, mainly due to water's high surface tension and low viscosity. Therefore, in the majority of the previous reports on printing and coating of WS₂, the ink formulation has been based on either solvent exchange, the use of additives, or a combination of both. As noted, using cosolvent systems is one of the main strategies for exfoliating 2D materials. For instance, the water/IPA mixture is a good choice for the exfoliation of WS₂ due to its suitable surface tension and other physicochemical properties. It has also been shown that soaking the bulk WS₂ powder in water/IPA (7:3) prior to the exfoliation process can activate the powder and improve the exfoliation yield. However, a water/IPA-based ink would suffer from poor printability and various drying-related issues such as the coffee-ring effect. These issues can be mostly addressed by transferring the exfoliated nanosheets to a propylene glycol/water (8:92) mixture. The solvent exchange step also provides an opportunity to properly adjust the concentration of the ink for more efficient printing/coating. To further improve the printability of the ink for the electrohydrodynamic printing of a fully printed photodetector (Figure 5a), a small amount of xanthan gum as a binder and Triton X-100 as a surface tension modifier have also been added to the ink. [156]
Pyrene sulfonic acid (PS) derivatives are very efficient stabilizing agents for the dispersion of numerous 2D materials. Triton X-100, a nonionic surfactant, is a good choice for lowering the surface tension of inks that are stabilized by PS derivatives, as it does not disrupt their electrostatic stabilization. [41] To minimize the adverse effects of the additives on the electronic properties of the functional materials, different concentrations of stabilizing agents are usually used for the exfoliation and for the ink formulation. For example, in a recent report on the production of an aqueous WS₂ inkjet-printable ink, [41] WS₂ (1.5 g) was first exfoliated in 500 mL of water containing 0.5 g of 1-pyrenesulfonic acid sodium salt (PS1), and the excess PS1 was then separated by ultracentrifugation. The sediments were then redispersed in water containing ≥0.06 mg mL⁻¹ Triton X-100 and ≥0.1 mg mL⁻¹ xanthan gum. A small amount of propylene glycol was also added (10:1, water:propylene glycol) to increase the viscosity and bring the inverse Ohnesorge number (Z) closer to the acceptable range (1 < Z < 14). [157] Similarly, in another work on spray coating WS₂ for the fabrication of an electrolyte-gated field-effect transistor (Figure 5b), WS₂ was first exfoliated in sodium cholate solution but then sedimented and redispersed in DI water. To remove the residues even more thoroughly, the coated films were also soaked in DI water for 12 h. In spite of all these efforts, the WS₂, which is a p-type semiconductor, exhibited n-type behavior, which was attributed to residues of sodium cholate. [147] The organic WS₂-based inks are very similar in composition to those of graphene. A mixture of cyclohexanone and terpineol (7:3) has frequently been used as a carrier solvent for WS₂ ink formulation for many printing and coating methods. The addition of EC (2.5 wt%) can help both to stabilize the nanosheets and to adjust the rheological properties for an inkjet-printable ink (viscosity ≈12 cP). The effectiveness of this approach and composition for stable ink formulation has also been demonstrated for various WS₂-based composite inks (with graphene and other nanomaterials). [126,143] As with most other 2D materials, NMP can effectively exfoliate WS₂, and although the obtained suspension can be used directly for printing, adding a small amount of monoethylene glycol (MEG) increases the viscosity from 1.69 to 2.5 cP and significantly improves its printability for IJP. [152] Another example of WS₂ nanosheet printing is the IJP of a WS₂ layer on a screen-printed graphene layer for fabricating a battery-free wireless photosensor (Figure 5c). The WS₂ nanosheets were obtained by sonication of an aqueous dispersion, whereas the exfoliated graphene flakes were produced in NMP, first by high-shear mixing at 8000 rpm for 2 h, followed by an ultrasonication step for 24 h. The flakes were then redispersed in ethylene glycol for ink preparation. This combination of materials and methods enabled the fabrication of a functional RF electronic device. [157] Among recent advances, gas-sensing applications have been receiving increasing attention. A MOSFET-type sensor with an inkjet-printed WS₂ layer shows a characteristic response (an increase or decrease in the drain current) when exposed to specific gases. The sensor shows high selectivity toward NO₂ among four target gases (NO₂, H₂S, NH₃, and CO₂). [148]
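Several of the printed devices discussed above (electrolyte-gated and MOSFET-type WS₂ transistors and sensors) are characterized through their transfer curves, from which a field-effect mobility is extracted in the linear regime as μ = (L/(W·C·V_D))·dI_D/dV_G. The sketch below illustrates that extraction on synthetic data; the channel geometry, gate capacitance, and current values are invented for illustration and do not correspond to any device reported here.

```python
import numpy as np

def linear_regime_mobility(v_g, i_d, v_d, c_areal, width, length):
    """Field-effect mobility (cm^2 V^-1 s^-1) from a linear-regime transfer curve:
        mu = (L / (W * C * V_D)) * dI_D/dV_G
    c_areal in F m^-2, geometry in m, currents in A, voltages in V.
    The steepest part of the curve is used as the transconductance estimate."""
    gm = np.max(np.gradient(i_d, v_g))                 # max transconductance, S
    mu_si = length * gm / (width * c_areal * v_d)      # m^2 V^-1 s^-1
    return mu_si * 1e4                                 # convert to cm^2 V^-1 s^-1

# Synthetic example (all numbers invented for illustration only):
v_g = np.linspace(0.0, 2.0, 41)                        # gate-voltage sweep, V
i_d = 1e-6 * np.clip(v_g - 0.8, 0.0, None)             # idealized linear turn-on, A
mu = linear_regime_mobility(v_g, i_d, v_d=0.1,
                            c_areal=3e-2,              # ≈3 uF cm^-2 (electrolyte gating)
                            width=1e-3, length=200e-6)
print(f"mu ≈ {mu:.2f} cm^2 V^-1 s^-1")
```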
Figure 5. a) Reproduced with permission. [156] Copyright 2020, American Chemical Society. b) i) Schematic of the layered WS₂ network transistor structure. ii) Optical image of the deposited WS₂ nanosheet network on interdigitated electrodes. b) Reproduced with permission. [147] Copyright 2019, The Authors, published by Wiley-VCH. c) i) Schematic of the different printed layers of a photodetector: the graphene top (GrT) and bottom (GrB) electrodes and the WS₂ active layer. ii) Image of the inkjet-printed WS₂ photodetector integrated with a screen-printed graphene line. iii) Image of the device printed onto PEL P60 paper. c) Reproduced under the terms of the CC-BY Creative Commons Attribution 3.0 Unported license (https://creativecommons.org/licenses/by/3.0). [157] Copyright 2019, IOP Publishing Ltd.

Molybdenum Disulfide (MoS₂)

Bulk MoS₂ is an indirect-bandgap semiconductor (1.2 eV) that changes to a direct-bandgap semiconductor (1.9 eV) upon exfoliation to single layers. The thickness and the number of layers significantly affect the size of the bandgap, since the quantum confinement effect is observed in this size range. [158,159] Therefore, screening the particle size after the exfoliation process and before the ink formulation is of great importance. In an investigation with surprising results, it was found that the choice of the starting bulk material for LPE of MoS₂, selected from six sources ranging from high-quality large crystals to fine powder, has little or no effect on the quality and quantity (yield) of the final product. [101] Ion intercalation is one of the major techniques for the exfoliation of MoS₂, especially for its large-scale production, and can be carried out in water and other solvents such as ethanol, methanol, and isopropyl alcohol. [160] However, as mentioned earlier, ion intercalation can cause a phase transformation from 2H-MoS₂ (the thermodynamically stable, indirect-bandgap semiconducting phase) to 1T-MoS₂ (the octahedral metallic phase). This process is also associated with the generation of energetic defect sites in the nanosheets and, consequently, a hydrophobic-to-hydrophilic transition in the surface properties of the MoS₂ (Figure 6a). It is worth mentioning that the 2H-to-1T transition provides a unique opportunity for the functionalization of the 1T-MoS₂ nanosheets with fluoroelastomers, which can be used for the fabrication of stretchable devices such as solid-state supercapacitors. [161,162] Since Li-ion intercalation is a slow and time-consuming process, special strategies may be implemented to accelerate it significantly using different energy sources such as ultrasonication, [165] microwave irradiation, [166] or solvothermal conditions. [167] In the case of ultrasonication, the implosion of cavitation bubbles causes an increase in the local temperature and pressure, which degrades butyllithium hexamers (or other lithium-containing compounds) into monomers, enhances the electron transfer to MoS₂, and facilitates the insertion of Li in two ways: one is forcing the Li to intercalate into the interlayer spaces, and the second is the fragmentation and chopping of the sheets, which increases the probability of insertion. [127,168]
Obtaining flakes with large lateral dimensions of up to 100 μm is possible by using different intercalating agents, such as the double salt potassium sodium tartrate (KNaC₄H₄O₆·4H₂O); however, the produced nanosheets are partially oxidized and have a distorted 1T structure. [169] In addition to the transformation from an indirect to a direct bandgap, several other properties of MoS₂ also change upon its delamination to single or few layers. For instance, the magnetic characteristics of nanoscale MoS₂ sheets are altered, exhibiting strong ferromagnetism due to the presence of edge spins at the edges of the nanosheets. [170,171] The selection and control of the lateral dimensions of the flakes, as well as the choice of deposition technique, depend heavily on the ultimate application. Small nanosheets perform better in catalytic applications, whereas larger nanosheets are targeted for electronics and composites. This is because the catalytically active sites are usually located at the edges of the nanosheets, and smaller nanosheets have a higher density of these sites and edge atoms. [127]

Figure 6. a) Functionalization of TMDs affects the wettability of the nanosheets: pristine and treated MoS₂ nanosheets show clearly different contact angles and dispersion behavior in pure water without any surfactants. Reproduced with permission. [162] Copyright 2019, Elsevier Ltd. b) Schematic of inkjet-printed FET fabrication with a graphene bottom-gate layer, an h-BN layer on top of the graphene layer, source and drain, and a MoS₂ layer on top. Reproduced with permission. [163] Copyright 2020, The Authors, published under license by AVS. c) Screen-printing process and the steps necessary for high-throughput fabrication of 2D-MoS₂ SPEs. Reproduced with permission. [164] Copyright 2017, American Chemical Society.

As mentioned earlier, the low concentration of 2D-material suspensions is a big obstacle to the formulation of commercial inks and to efficient printing. One of the main strategies for addressing this problem is dispersing the 2D material in the dispersion medium (Table 4) that gives the highest yield and then transferring the exfoliated particles to a smaller quantity of another solvent (or solvent system) that offers better printability. In this regard, an inkjet-printable ink has been formulated by first exfoliating the nanosheets in DMF (with the aid of EC) and transferring them to terpineol (20 times less solvent) by evaporation of the DMF. [172] The viscosity and the surface tension of the ink were further adjusted for IJP by the addition of a small amount of ethanol. The transfer can also be done from a high-boiling-point exfoliation solvent to a low-boiling-point carrier solvent (for ink formulation). In this case, the nanosheets are separated from the exfoliation dispersion by ultrahigh-speed centrifugation and then redispersed in the carrier solvent using a short/mild sonication step. Following this approach, an additive-free aerosol-jet-printable ink (MoS₂ concentration: 0.5 mg mL⁻¹) has been produced by transferring the exfoliated MoS₂ from NMP to a mixture of IPA and 2-butanol (9:1 by volume). [179] As with graphene and other TMDs, the cyclohexanone/terpineol (7:3) system has proven to be a successful and frequently used composition for MoS₂ ink formulation.
The viscosity can be adjusted by the addition of EC (2.0 wt%), for example for IJP. In that report, to ensure the continuity of the printed film and proper charge transfer through the particle network, overprinting of up to 20 passes was also used. [163] IJP provides an alternative route to cost-effective, scaled-up production of FETs and to high-functionality devices with a wide variety of applications. With this method, all the different electrical parts of the FET (conductor, semiconductor, and dielectric) are constructed from 2D layered materials. Controlling the flow of electrons or holes from source to drain is an important factor in any kind of transistor, and it depends on the size and shape of the channel region and on the conductivity of the bottom gate. To accommodate these factors, graphene is selected as the gate, and h-BN and MoS₂ as the dielectric and the semiconductor, respectively. The printing sequence is shown in Figure 6b. [163] Fabricating oxygen reduction reaction (ORR) electrodes is one of the most promising applications of MoS₂ nanoflakes, obviating the need for high-cost platinum. The process is based on the binding of electronegative oxygen atoms to the electropositive molybdenum atoms at the edge sites, which gives rise to electrocatalytic ORR activity. A commercial screen-printable graphitic ink was modified by adding 2D-MoS₂ nanoflakes at loadings of up to 40 wt%. The addition of MoS₂ changes the viscosity of the formulated ink, which complicates the screen-printing process; as shown in Figure 6c, optimized conditions can alleviate this problem. The merit of screen printing is that reproducible films can be fabricated at a mass-producible scale. [164]

Tungsten Diselenide (WSe₂)

Similar to most other semiconducting 2D materials, the exfoliation of bulk WSe₂ is associated with a transition from an indirect (1.2 eV) to a direct (1.7 eV) bandgap semiconductor. [180] The high intrinsic charge-carrier mobility and the size and position of the band edges make WSe₂ a great choice for electronics (e.g., the active layer of transistors) and energy storage/conversion applications (e.g., the hole-transport layer in solar cells). LPE of WSe₂ in both aqueous (using stabilizing agents) and organic solvents has proven to be very successful (Table 5). NMP is one of the main solvents, but low-boiling-point solvent mixtures such as IPA/water (30:70) have also shown great promise, especially considering their lower cost and environmental hazard. The size of the solvent molecules and the surface energy of the dispersion medium (optimum: 28 mN m⁻¹) are two important considerations when choosing the dispersion system. [180] Several techniques have been developed for improving the exfoliation of WSe₂; one particularly effective method, which yields highly uniform and phase-pure semiconducting 2H-WSe₂ nanosheets, consists of pretreating the bulk WSe₂ crystals with supercritical carbon dioxide (SC-CO₂). In this method, the as-received WSe₂ powder is stirred in a high-pressure chamber (10 MPa) for a short time (30 min) in CO₂ at 55°C. After depressurizing the system, the SC-CO₂-treated WSe₂ is exfoliated (by sonication) in water with the aid of sodium deoxycholate.
While stabilizing agents such as sodium cholate or sodium deoxycholate can significantly improve the water dispersibility of WSe₂ [98] and hence the ink formulation process, for most applications these impurities must be removed thoroughly, making the whole printing process inefficient. [180] As already stated, the products of LPE methods usually have a very broad size distribution. Considering that the bandgap of 2D semiconductors can change significantly with slight changes in the number of layers per flake (e.g., by 0.5 eV in going from 1 to 6 layers), it is necessary to screen and separate the flakes for specific applications. For instance, such a large spread in the bandgaps of the different flakes in the active layer of a transistor can act as charge-carrier traps and drastically affect the device performance. Flakes with more than six layers are preferred to single layers in such applications (e.g., optoelectronics), since their bandgap is essentially that of the bulk material. For instance, Kelly et al. first exfoliated WSe₂ in NMP and then separated the particles comprising fewer than six layers with the cascade centrifugation method. By redispersing the size-screened flakes in NMP (1.5 mg mL⁻¹), an inkjet-printable ink was obtained. [4] Another challenge is the formulation of high-viscosity inks for high-throughput printing. Increasing the viscosity to such high levels normally requires large amounts of additives, which is very detrimental to semiconducting 2D materials. The same strategy used for the formulation of graphene-based vdW inks can be applied here as well: WSe₂ nanosheets exfoliated in NMP are concentrated using an interface-assisted extraction method and used for the formulation of gel-type inks with rheological properties suitable for screen and extrusion printing. Considering the morphological variations between the products of different exfoliation methods, it has been suggested that the yield strength of the gel (which better represents its rheological properties) is a more reliable criterion than the ink concentration for ink formulation and for obtaining reproducible printing results. [6] One application of WSe₂-based inks is fully printed resistive random-access memory (RRAM) on a flexible substrate using an AJP process. [188] The printed RRAM shows forming-free, unipolar behavior and a lower switching voltage and operating power than similar flexible RRAMs. The printed devices retain their functionality even after bending, making them suitable for monolithically integrated embedded memory in electronic devices (Figure 7a). Another application is based on WSe₂ nanosheet networks for electrochemical thin-film transistors (TFTs). [4] In one of the related studies, WSe₂ nanosheets were exfoliated by liquid-phase methods in NMP. The size selection, along with the solution processing and solvent exchange from NMP to isopropanol, provides a further possibility of tuning the bandgap and related properties (Figure 7b). For the fabrication, the nanosheet dispersions were sprayed onto flexible alumina-coated PET substrates to form porous nanosheet networks (PNNs). Such networks appear uniform over large length scales but show considerable local disorder. In this case, solid electrolytes (polymers or gels) decrease the ionic mobilities, and the use of an ionic-liquid/polymer-based gel improves the switching time and speed.
In addition, BN nanosheets are sprayed on top of the active layer, not as a dielectric but as an electrochemical separator between the active layer and the conductive top gate.

Molybdenum Diselenide (MoSe₂)

MoSe₂, another member of the TMD family, is important because of its strong optical absorption and high surface activity, which give it a high efficiency in photoelectrocatalysis. MoSe₂ is an n-type semiconductor with an indirect bandgap of 1.09 eV in the bulk and a direct bandgap of 1.57 eV. Possible applications are in the field of batteries and energy storage, because of the high surface area and the short ionic diffusion length when the number of layers is reduced. [189-191] A crucial point for 2D materials is the attachment of different chemical or biological groups from the dispersion media. There are few studies on the functionalization of TMDs, which can aid their exfoliation via unsaturated atoms at the edges or defects on the basal plane, and vice versa. The functionalization of MoSe₂ can be categorized into three types: covalent functionalization, noncovalent functionalization, and metal deposition on the MoSe₂ sheets. [189] In covalent functionalization, the presence of group VI element vacancies (for instance, Se vacancies in the MoSe₂ structure) and edges plays a crucial role. It relies on the interaction of attaching ligands or other functional groups with unsaturated metal atoms at the edges or at defects in the basal plane. Covalent bond formation occurs between a free radical or electrophile and a Se-based nucleophile, or between organic functional groups and Se vacancies. Organic functional molecules covalently attached to the chalcogen atom drastically change the electronic and optical properties of MoSe₂. [192] Noncovalent functionalization is a low-cost process for attaching molecules that alter the surface characteristics without affecting the electronic structure. This interaction can be mediated by van der Waals forces, physisorption, or electrostatic attraction, and it generally depends on the density of defect sites. The nanosheets can be noncovalently functionalized with typical cationic surfactants and polymers through electrostatic interactions, which gives them excellent solubility in both organic solvents and aqueous solutions. At the same time, simultaneous exfoliation and functionalization of MoSe₂ nanosheets can be achieved, as shown in Figure 8a. [192,193] Different exfoliation strategies are listed in Table 6. Zwitterionic compounds have recently gained significant attention due to their high water solubility; they contain both cationic and anionic groups with a net charge of zero. Efficient liquid-phase exfoliation of TMD nanosheets can be achieved because the alkyl-chain moieties associated with the two ionic groups adhere to the surface of the nanosheets. The exfoliation yield depends on the number of alkyl groups of the zwitterion in the dispersion medium. The efficient dispersion of a variety of few-layered TMD nanosheets, with lateral dimensions of several hundred nanometers, in water-soluble zwitterions, without the need for additives such as surfactants or modifiers, makes this approach well suited to ink formulation for conventional IJP (Figure 8b). Furthermore, the water-soluble zwitterion can also act as an additive that adjusts the surface tension of the ink and enhances its wettability on a substrate.

Figure 7. b) i) The horizontal line approximately separates thinner nanosheets; a typical AFM image is shown. ii) Photographs of the printing steps.
From left to right: Graphene source (s) and drain d) electrode (t % 400 nm); the WSe 2 channel (t % 1 mm, L ¼ 200 mm, w ¼ 16 mm); the BN separator (t % 8 mm); and finally, the graphene gate (g, t % 400 nm). b) Reproduced with permission. [4] Copyright 2017, American Association for the Advancement of Science. www.advancedsciencenews.com www.small-science-journal.com photodetector application, biocompatible Zwitterionic-assisted TMD inks were reported as a good choice for skin-patchable electronics based on cell (HeLa cells) viability measurements. [201] As a showcase, MoSe 2 is chosen as an active layer for real-time sensing applications. MoSe 2 nanoflakes are exfoliated through a wet grinding process. The MoSe 2 ultrafine powder is first ground in a pestle and mortar using DMF for 8 h (Figure 8c). The obtained gel mixture is dried by heating at 110°C for 1 h. Lastly, the MoSe 2 solution is centrifuged. The resultant supernatant solution is then separated via the decantation process. Finally, 7 mg mL À1 MoSe 2 nanosheets in DMF ink are deposited by a spin coater for 30 s at 17 000 rpm and cured at 100°C. The 2D layer of MoSe 2 thus achieved has a very high surface-area-to-volume ratio and high surface roughness providing additional Mo and Se edges for hydrogen and oxygen bonding. [202] In another example, screen-printable 2D-MoSe 2 electrocatalytic ink at an optimal ratio (10 wt% besides carbon-based ink) was implemented to produce electrodes/surfaces that exhibit low hydrogen evolution reaction (HER) due to electronegatively charged Se atoms. Dangling bonds at the edge sites have an affinity for binding electropositive H þ atoms. Therefore, these sites are responsible for the 2D-electrochemical activity toward the HER. Moreover, the screen-printing fabrication approach provides a lower cost and scalable mass production and results in stability improvements over other traditional techniques such as drop-casting. A series of screen-printed electrodes is displayed in Figure 8d. [203] [196] Method: Probe sonication Initial concentration: 10 mg mL À1 2D h-BN, a graphene structural isomorph, was soon synthesized after the discovery of graphene and since then has been used in numerous applications in printed electronics and energy storage. The in-plane bonds between the boron and the nitrogen atoms are covalent (σ), which offers excellent mechanical strength. The absence of π electrons minimizes surface interactions, giving h-BN unique physical and chemical properties. On the other hand, the electronegativity difference between two atoms causes partially charged sites (B positive, N negative). Selective functionalization can occur for the 2D h-BN via the lone pair electrons and empty orbitals of the N and B atoms, respectively. [205] Various properties can be obtained using different surface functionalizations; for instance, hydroxylation enhances hydrophilicity, fluorination affects photoluminescent emission, edge amination improves surface acid and base reactions, and point defects intensify catalytic activity. [206] Unlike other 2D materials, which show metallic or semiconducting characteristics, the bulk form of h-BN is considered a typical insulator with a bandgap of %6 eV due to the free of charge traps and the dangling bonds on its surface. [207] Nevertheless, its bandgap can be modified using edge or surface functionalization by efficient modulation or doping with exotic species. 
A typical example is 2D h-BN doped with fluorine groups, which acts as a semiconductor with a bandgap energy of 3.1 eV (5.8 eV in the undoped state). In addition to doping and functionalization (commonly by carbon or oxygen), bandgap modulation of 2D h-BN can be achieved via defect engineering. Altering the local energy of the surface by adjusting its chemical composition would increase ionic conductivity, catalytic activity, and proton exchange rate, which provides an excellent platform for applications in the energy sector. Extremely high thermal conductivity, chemical and thermal stability, optical transparency in the visible spectrum, and an electrically insulating nature are some other properties that underpin h-BN's widespread use. Since liquid media are preferred for most top-down approaches, LPE as a subcategory of these has attracted a lot of attention. Different LPE methods, such as probe sonication, bath ultrasonication, hydrothermal, ball milling, microwave-assisted exfoliation, and solvothermal techniques, have been developed to synthesize 2D h-BN with varying surface functional groups, thickness, and size.
Figure 8. a) Process of simultaneous exfoliation and noncovalent functionalization of MoSe2 nanosheets. Reproduced with permission. [193] Copyright 2016, Wiley-VCH. b) Schematic of the interaction between TMD nanosheets and a zwitterion (top); optical images of commercial black ink and the MoSe2 ink printed on A4 paper (bottom). Reproduced with permission. [201] Copyright 2020, Tsinghua University Press and Springer-Verlag GmbH Germany, part of Springer Nature. c) Steps of MoSe2 ink preparation, from grinding the bulk material to sonication and size selection by cascade centrifugation. Reproduced under the terms of the CC-BY Creative Commons Attribution 4.0 International license (https://creativecommons.org/licenses/by/4.0). [202] Copyright 2020, The Authors, published by Springer Nature. d) Photographs of screen-printed 2D MoSe2–carbon-based electrodes. Reproduced with permission. [203] Copyright 2017, Royal Society of Chemistry. e) Screen printing of Ag ink to form interdigitated electrodes on a PET substrate and schematic of the sensing mechanism based on spin-coated MoSe2 nanoflakes. Reproduced under the terms of the CC-BY Creative Commons Attribution 4.0 International license (https://creativecommons.org/licenses/by/4.0). [204] Copyright 2020, The Authors, published by Springer Nature.
Generally, the diffusion of the solvent's ions/molecules into adjacent h-BN layers causes its exfoliation. Accordingly, the extent of liquid interaction with the h-BN layers and, consequently, the yield depend on the exfoliating medium. It is reported that a liquid can interact with h-BN through polar covalent bonding, Coulombic interactions, and Lewis acid-base interactions. In any case, the electronic properties and doping level of the h-BN layers will be modified by functionalization. [205] Parameters affecting exfoliation or synthesis mainly follow from the significance of surface functionalization or the level of doping in energy applications. According to the literature, modification of 2D h-BN is obtained by functionalizing with long-chain molecules, amine groups [-NH2], and hydroxyl groups [-OH], or by doping with oxygen, fluorine, and carbon.
Due to the partly ionic character of the interlayer bonds, choosing the proper method and conditions among the several exfoliation procedures is essential in the production process for printed electronic inks. [205] Some of the exfoliation conditions and results are listed in Table 7. However, not all of the methods can serve simultaneously as an exfoliation route and as a solution-based technique for ink formulation. Another parameter that should be focused on is the rheological properties of the inks, which must be fitted to the viscosity ranges of the printing or coating process. As mentioned, boron nitride can be exfoliated in a wide range of dispersion media, including aqueous and organic systems (Figure 9a). [210] One of the solvents that has been reported for effective exfoliation of h-BN is DMF. This is due to the polar nature of DMF, along with the assistance of cavitation phenomena in the continuous sonication procedure. The exfoliation yield can be increased by using polymeric materials such as polycarbonate (PC), polyvinyl butyral, and poly(methyl methacrylate). Physical interactions of the polymer chains with the surface of the exfoliated nanoflakes help separate the layers. Furthermore, this interaction increases the stability of the dispersion because of the steric hindrance and repulsion between nanoflakes associated with the polymer. PC, one of the materials mentioned, has a high glass-transition temperature (≈150 °C) and high solubility at room temperature in exfoliation solvents such as DMF, which makes it well suited to formulating suitable inks.
Figure 9. a) UV-vis data for h-BN nanosheets in different co-solvent systems, recorded at 400 nm, vs. surface tension. Each case represents a different ratio between solvent and water (w/w%). Reproduced with permission. [210] Copyright 2015, Royal Society of Chemistry. b) Effect of a boron nitride nanosheet-coated separator for a Li metal anode. With a typical separator (right), Li dendrites grow on the current collector during Li deposition, resulting in low Coulombic efficiency; a thermally conductive BN coating instead promotes uniform deposition/stripping of Li, owing to the smaller total surface area of the initially deposited Li wires, and decreases the risk of dendritic Li growth and cracking, thereby improving performance. Reproduced with permission. [216] Copyright 2015, American Chemical Society. c) h-BN ink formulation: for IJP, the ink is prepared by dispersing the exfoliated h-BN/EC powder in an 85:15 mixture of cyclohexanone and terpineol; for blade coating, h-BN inks are prepared by dispersing h-BN/EC powder in a 2:1 mixture of ethanol and ethyl lactate, where a more viscous ink is required than for the IJP method. d) Morphology of h-BN nanosheets exfoliated by shear mixing: TEM images showing the EC coating on the surface of the nanosheets at lower (i) and higher (ii) magnification. Reproduced with permission. [217] Copyright 2019, Wiley-VCH. e) Schematic, optical microscopy image, and photograph of a fully printed thin-film transistor (TFT) with the h-BN ionogel dielectric on a polyimide film. Reproduced with permission. [218] Copyright 2021, Royal Society of Chemistry.
For methods demanding higher viscosity like screen printing (>10 Pa s at 10 s À1 shear rate), a low-boilingpoint solvent such as chloroform can be implemented as a viscosity tuning agent and control the drying process, which also affects the adhesion of the ink to the substrates. [215] 2D h-BN plays an important role as an electrode separator in Li-ion batteries (LIBs). [217] Separators are of utmost importance in preventing short circuits between the battery electrodes as well as controlling ion transfer during the charging and discharging processes. Short circuits are a major concern for practicality and safety concerns in LIBs. This problem can be avoided by the utilization of a separator with high thermal stability. In addition, it has been reported that the presence of h-BN layers can suppress Li-dendrite formation during the charging process. Dendrites eventually can penetrate the separator, also leading to a short circuit ( Figure 9b). [216] Another interesting investigation in the ink formulation field was conducted to synthesize and stabilize h-BN. EC-ethanol solution was used as the exfoliation medium, promoting the synthesis and stabilization of h-BN. Furthermore, EC not only minimizes the tendency of nanoflakes to agglomerate but also enables control over the ink rheology for a wide range of ink viscosities, allowing ink formulations from low-viscosity printing methods (e.g., IJP and spray coating) to high-viscosity techniques (e.g., screen printing and blade coating). For instance, for IJP, cyclohexanone and terpineol solvents were added to h-BN and tuning the viscosity to 8.0 Â 10 À3 Pa s at 1000 s À1 . For blade coating (Figure 9c), a mixture of ethanol and ethyl lactate (2:1) as a lower boiling point solvent system was considered (0.4 Pa s at 1000 s À1 ) to even provide an ink with higher viscosity by partial evaporation of the solvents (2.6 Pa s at 1000 s À1 ). In the heat treatment step, the printed patterns were annealed at 300°C for 30 min; the decomposition of EC polymer causes a high porosity film (Figure 9d). This carbonaceous film on h-BN nanoflakes improves the organic phase wettability, which is an essential property for battery separators in contact with electrolytes. [217] Another application of h-BN in the battery field is as a matrix material for solid-state electrolytes. h-BN amends the performance of electrolytes by improving the fabrication robustness and mechanical stability and also increments of the Li transference number (Li þ ) and ionic conductivity. Still, the presence of inorganic fillers suppresses ion transport affecting the cell capacity of the battery. On the contrary, the smaller size of functionalized 2D h-BN layers are capable of both boosting the electrolyte in terms of ionic conductivity and mechanical strength and dealing with such issues. [205] Another asset of the 2D h-BN in this context is the ability to formulate ionogel printable inks. Exfoliation and ink formulation of h-BN with residual chemicals does not cause adverse effects for dielectric applications. The organic materials enter the ink system via the exfoliation process, and the decomposition of these materials in the heating process leaves an amorphous carbon thin film on the surface of h-BN, which improves the interactions between ionic liquids and h-BN nanoplates. 
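Returning briefly to the rheological targets quoted in the preceding paragraph (roughly 8.0 x 10^-3 Pa s at 1000 s^-1 for inkjet printing, 0.4-2.6 Pa s for blade coating, and above 10 Pa s at 10 s^-1 for screen printing), a simple pre-screening step can flag whether a candidate ink sits near the window of a given deposition method before any test prints are made. The sketch below encodes those single quoted values as loose acceptance windows and uses a standard power-law (Ostwald-de Waele) model to estimate viscosity at the relevant shear rate; the window boundaries, the example consistency index `k`, and the flow index `n` are illustrative assumptions, not values taken from the cited studies. The ionogel formulation itself continues in the next paragraph.
```python
# Rough pre-screening of an ink against viscosity targets for different printing methods.
# Windows are loose brackets around the single values quoted in the text (assumed, not measured ranges).

# method -> (shear_rate_1_per_s, min_viscosity_Pa_s, max_viscosity_Pa_s)
TARGET_WINDOWS = {
    "inkjet":        (1000.0, 2e-3, 2e-2),
    "blade_coating": (1000.0, 0.2, 3.0),
    "screen":        (10.0, 10.0, 1e3),
}

def power_law_viscosity(k: float, n: float, shear_rate: float) -> float:
    """Ostwald-de Waele model: eta = k * gamma_dot**(n - 1).
    k (consistency index) and n (flow index) come from fitting flow-curve data."""
    return k * shear_rate ** (n - 1.0)

def suits(method: str, k: float, n: float) -> bool:
    """True if the modeled viscosity at the method's shear rate falls in its window."""
    shear, lo, hi = TARGET_WINDOWS[method]
    eta = power_law_viscosity(k, n, shear)
    return lo <= eta <= hi

if __name__ == "__main__":
    # Hypothetical shear-thinning gel ink: eta ~ 30 Pa s at 10 1/s, ~4.7 Pa s at 1000 1/s
    k, n = 75.0, 0.6
    for method in TARGET_WINDOWS:
        print(method, suits(method, k, n))   # only "screen" passes with these toy parameters
```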
For ink formulation, nanoplatelets are mixed with 1-ethyl-3-methylimidazoliumbis (trifluoromethylsulfonyl) imide (EMIM-TFSI) and ethyl lactate as an ionic liquid and solvent, respectively, with the ratio of 1:2 between solid content and ionic liquid. These ion gels contain a storage modulus higher than its loss modulus over the entire measured frequency range. The mechanical strength and ion conductivity can be varied with h-BN concentration. The ion gels of h-BN were printed by the AJP method on a polyimide film as dielectric for fully printed thin-film transistor applications (see Figure 9e). [218] Perspective For graphene inks, as a result of vast research efforts, wellperforming exfoliation methods and ink formulations could be established. The situation is less clear for the other 2D materials hexagonal boron nitride (h-BN), the transition metal dichalcogenides (TMDs), and black phosphorous (BP). The very different attempts to formulate suspensions and inks for printing are an indication that research here is in a rather early stage. It can be envisioned though that these materials will play a leading role in the design of future electronic devices. We gave a detailed overview of the current state of research and challenges concerning exfoliation, ink formulation, and applications of these materials, hoping to help overcome reservations that may exist toward experimenting with 2D materials other than graphene-related compounds.
2022-10-05T15:12:51.003Z
2022-10-02T00:00:00.000
{ "year": 2022, "sha1": "a12452589c70c075a81e4eef1ce1a4a087e61b48", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/smsc.202200040", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "47796cfcb3ca1d93c1bfc1f11f23927e97cc9483", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
265151772
pes2o/s2orc
v3-fos-license
Construction and validation of an e-book about cardiovascular risk in people living with the human immunodeficiency virus Objective: To build and validate an e-book about cardiovascular risk in people living with the human immunodeficiency virus. Methods: Methodological study based on the evaluation research theory with analysis of outcome. It involved technological production comprising the phases of analysis and planning, modeling, implementation, and evaluation. Cardiovascular risk factors and strategies to reduce them were discussed. The e-book was validated with experts from all over the country between October 2017 and August 2018 using the Content Validity Index. Results: As the e-book was written Introduction Antiretroviral therapy adherence has provided better rates of morbidity and mortality in people living with human immunodeficiency virus (PLHIV), as the infection has gone from a progressive fatal disease to a chronic disease with a significant improvement in life expectancy. (1,2)ccording to the Joint United Nations Program on HIV/AIDS (UNAIDS), there were 37.7 million PLHIV and around 28.2 million with access to treatment by 2020, which reflects the success of every policy effort towards access to antiretroviral therapy that contributes to reducing the rate of AIDS-related deaths. (1)owever, the rates of chronic noncommunicable diseases in this population -including cardiovascular diseases -have been higher than those in the general population, signaling the need to rethink healthcare strategies for this population from the perspective of preventive care. (2,3)he pathophysiology of cardiovascular diseases associated with HIV is complex, multifactorial and involves the interaction between traditional risk factors, the presence of HIV infection markers, CD4 T-cell and viral load counts, and prolonged exposure to antiretroviral therapy.These factors enable the occurrence of inflammation and immune activation responsible for triggering a persistent endothelial inflammatory process, which is a precursor to diseases such as myocardial infarction and stroke. (3,4)n fact, PLHIV still have a higher prevalence of classic risk factors for cardiovascular diseases than that of people who do not live with the infection.Thus, the management of modifiable risk factors for cardiovascular diseases has become an essential aspect for the care of PLHIV. (2,3)ducational interventions encouraging changes in lifestyle and the management of established risk factors for cardiovascular diseases are considered the first step towards primary and secondary prevention and have an association with decreased cardiovascular risk and better health outcomes. (3,5)ealth education is a care technology and a tool that values the knowledge, practices and cultural context of everyone involved in the educational process.8)(9) In the context of HIV infection, digital information and communication technologies have been used for interventions related to the prevention of virus transmission and to monitor patients, with a view to improving accessibility and quality of care. (10,11)owever, no educational material addressing the prevention of cardiovascular diseases in this population has been identified.There are gaps in the literature on the construction and validation of educational material to improve the provision of care aimed at preventing cardiovascular diseases in PLHIV. 
Therefore, the aim of this project was to develop innovative digital information and communication technologies that could meet the needs of PLHIV and assist health professionals in the context of cardiovascular disease prevention.Thus, this study aimed to build and validate an e-book about cardiovascular risk in PLHIV. Methods This study was part of a larger project entitled "Development, validation and effectiveness of educational technologies focused on the behavior, preventive practices and lifestyle of people living with HIV/aids".This methodological study was based on the theory of evaluation research, outcome analysis type, which involves technological production. (12)t comprised the phases of: analysis and planning (organization of the script and selection of content); modeling (construction of the material, preparation and editing of the layout, editing of images and videos); implementation (final construction of the e-book and availability for download); and evaluation (evaluation by experts in the field).Specific procedures must be followed in each phase to guarantee the quality of the material. (13)he building process took place from February to August 2017, and the validation by experts took place between October 2017 and August 2018. The term e-book refers to the English abbreviation of the term "electronic book".It is a digital book that can be read on electronic devices such as computers, tablets or even cell phones that support this feature. (13)e first version of the e-book was entitled "Take care of your heart: strategies to reduce cardiovascular risk in people living with HIV/aids".Part of the theoretical content on the topic was gathered for this purpose. The script was developed by the authors.Several studies and the original articles identified were used for the selection of the content, in addition to the Update of the Brazilian Dyslipidemia and Atherosclerosis Prevention Directive (14) and the Clinical Protocol and Therapeutic Guidelines for the Management of HIV infection in adults in Brazil. (15)he topics of the e-book were related to modifiable risk factors for cardiovascular diseases, cardiovascular risk among PLHIV, and strategies that can be adopted to reduce such risks. In the modeling phase, all selected material was used to prepare the content of the first version of the e-book.The text editing software Microsoft Word 2016 was used in the construction before sending it to the Virtual Learning Environment. In this phase, images, videos, photographs and links that could assist in reading the material were also selected.In addition to stimulating reading, these resources make the process of building knowledge more dynamic and attractive. Initially, videos made available on public domain sites (YouTube) were used, which were evaluated by experts as to the veracity and quality of content. However, videos with more specific guidelines were needed.The script of the videos was completed, specialists were invited, and the new videos were treated and edited by an audiovisual technical team.After the final version was approved, the videos were hosted on the Ribeirão Preto College of Nursing page on YouTube for later sharing in the e-book. With the prepared material in hand, the content was transferred to a file in the Electronic Publication format, which allows the use of several tools that guarantee the principles of usability and accessibility, providing the user with an easy-to-use, dynamic, and interactive tool. 
After testing all tools and layout approval, the e-book was made available for free download on digital platforms for iOS and Android operating systems. After the creation and modeling of the e-book, it was evaluated by experts in the area. This improved quality through evaluation throughout the development process. A reference that recommends between six and 20 specialists in the selection of experts was followed. (16) It is also recommended that the number be odd to avoid possible ties. (17,18) For a safe and reliable validation of the submitted content, judges should be experts in the subject area, which can be proven by both professional experience and academic career. (19,20) The choice of clinical specialists was made by checking the Lattes Platform, which makes available the curriculum of all researchers in Brazil. The option for "Advanced Search" was selected, and in the "Subject" tab the keywords "HIV", "Cardiovascular Diseases", and "Nursing" were used. After a systematic search, 54 researchers were found. However, when evaluating their resumes, many did not work in the required expertise area, and 36 specialists in the fields of cardiology and infectiology were invited to participate. The purpose of this evaluation was to analyze the content, possible errors, and interface or layout problems that could reduce the quality of the experience in the virtual learning environment. Twenty-one of these specialists completed the analysis. Criteria proposing a score calculation were used in the selection of these experts. A minimum score of four points was necessary for inclusion of a specialist. In the case of professionals with titles, a point was added as if the expert had all the necessary degrees. (19) The evaluation of instruments should be carried out within a maximum of 30 days. After completing the evaluation and signing the informed consent, the evaluation instruments should be sent to the researcher. A reminder with a new deadline of seven days was sent to those who did not respond within 30 days. After the deadline, in the absence of a response, they were considered dropouts. An instrument adapted from a model was used in the e-book validation by health experts. It assessed the following aspects: general impression, objective, content, relevance, verbal language and inclusion of topics. Another instrument was used for audiovisual specialists, which included the following aspects: interface, aesthetic and audiovisual quality, and space for notes on necessary but absent content, unnecessary content and other comments. In addition, each specialist completed a sociodemographic characterization questionnaire containing information about sex, age, time of academic training, field, length of professional experience, and academic degree. In the validation of the e-book by judges, the Content Validity Index (CVI) per item and the global CVI were used; these measure the degree of agreement of experts on aspects of the material. (21) The Content Validity Index per item is calculated based on the number of experts who rated the item with answers of 3 or 4 (representative or very representative), divided by the total number of specialists. (22) In the calculation of the global CVI, the number of responses of 4 and 5 (I agree and totally agree) was divided by the total number of questions. (1,23) The value of 80% was adopted as the limit for approval or disapproval of the items covered.
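The two agreement indices just described reduce to simple proportions, so a short worked example may help. In the sketch below, the item-level CVI follows the rule stated above (the share of experts scoring an item 3 or 4 on the 4-point relevance scale); for the global index, the text divides the count of 4 and 5 ("agree"/"totally agree") answers by the number of questions, which is generalized here to the total number of responses so that the result always stays between 0 and 1 when several experts answer each question. The rating data and item names are invented purely for illustration.
```python
# Content Validity Index (CVI) calculations following the rules described in the text.
from typing import Dict, List

def item_cvi(ratings: List[int]) -> float:
    """Item-level CVI: proportion of experts rating the item 3 or 4
    ('representative' / 'very representative') on a 1-4 scale."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def global_cvi(responses: List[int]) -> float:
    """Global agreement index: share of 4 and 5 ('agree' / 'totally agree')
    answers among all answers given on the 1-5 agreement scale.
    (The paper divides by the number of questions; dividing by the total number
    of responses keeps the index in [0, 1] when several experts answer each question.)"""
    return sum(1 for r in responses if r >= 4) / len(responses)

APPROVAL_CUTOFF = 0.80  # 80% threshold adopted in the study

if __name__ == "__main__":
    # Invented ratings: 5 experts scoring 3 items for relevance (1-4 scale)
    relevance: Dict[str, List[int]] = {
        "objective": [4, 4, 3, 4, 4],
        "content":   [3, 4, 4, 2, 4],
        "language":  [4, 3, 4, 4, 3],
    }
    for item, scores in relevance.items():
        cvi = item_cvi(scores)
        print(f"{item}: CVI = {cvi:.2f} -> {'approved' if cvi >= APPROVAL_CUTOFF else 'revise'}")

    # Invented agreement answers (1-5 scale), pooled over all questions and experts
    agreement = [5, 4, 4, 3, 5, 4, 4, 5, 2, 4]
    print(f"global CVI = {global_cvi(agreement):.2f}")   # -> 0.80 for this invented sample
```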
(22)fter assessing all the criticism and suggestions raised by experts, the material was restructured according to their recommendations.Subsequently, the material was sent for publication on digital platforms. The e-book is available for free download on the Apple Books for the iOS platform, and on Google Play for the Android platform. (24)ll guidelines of the Declaration of Helsinki were followed in the development of this study.It was approved by the Research Ethics Committee of the proponent research institution under number 76868517.7.0000.5393. Results The script was organized and the content that should compose the e-book was selected based on findings of the literature, as seen in table 1.As the content was written to meet the needs of the population, accessible language was used to help them understand the content.Images, videos, photographs and links were selected to facilitate the interpretation of the content and make the virtual learning environment more dynamic and interactive.In addition, scripts were created for recording videos created for the project; one on opening the material, another on nutrition, and another on mental health.erage of 36.1 years, and an average academic education time of 12.86 years (SD ± 7.79).In relation to degree, 48% had a master degree, 33% held a PhD, 14% had post-doctorate, and 5% were specialists.The e-book was evaluated in several aspects by specialists in the field of health and information technology.After the evaluation, an analysis of the most frequent criticisms and/or suggestions was performed, from which the authors worked on readjustment of the material.According to the analysis of the global CVI of specialists in the field of health and information technology, an agreement rate of 80.5% was observed.All items had satisfactory indexes of agreement, with a CVI of 86.6% for overall impression, 96.8% for objective, 92.0% for content, 90.4% for relevance, 88.7% for verbal language, and 92.3% for inclusion of topics.Regarding the analysis performed by information technology specialists, an agreement index of 95.0% was identified for the quality of the interface and 86.0% for aesthetics and audiovisual.In order to respond to the criticisms and/or suggestions of experts, the goal was to create new images, update the Hypertension and Dyslipidemia Guidelines, record new videos and use new colors in the layout, as displayed in table 3. The e-book interface was built using HTML5 and Java Script.A file was generated in Electronic Publication format, which has features that promote usability and accessibility, and provides the user with an easy-to-use, dynamic, and interactive tool.As described in the previous section, after conducting the systematic search, 36 professionals were invited to participate in the study; 34 of these accepted, but only 21 completed the e-book evaluation and validation process.For assessing their expertise, the specialists were classified according to Fehring score, which ranged from 7 to 19 points, and a mean score of 12.76 was achieved, as seen in table 2. Regarding the characterization of specialists, 52.4% were female, age range of 26-55 years, av- Discussion The educational e-book is an innovative, accessible, low-cost digital tool that can be adopted in health services as a strategy for promoting cardiovascular health in PLHIV, since it was considered valid by experts, and followed the construction and validation steps recommended in the literature. 
(24)ith the increasing access to mobile devices, new digital interventions and digitization of healthcare products are perceived as a unique opportunity for health interventions. The use of digital strategies creates interesting opportunities for health promotion throughout the care and prevention process, especially in light of the rapid expansion of access to this technology, as its emergence diversifies learning methods and makes it possible to acquire knowledge not only for general purposes, but also to solve real problems. (25,26)he evolution of books stands out in this context, represented by the emergence of e-books.Following the evolution of man and the emergence of new technologies, the book production process has changed, culminating in digital publications. (26)n the health area, the use of varied information and communication technologies goes beyond the transmission of information, as it supports self-care, behavioral changes, and exchange of information and emotional support among peers, in addition to providing benefits in the screening of people with chronic diseases. (27,28)he rapid growth of mobile communication technologies (cell phone, smartphone) is being used to complement traditional public health programs, to promote health and healthy behaviors, to raise awareness of health risks and to manage treatment and adherence to medication. (29)he preparation of the e-book starts from this assumption, since this material provides knowledge for decision making based on self-care and promotion of behavioral changes with a view to reducing cardiovascular risk in PLHIV. In this sense, the e-book appears an important tool linked to the Theories of Health Behavior Change, since it aims to promote subjects' knowledge, self-efficacy, and motivation, skills that are the basis for behavioral change. (29)t can be inferred that the use of technology associated with quality information can promote greater interest of subjects, and foster knowledge and motivation.The human functioning is inherent to a wide network of influences mediated by cognitive processes in the adaptation to human changes. (29)n the context of PLHIV, studies show that technologies have been increasingly used in interventions aimed at preventing transmission of the virus or monitoring patients, promoting improved accessibility and quality of care, evidencing the originality of this study.Other advantages include its ability to impact hard-to-reach populations, including those with typically stigmatized behaviors within health services. (10,11,25,30)herefore, the use of the e-book ensures access to secure information, regardless of where patients are, since it can be accessed from anywhere at any time and assist them in the process of knowing the risk factors for cardiovascular diseases, providing a reflection on their habits. Mobile health technology has been a focus of growing interest as a way to improve cardiovascular prevention by combining modifiable risk factors, which represent most global risk factors for Accepted Add who the material is intended for Accepted cardiovascular diseases, in a scalable and accessible way with the potential to assist in lifestyle modification. (31)ven though the use of technology in health has the potential to improve the efficiency and effectiveness of care, studies have shown that acceptance among health professionals is still limited. 
(32,33)n this context, the possibility of using the e-book during healthcare is highlighted as a means of passing the guidelines to PLHIV with a view to reducing cardiovascular risk. In order to ensure that this instrument is presented in a safe and effective manner, the satisfactory levels of agreement indices presented during the validation process are highlighted.This factor demonstrates the importance of validation with a committee of experts to ensure the clarity, adequacy and relevance of the content, as well as the language of the educational material. A study showed a variety of factors listed by PLHIV to face the challenges of having a healthy lifestyle.Although healthy eating and physical activity were recognized as important components of a good lifestyle, socioeconomic and/or financial reasons were the main barriers to the adoption of this behavior. (33)or the success of lifestyle interventions in the long run, recommendations must include much more than a list of foods to eat and/or avoid. (33)For this reason, the material was designed and built in such a way as to provide possible changes in habits, regardless of financial issues. A study indicated that participants were willing to learn about healthier food choices and enthusiastic to share knowledge with each other.The e-book tool can meet these needs since it brings information and enables the sharing of ideas. (33)nderstanding the positive aspects from the perspective of patients provides the basis for patient centered counseling approaches that motivate people towards changing their behaviors. 33In this context, modifiable cardiovascular risk factors should be a significant component of care, and part of the care routine established by health professionals. Consequently, knowing the effects of dyslipidemia, stress, poor diet, obesity, smoking, hyperten-sion, and diabetes is the first step to perceiving cardiovascular risk and determining the changes that must be made. This material was developed to meet the needs of this population and make them aware of the need for prevention, adequate management of cardiovascular risk factors, and of their role as agents for changing their own habits, able to make decisions by themselves and move forward with necessary changes to a better quality of life. We emphasize that this work is an example of how nurses can use new technologies and innovations to improve care and look at the patient in a holistic way, enabling a new way of doing health, using technology in their favor. Our study had some limitations.As we had low feedback from the experts, the study took a broader time to be completed, although this did not interfere in the methodological rigor or in the quality of our findings.Furthermore, we could not validate the e-book with the population given the timeframe of the study. 
Conclusion This study presented the steps for the construction and validation of educational material in digital format (e-book) for the Brazilian population, with guidance on knowledge of the risk factors for cardiovascular diseases. The production followed the instructional design development phases, through which the content was selected, the material was built, and the layout was prepared, followed by the creation of images and the recording of videos. An evaluation was performed by experts and the content was made available for download. A global agreement index of 80.5% was found. Furthermore, suggestions were accepted aiming to ensure a more complete, cohesive, easy-to-read and updated material. According to the evaluation by specialists, the material proved to be valid for use by PLHIV with the purpose of understanding their cardiovascular risk and knowing healthier habits that may help in the prevention of cardiovascular diseases.
Table 1. Script prepared for version 1 of the e-book - Take care of your heart: strategies to reduce cardiovascular risk in people living with HIV
Table 2. Fehring classification score of specialists
Table 3. Suggestions from the expert committee after e-book evaluation
2023-08-13T15:07:57.096Z
2023-08-09T00:00:00.000
{ "year": 2023, "sha1": "271dd30dae6ba9bb6dc67e4ea6c73699785b4e5f", "oa_license": "CCBY", "oa_url": "https://acta-ape.org/wp-content/uploads/articles_xml/1982-0194-ape-36-eAPE00733/1982-0194-ape-36-eAPE00733.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "84e0ade73ffe4e0710503462cd3d13de7259bcbb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
233637676
pes2o/s2orc
v3-fos-license
A Pragmatic Analysis of Personal Deixes in Lyrical Poetry: Ezra Pound's Lyrics "Girl" and "A Virginal" Deixes belong to the field of both semantics and pragmatics as they lie in the edge of these two fields. Pragmatically, they are concerned with the relationship between the structure of a language and the contexts. The present work aims at analyzing the use of deixes using Levinson’s (1983) and Yule's (1996) concept of deixes, where the latter maintained that the referents of the deixes cannot be realized apart from the context where they are used. He added that the contextual information of certain utterances involves information about the participants (the speaker and the addressee), the time and the place. Consequently, a qualitativedescriptive approach has been adopted to meet the objective of the study which reads, “examining the 1 and 2 person deixes in two of Pound’s lyrics ‘Girl’ and ‘A Virginal’. The study has revealed that knowledge about context affects the manifestation of the referents of deixes and uncovers the ambiguity of understanding the poetic extracts. Pragmatically speaking, the referents of the pronouns cannot be detected in isolation from the context where they are used. It has also been found that the context and the interpretation of the literary text can help in determining the referents of the deixes in terms of person, number and gender. Finally, a great relation appears when the referent of the deixis is realized in accordance with its context. Introduction Both semantics and pragmatics deal with the study of meaning. Semantics concentrates on studying the relation between word and sentence meanings while pragmatics deals with how contexts shape meaning. Pragmatics displays that understanding utterances does not only depend on the linguistic knowledge, but also on the shared knowledge about the context i.e., the status of the people, the intention of the speaker, the place and time involved in the context of an utterance. Pragmatic awareness is regarded as one of the most challenging aspects of language learning, and it often comes through experience. Deictic expressions are considered the key link between the people involved in the utterance, time frame and place. Deictic is a Greek word, meaning to show whereas deixis is a word used in both pragmatics and linguistics to refer to a process whereby the meaning of a word depends on the knowledge of the context of the utterance. Deictic expressions can be realized in all types of texts, including literature and particularly poetry. Such expressions need to be analyzed pragmatically to know their hidden meanings (Crystal,1987). This paper aims at examining how knowledge about context affects the realization of their referents through identifying the occurrence of the 1 st and 2 nd person deixes in two of Pound's lyrics 'Girl' and 'A Virginal'. The study is of a significant value as it shows a sufficient survey of the concepts of deixes and pragmatics to enrich researchers' knowledge about these concepts and make them understand the importance of context in identifying the meaning of the deictic expressions and uncovering the ambiguity in understanding poetic extracts. Pragmatics Pragmatics is that branch of linguistics that deals with meaning in context. The main focus of pragmatics is the person's capability of deriving meanings from particular kinds of speech to realize what a speaker refers to and the relation between the new information and what happened before. 
It further helps to understand the speech by resorting to past knowledge about the topic and the speaker (Charles,1998). Levinson (1983, p. 6) stated that "pragmatics is the study of those relations between language and context that are grammaticalized, or encoded in the structure of a language." Pragmatics is the study of meaning that systematically relies on the use of language. The principle subjects of pragmatics are: presupposition, implicature, deixes and speech acts. Crystal (1987, p. 120) stated that "pragmatics studies the factors that govern our choices of language in social interactions and the effects of our choice on others." From this point of view, pragmatics not only deals with what is said, but also with the factors behind it and the reasons that make the speakers or the writers choose a specific expression rather than another in certain contexts. Pragmatics, according to Yule (1996), studies what a speaker means. Speakers must have the ability to convey their meaning depending on the shared assumptions and knowledge about the things they are communicating about. In this context, pragmatics shows the contribution of language content to overcome ambiguity. He added (2010) that pragmatics deals with the implied meaning, or how to recognize the meaning of the utterance even if the intended meaning is not directly said or written. That is why; the writers or the speakers have to rely on the shared presuppositions and knowledge of the communication subject. To sum up, the use of language in communication cannot be disconnected from pragmatics. However, pragmatics offers some expressions which help the speaker to avoid any sort of ambiguity. Deixes are one of these expressions. Understanding the intended meaning of the utterance depends on word meaning, context and the prior knowledge about the intended subject. Pragmatics deals with how context affects the listeners or readers in interpreting the meaning of the utterance. Fromkin, Rodman, & Hyams (2003) argued that context does not refer to only the place, identity of the speaker and listener, but also to thing and its surrounding world. Context refers to the circumstances which constitute the setting of the event or speech through which the utterance can be understood. Yule (2010) mentioned two types of context. The first one is the linguistic context, i.e., the co-text, and the second one is the physical context. The linguistic context refers to the set of words, phrases or sentences that surround the co-text; it has a great deal of effect on what one thinks a word probably means. The second context is the physical context. People understand what they read and hear through the physical context; especially the place and time at which the linguistic expressions are mentioned. Fundamentally, physical context indicates where and when events are happened. In short, context can express speakers' or writers' intended meanings. Deixes Deixis (deictic) is a term found in linguistics; it refers to the features of language that denote personal pronouns, time, and the place of the situation where the utterance takes place. It is the process of pointing through language and its denotation changes from one discourse to another. That is; its meaning can be derived from that situation itself (Crystal,1998). Types of Deixes Deixes are of three types: person deixes, time deixes and place deixes. Person deixes, according to Levinson (1983), are concerned with encoding the role of participants in the speech event. 
In this case, the word 'participants' refers to the speaker as well as the addressee; i.e., when the speaker interacts with the addressee. According to Yule (1996), person deixes are of three categories that are exemplified by different pronouns, such as 'I' for the first person, 'you' for the second person, and 'he, she, it' for the third person. Levinson (1983) contended that the category of first person deixes is the grammatical referent of the speaker to himself. Sometimes, the first person is used not only to refer to the speaker, but also to both the speaker and the addressee or speaker and the group. For example 'we' is a 1 st person pronoun which refers to the speaker and the addressee. Yule (1996) provided what is called the inclusive and exclusive use of the pronoun 'we'. The first use denotes the speaker and others apart from the addressee while the term exclusive means the speaker and the addressee. Levinson (1983) confirmed that the first person pronouns include the following: ('I/ me' are for the singular referent and 'we/us' are for plural referents). Moreover, the second person 'you' is used to refer to one addressee or more. It is only one pronoun which is used to refer to both singular and plural referents. The third person of deixes is used to refer to neither the speaker nor to the addressee. There are particular pronouns that refer to this category, such as: 'he/him, she/her, it' are for singular referents and 'they/them' are for plural referents. The second type of deixes is place deixes or spatial deixes, which are used to describe the location of the referents mentioned in the speech. The third type of deixes is time deixes, or temporal deixes. It denotes the expressions that are used to refer to a point in time when the speaker is speaking (Gjergji, 2015). Literature Review There are numbers of studies concerning the pragmatic analysis of deixes in various contexts. Some of them used the political speeches as data of analysis while others used some selected novels or poems. In 2010, Al-Fikasari analyzed the use of person deixes in Obama's speech both stylistically and pragmatically, while Sari (2015) mentioned studies used nearly the same approach while dealing with deixes; however, their data was different. They all examined the used deixes, their frequencies, their dominant types, and the reason behind this dominance. The Adopted Model According to Yule (1996), deixes are divided into three kinds: person deixes, temporal deixes, and spatial deixes. Person deixes are classified into three types, 1 st person, 2 nd person and 3 rd person. Levinson (1983, p.17) clarified that "deixis refers to the phenomenon wherein understanding the meaning of certain words and phrases in an utterance requires contextual information. Words or phrases that require contextual information to convey meaning are deictic." Reference Deixes cannot be studied apart from the words that are used to refer to persons or things. To understand the intended meaning of the speaker or writer, the listener has to know or realize the referent. The reference is the main goal of the speaker or writer; that is why, the writer has to use certain linguistic forms to identify the person or the thing communicated about to grasp the meaning of the utterance. Person deixes deal with encoding the role of persons who are involved in the utterance. Person deixes are related to the person grammatical categories. It includes the following: 1. 
1 st person deixes, such as: 'I, me, myself, my, mine', which refer to the speaker as a participant. 2-1 st person deixes, such as: 'we, us, ourselves, our, ours', which refer to the speaker and referents. It is of following two sub-types: a-Inclusive 1 st person deixes, which refer to a group including the addressee. b-Exclusive 1 st person deixes, which refer to a group without the addressee. 3-2 nd person deixes, such as 'you, yourself, yourselves, your, yours', which refer to the addressee who can be one or more than one person (Yule, 1996). Anaphora and Cataphora Yule (1996) confirmed that anaphora is a process by which the referent of the pronoun is introduced before mentioning the pronoun. The referent is called the antecedent and the first utterance is called antecedent. Cataphora is different from anaphora as the former involves the use of personal pronouns before the first mentioning of referents. Methodology of the Study The research adopted a qualitative descriptive method by applying George Yule's (1996) classification of deixes in his book pragmatics and Livenson's viewpoint about deixes. The study focuses on the use of first and second personal pronouns, their referents, and the role of context in uncovering the deictic meaning in Ezra Pound's lyrics 'Girl', and 'A Verginal'. Data of the Study According to Hornby (1987), lyrics is a kind of poetry that is used to express the personal thoughts and feelings. It is originated in ancient Greek literature, and is composed for singing to give an inspiration for life. A lyrics is a short emotional poem having a song quality. It is written in the first person, and expresses the writer's emotions. The selected lyrics in this study is of Ezra Pound. He is the poet who is responsible for endorsing a modernist aesthetic movement in poetry. He became a fascist collaborator after travelling to Italy during the Second World War. His contributions to poetry started with his imagism promulgation. He was born in 1885. He studied for two years at the college, but left it in 1905. He taught at Wabash College, then, he travelled to Spain where he became interested in Chinese and Japanese poetry. In 1914, he married Dorothy Shakespeare and became the editor of the Little Review in London in 1917. His works are Ripostes (1912), Hugh Selwyn Mauberley (1920, and his 800page epic poem, The Cantos . Introduction to the Poem 'Girl' The tree has entered my hands, The sap has ascended my arms, The tree has grown in my breast- "Girl" is the poem which has two interpretations. The first one is considered an imaginary poem where the poet was the narrator. He held the tree in his hand and made it possible for its essence to invade him. As the tree grew downward and outward, it could escapulate to the one whom he loves. The second interpretation comes from the Greek mythology, which is about the story of Dophe and Apolla. Dophe did not love Apollo, yet the latter loved her. She asked her father to turn her into a tree so that Apollo would not recognize her. As her father accepted to turn her, her skin changed to dark, her hair turned leaves and her arms became branches. Later the branches shrank away yet Apollo promised to take care of the tree and decorate the leader's head with the leaves of that tree. Thus, through power, he rendered the tree green (Alexander,1979). Introduction to the Poem 'A Virginal' No, no! Go from me. I have left her lately. 
I will not spoil my sheath with lesser brightness,
For my surrounding air hath a new lightness;
Slight are her arms, yet they have bound me straitly
And left me cloaked as with a gauze of aether;
As with sweet leaves; as with subtle clearness.
Oh, I have picked up magic in her nearness
To sheathe me half in half the things that sheathe her.
No, no! Go from me. I have still the flavour,
Soft as spring wind that's come from birchen bowers.
Green come the shoots, aye April in the branches,
As winter's wound with her sleight hand she staunches,
Hath of the trees a likeness of the savour:
As white their bark, so white this lady's hours.
'A Virginal' is a poem which was published in 1912. Its title refers to a small musical instrument used by girls in the 16th and 17th centuries. The poem takes the form of a Petrarchan sonnet. It contains fourteen lines and consists of two parts. It tells the story of a man who loved a young virgin and who devoted himself to her to the extent that he cannot speak to another woman. The speaker in this lyrics is a man who is screaming at a woman, because he already has his own virgin lover who has "bound [him] straitly." He mentioned that he will not love another woman; then he described his love, the magic surrounding her, and her effects on him. He could not accept any other woman because, according to him, no other woman would be as good as his virgin girl (Alexander, 1979). Data Analysis The analysis of the two poems under study will be displayed in accordance with the objectives and steps mentioned in the methodology. The first step will be figuring out the person deixes in 'Girl'. After that, the referents of these deixes and their frequencies of occurrence will be specified in separate tables, followed by an elaboration on the effect of contexts on realizing these deixes. Pound used the two kinds of personal deixes: the 1st person deixis is represented by the use of the possessive form 'my', which has a frequency of 40%, while 10% is for the use of the 1st person objective case 'me'. He further used the 2nd person deixis as frequently as the 1st person deixes taken together, i.e., the frequency of the 2nd person deixis 'you' is also 50%.
Table 1. 1st and 2nd Person Deixes in 'Girl'
The interpretation of the meaning and the assignment of the referents of both the 1st and the 2nd person deixes depend on the interpretation of the overall meaning of the lyrics as well as on one's awareness of its context. As this lyrics has two interpretations, each interpretation has a different context than the other. The referents of the two kinds of pronouns are different, as shown in Table 2. Pound used cataphora in the first stanzas. He used the 1st person deixis without mentioning who the narrator was. One can categorize the narrator or the one involved in the context according to the two interpretations of the chosen lyrics. However, in the second stanza, Pound used anaphoric reference, as he used the pronouns after identifying the nouns being referred to. Besides, the 1st person deixes according to the first interpretation have a male connotation and the 2nd person deixes have a female connotation. However, according to the second interpretation, both the 1st and the 2nd person deixes have a female connotation. In 'A Virginal', Pound set his poem about a man. Thus, the poem was written from the viewpoint of that man, who was narrator and lover at the same time. The man was describing his virgin beloved.
He used the 1 st person deixis as he was describing his feeling. As for the woman, who was a third person, she was not within the limit of this study. The addressee was not there in the context. From the pronouns used, one can deduce that his beloved, i.e., the addressee was not involved in the situation and she thus appeared as a 3 rd person in the description. Thus, 100% is the frequency of the use of 1 st person deixis in contrast with the 2 nd person deixis. In the first line, Pound used the 1 st person deixes 'me' and 'I' without first mentioning the referents; therefore, the reader of the lyrics will feel confused whether the one involved in the lyrics is the poet himself or a certain figure, and whether these 'me' and 'I' have male of female referents. Later, after going through the whole poem, one can know that the 1 st person pronoun refers to a man talking about his virgin beloved and the reason behind leaving her in spite of the fact that he was connected with her supernaturally or spiritually. 4-Conclusions It has been found that the context and the interpretation of the literary text can help determining the referents of the deixes. A great relation appears between realizing the referents of the deixes and their contexts. This outcome can be manifested while grasping the referents of the deixes in both lyrics under analysis. The personal deixes in the 'Girl' appear to have two different referents depending on how the lyrics is interpreted and on the context of each lyrics. Moreover, the gender of the referent can be manifested through the context. Pound used the 1 st person pronoun more than the 2 nd person pronouns in both lyrics. This reflects that the poet is the dominant of the context whether he is involved or not in the context. He wrote his lyrics from the viewpoint of the speaker. He intended to create characters in his lyrics relying on the 1 st person actor. In addition, personal deixes show gender distinctions and mark a number of overlapping instances in gender realization; a problem that is best solved by resorting to the shared knowledge of the context.
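As a closing illustration of how the frequency figures reported in the analysis above can be obtained, the sketch below tags first- and second-person deictic forms (the pronoun sets listed in the adopted model) in a text and reports their relative shares. It is run here on the three-line excerpt of 'Girl' quoted earlier, so the numbers it prints demonstrate the counting procedure rather than reproduce the percentages reported for the full poems; the function names and the excerpt variable are only for this demonstration.
```python
# Count 1st- and 2nd-person deictic forms in a text and report their shares.
import re
from collections import Counter
from typing import Optional

FIRST_SINGULAR = {"i", "me", "myself", "my", "mine"}
FIRST_PLURAL   = {"we", "us", "ourselves", "our", "ours"}
SECOND_PERSON  = {"you", "yourself", "yourselves", "your", "yours"}

def classify(token: str) -> Optional[str]:
    """Map a token to its person-deixis category, or None if it is not deictic."""
    t = token.lower()
    if t in FIRST_SINGULAR:
        return "1st person (singular)"
    if t in FIRST_PLURAL:
        return "1st person (plural)"
    if t in SECOND_PERSON:
        return "2nd person"
    return None

def deixis_shares(text: str) -> dict:
    """Relative frequency of each deixis category among all deictic tokens found."""
    tokens = re.findall(r"[A-Za-z']+", text)
    counts = Counter(c for tok in tokens if (c := classify(tok)) is not None)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()} if total else {}

if __name__ == "__main__":
    # Excerpt of 'Girl' quoted in the text (three lines only)
    girl_excerpt = (
        "The tree has entered my hands, "
        "The sap has ascended my arms, "
        "The tree has grown in my breast-"
    )
    print(deixis_shares(girl_excerpt))   # only 'my' occurs in this excerpt
```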
2021-04-30T06:20:43.091Z
2021-03-28T00:00:00.000
{ "year": 2021, "sha1": "7c09ef45f04050ce94bd79299fbd60ef1692808f", "oa_license": "CCBY", "oa_url": "https://jcoeduw.uobaghdad.edu.iq/index.php/journal/article/download/1475/1311", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d4ab2c092158b76082dcb9d0701efae58b4d2aa2", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [ "Philosophy" ] }
249153170
pes2o/s2orc
v3-fos-license
Integrated Management Strategies for Epidermolysis Bullosa: Current Insights Abstract Epidermolysis bullosa (EB) is a group of rare genodermatoses that is characterized by skin fragility resulting from minor trauma. There are four major subtypes, namely, EB simplex, junctional EB, dystrophic EB and Kindler EB, depending upon the localization of defective protein and resulting plane of blister formation. The phenotype is heterogeneous in terms of severity and majority of them present at birth or neonatal period. Currently, the treatment is mainly supportive and requires multidisciplinary care. The complex molecular pathology creates difficulty in discovering a unified curative treatment approach. But with arduous efforts, significant progress has been made in the development of treatment strategies in the last decade. The management strategies range from targeting the underlying causative factor to symptom-relieving approaches, and include gene, mRNA, protein, cell and combination therapies. In this review, we enumerate the promising approaches that are currently under various stages of investigation to provide effective treatment for patients with EB. Introduction Epidermolysis bullosa (EB) is the prototype of genetic disorders of skin fragility where blister formation, skin peeling, erosions and ulceration develop on various body parts such as nail, hair, teeth, oral, ocular, esophageal, tracheal, and genito-urinary system. Majority of these disorders usually present at birth or neonatal period. The severe ones that present at birth can succumb to illness as early as 6 months of age due to fever, sepsis, and failure to thrive. Some types which present in childhood may resolve spontaneously with age and others may present in adulthood with mild symptoms. The symptoms may be localized to hand and foot or generalized to the whole body with significant involvement of other epithelial lined structures leading to multisystem involvement. Significant morbidity may be seen due to recurrent infections, stricture formation in esophageal tract, contractures, acral mutilation, microstomia, anemia, osteoporosis and cutaneous malignancies like squamous cell carcinoma and basal cell carcinoma. These disorders have profound clinical and genetic heterogeneity. Genetics of EB is complex with the same gene leading to both autosomal dominant (AD) and autosomal recessive (AR) inheritance with different phenotypes. Different genes may lead to same phenotype in different subclasses as well. Many genes are still being discovered and the complete molecular pathogenesis is yet to be deciphered. Figure 1 depicts different proteins and their localization that are involved in pathogenesis of EB. Classification: There are 4 major subtypes depending upon the layer of the skin where the defective protein is localized and leads to blister formation in that layer: 1 (i) EB simplex -fragility involving epidermis of skin (ii) EB junctional -fragility involving lamina lucida of the basement membrane of skin (iii) EB dystrophic -fragility involving lamina densa of the basement membrane of skin (iv) Kindler EB -fragility involving skin epidermis or basement membrane or underlying connective tissue These 4 types are further classified into multiple types based on mode of inheritance, phenotypic severity and isolated or syndromic. According to the latest classification of an international consensus group on inherited EB disorders, there are 14 subtypes of EB simplex, 9 subtypes of EB junctional and 11 subtypes of EB dystrophic. 
1 Incidence of epidermolysis bullosa varies with type, with EB simplex being the most common and Kindler EB being the rarest, with only 250 reported individuals to date. 2 Overview of Different Types of EB (a) EB simplex (EBS): This type is characterized by non-scarring blisters and erosions triggered by minor mechanical trauma. Seven genes have been implicated and 75% of cases involve keratin 5 and 14, which are produced by basal keratinocytes. These two keratins heterodimerize to form a network that provides strength. The common subtypes are enumerated in Table 1. (b) EB junctional (JEB): This type is characterized by mild or severe blisters with little or no trauma and significant oral and mucosal involvement. Genes maintaining the lamina lucida are involved. 3 Table 2 shows a few common subtypes of junctional EB. (c) EB dystrophic (DEB): Because blistering occurs at a deep level of the skin, fibrosis, milia and scarring are common. COL7A1 mutation leads to defective type VII collagen, which forms the anchoring fibrils of the basement membrane. 3 Dystrophic EB subtypes are shown in Table 3. (d) Kindler EB: A rare form of EB where blisters occur in multiple layers, and hence it may mimic any of the three other types. Patients characteristically present with blistering, atrophy, poikiloderma, telangiectasia, and gum, ocular, esophageal and genito-urinary involvement. Kindler EB has autosomal recessive inheritance and is caused by variation in the FERMT1 gene, which codes for kindlin-1. This protein is localized next to basal keratinocytes and has a role in adhesion. Defective kindlin leads to disorganization of keratinocytes. 4 Laboratory Diagnosis Due to significant clinical overlap between the various subtypes, diagnosis based on clinical features alone is not entirely possible. Routine histological examination does not have good enough resolution to identify the plane of cleavage and is therefore not used. Other methods of diagnosis include immunofluorescence mapping (IFM) and transmission electron microscopy (TEM). Immunofluorescence mapping is a rapid technique in which a skin biopsy taken from the perilesional area is subjected to fluorescently labelled antibodies against the protein of interest. This helps to visualize the layer of skin where cleavage occurs. 3 Transmission electron microscopy identifies the ultrastructural aberration in the layer of the skin and delineates additional features such as keratin filaments, desmosomes etc., which may help in subtyping. 3 However, both these methods require expertise in specimen transport, processing, and result interpretation. They are labor- and cost-intensive and are therefore not available at all laboratories. Moreover, artefacts may be produced due to the use of local anesthetic or tissue handling. Some cases may produce no result when no cleavage or immunofluorescence is seen. Also, these methods do not give any information about the underlying genetic defect. Nevertheless, these techniques have demonstrated high sensitivity (IFM, 97%; TEM, 71%) and specificity (IFM, 100%; TEM, 81%). 5 These techniques are complementary to genetic testing and help in confirming the diagnosis in cases where a variant of uncertain significance is obtained on next generation sequencing. At present, next generation sequencing is the gold standard for diagnosis of EB. The concordance of IFM and TEM with next generation sequencing (NGS) is 76% and 78.5%, respectively. 6 NGS based on massive parallel sequencing is the most effective approach to identify the candidate gene.
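The sensitivity and specificity figures above only translate into post-test (predictive) probabilities once a pre-test probability is assumed. The sketch below shows that standard arithmetic; the 30% pre-test probability is an arbitrary illustrative assumption, not a value taken from the cited studies.

```python
def predictive_values(sensitivity: float, specificity: float, pretest: float):
    """Bayes' rule for a binary test: return (PPV, NPV) given a pre-test probability."""
    tp = sensitivity * pretest
    fn = (1 - sensitivity) * pretest
    tn = specificity * (1 - pretest)
    fp = (1 - specificity) * (1 - pretest)
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    return ppv, npv

# Reported performance: IFM 97%/100%, TEM 71%/81% (sensitivity/specificity).
for name, se, sp in [("IFM", 0.97, 1.00), ("TEM", 0.71, 0.81)]:
    ppv, npv = predictive_values(se, sp, pretest=0.30)  # assumed pre-test probability
    print(f"{name}: PPV={ppv:.2f}, NPV={npv:.2f}")
```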
Molecular diagnosis not only confirms the subtype, but also identifies those subtle types of EB which do not have ultrastructural abnormalities. Novel genes may also be identified which may help in expanding the classification of EB and provide knowledge on the etiology of the disease. Confirmed molecular diagnosis helps in predicting the disease outcome as well. It provides information on mode of inheritance, which helps in counselling of the family and is also useful for the purpose of prenatal diagnosis. Next generation sequencing may be offered in the form of targeted panels or whole exome. Targeted panels comprising the testing for common genes have high efficiency (94.3%) 7 and sensitivity ranges from 75-98% 8 with low cost and rapid turnaround time. Whole exome sequencing has the potential to identify novel disease-causing genes, further improving diagnostic sensitivity. Large in-frame deletion in COL7A1 and KRT5 has also been reported and this can be diagnosed by multiplex ligation-dependent probe amplification (MLPA) or quantitative polymerase chain reaction (PCR). 9,10 The clinical practice guidelines for laboratory diagnosis of EB that has been previously published by internationally recognized experts in this field emphasizes in detail the importance of early and accurate diagnosis. 11 Management Currently, there is no definitive treatment and cure remains elusive for epidermolysis bullosa (EB). A patient-centric multidisciplinary approach therefore is required for the effective management of these individuals. One should adopt strategies to ameliorate symptoms that are critically needed to increase patients' quality of life. Good skin care to prevent infections, encouraging wound healing, pain and pruritus control and minimizing complications by avoiding trauma, management of extracutaneous complications, nutritional support, and occupational therapy are required for effective management. Comprehensive guidelines regarding the interdisciplinary management of EB have been previously published. 12 The following section will provide an overview of novel therapies ( Figure 2) that are in various stages of experimentation. Novel Experimental Therapies Supportive approaches do not sufficiently meet the medical needs of those suffering from severe EB subtypes. Hence, strategies to correct or modulate the underlying disease mechanism are essential. This is reflected by the increasing number of trials that are being conducted for this condition in the last decade. The goal of these trials is the prolonged or permanent restoration of functional protein expression through addition, replacement, modification, disruption or correction of the defect at the DNA, RNA, protein, or cellular level. The diversity of clinical pathology presents a major challenge in optimizing patient management. We divide these strategies into those that target the underlying cause and those that alleviate the associated comorbidities. DNA-Based Strategies Epidermolysis bullosa (EB) is inherited in both autosomal dominant and recessive forms. Gene therapy strategies mainly focus on replacing genes in recessive forms and silencing genes in dominant forms. Gene Replacement Therapies Ex vivo Gene Therapy. This therapy uses viral vectors to replace a missing gene product by isolating patients' cells and inserting a normal gene in vitro followed by expansion of corrected cells into epidermal sheets and grafting these back onto wounds in patients. 
This method has been successfully utilized in three patients with junctional EB (JEB) who were carrying mutations in the LAMB3 gene, with a major challenge being the identification of the keratinocyte stem cells for the gene therapy. Targeting the holoclone stem cells, which have the greatest capability of self-renewal and proliferation, led to satisfactory results with no adverse effects on long-term follow up. [13][14][15] However, a similar therapeutic effect was not obtained in individuals with recessive dystrophic EB (DEB), 16,17 partly owing to the large size of the transgene. This illustrates how therapeutic effects can vary among subtypes owing to differences in the biology of the affected protein. General limitations of ex vivo gene therapy include the requirement of multiple biopsies to ensure successful stem cell isolation, and extensive debridement for wound bed preparation to increase engraftment success. 18 In vivo Gene Therapy. This is performed via topical delivery of the corrective gene, using vectors, directly to chronic skin lesions. In preclinical studies, highly branched poly(β-amino ester)/minicircle COL7A1 polymeric nanoparticles for gene delivery into recessive DEB keratinocytes showed positive results. 19 Currently, in vivo gene therapy using viral vectors is being investigated for patients with DEB 20 along with other clinical trials (Phase III trial, NCT04491604). Potential advantages include low risk of immunological reactions, minimal toxicity, more stable delivery, low costs, easy manufacturing and reduced interventional burden. The overall limitation of gene replacement therapy is patient selection. Only those patients who have partial but positive expression of the mutated protein can be included, in order to minimize the risk of autoreactivity to the newly formed wild-type proteins. 21 Gene Editing Therapies Gene editing by designer nucleases such as zinc-finger nucleases (ZFN), transcription activator-like effector nucleases (TALEN) and clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated protein 9 (Cas9) has revolutionized the field of genetics. These tools can permanently correct the genetic defect at the DNA level by utilizing an exogenous donor template, generating double-stranded breaks at the loci of interest and activating endogenous DNA repair mechanisms (non-homologous end joining (NHEJ) or homology-directed repair (HDR)). While NHEJ can be used to induce disruption of a dominant mutant allele, reframing of a frameshift mutation or skipping of a mutant-bearing exon, HDR can achieve a precise repair and complete restoration of the wild-type genetic sequence. 22 In preclinical studies, TALENs were first used to edit primary recessive DEB fibroblasts through homology-directed repair. 23 Ex vivo gene disruption using these tools was achieved for dominant negative mutations in the COL7A1 24 and KRT5 genes. 25 Base editing, which corrects individual bases without introducing double-stranded breaks, has also been employed successfully in recessive DEB. 26 Most recently, the expansion and refinement of base editing in the form of "prime editing" represents a further advancement that can potentially edit the vast majority of all pathogenic EB mutations. 27 The dilemma regarding the efficacy and safety of these techniques, especially the unpredictable off-target effect, precludes their entry into clinical applicability at present.
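Whether disrupting, reframing or skipping a mutant-bearing exon leaves a translatable transcript depends in part on simple reading-frame bookkeeping: an exon whose length is a multiple of three can be removed without shifting the downstream frame. The snippet below is a minimal sketch of that check only; the exon lengths are hypothetical values, not COL7A1 or KRT5 annotations. The same consideration applies to the antisense-oligonucleotide exon skipping described in the next section.

```python
def exon_skip_in_frame(exon_length_nt: int) -> bool:
    """True if removing an exon of this length preserves the downstream reading frame."""
    return exon_length_nt % 3 == 0

# Hypothetical exon lengths in nucleotides (not real COL7A1/KRT5 annotations).
exons = {"exon_A": 201, "exon_B": 96, "exon_C": 145}
for name, length in exons.items():
    verdict = "in-frame (skippable)" if exon_skip_in_frame(length) else "frameshift if skipped"
    print(f"{name}: {length} nt -> {verdict}")
```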
mRNA-Based Strategies a) Antisense oligonucleotide (AON): AONs are short fragments of single-stranded DNA or RNA that specifically bind to a complementary sequence in the target pre-mRNA and mask the mutated exon from the splicing machinery, thereby excluding it from the mature mRNA. Thus, a truncated but functional protein can be obtained and is useful where inframe exons encode non-essential domains and their deletion from the protein is unlikely to result in major structural or functional changes. This method has been successfully employed in patients with DEB due to mutations in exons 13, 70, 73, 80 and 105 of COL7A1 gene. [28][29][30][31] A double-blind, randomized, intra-subject, placebo-controlled clinical trial of an AON targeting exon 73 of COL7A1 gene in DEB wounds (NCT03605069) is currently being assessed. 32 Overall limitations of AON-based therapy include transient effect and its beneficial effect on a limited subset of patients with desired mutations. b) Small interfering RNA (siRNA): siRNA has gained attention as a potential therapeutic reagent due to its ability to inhibit expression of a mutant mRNA without silencing the wild-type allele and is most suitable for dominant negative mutations. This method has been employed in dominant DEB and EBS. [33][34][35] The long-term safety of siRNA is still not clear due to the off-target effect and intact delivery of siRNA into skin in vivo is a major challenge. c) Spliceosome-mediated RNA trans-splicing (SMaRT): The endogenous splicing machinery is exploited to replace mutated sequences of an endogenous pre-mRNA transcript with wild-type sequences provided by an exogenously provided RNA trans-splicing molecule. 36 The same template can be used to correct a pool of mutations. This type of RNA editing was first achieved for PLEC gene 37 followed by KRT14 38 and COL7A1 39,40 genes by in vitro and in vivo methods in preclinical studies. Protein-Based Strategies Replacement of a missing or faulty protein with wild type form is currently being evaluated only for recessive DEB and is under Phase II trial (NCT04599881). A pre-clinical study has shown that intravenously injected Recombinant Human Type VII Collagen led to its adherence to skin wounds and restored skin integrity of DEB. 41 Poor uptake of protein by skin cells owing to its size, poor accessibility to other extracutaneous tissues, unknown duration of efficacy and immune responses are its downsides. Cell-Based Strategies Revertant mosaicism (RM) is a process by which inherited mutation is rescued by a second somatic or postzygotic mutation that results in a functional protein. Because of this nature, it is often considered as a 'natural gene therapy'. This phenomenon has been reported in all types of EB, particularly in intermediate JEB. 42 The mechanisms for revertant mosaicism include formation of back mutations, second-site mutations, intragenic recombination, gene conversion and errors during DNA repair. 43 Revertant mosaicism in the skin can be readily detected using sequence analysis of genomic DNA isolated using laser capture microdissection (LCM) of biopsies from suspected revertant areas, as well as immunohistochemistry to detect changes in protein expression. 44 This mechanism was exploited and the resultant patchy healthy skin due to revertant mosaicism can be expanded in vitro and can be used to promote healing of affected skin areas. 
45 Cultured epidermal autografts containing revertant cells have also been used in the management of chronic wounds in patients with recessive DEB. 46 Mesenchymal stem/stromal cells (MSCs), which have stem-cell capabilities, immunomodulatory and antiinflammatory effects, were delivered through various routes (intravenous/ intradermally) and their effects are being assessed in patients with EB. In recessive DEB, intravenous infusion of MSCs resulted in improved wound healing and reduced pain and itching in children. 47 Usefulness of MSC subpopulation including bone marrow-derived muse cells were also demonstrated in DEB patients. 48 The unresolved doubts with this mode of therapy exist in terms of the type of source, route of delivery, optimal dosage and treatment intervals and potential role of haploidentical donors. Combined Strategies A Phase I, open-label, single-centre clinical trial evaluated the efficacy of combined ex vivo gene and cell therapy using patients' autologous fibroblasts in four recessive DEB patients. Patient fibroblasts were first modified ex vivo using a self-inactivating lentiviral vector that carries COL7A1 cDNA. The gene-modified autologous fibroblasts were then injected intradermally back into the patients. This treatment had no serious adverse effects and was well tolerated by the patients. 49 Readthrough Strategies Nonsense mutations leading to premature stop codon (PTC) formation are present in 15-20% of JEB and recessive DEB patients. In EB, nonsense mutations tend to result in severe phenotype due to complete or near complete absence of functional protein. In premature stop codon readthrough strategies, a random amino acid is incorporated at the PTC position in the mRNA. Depending on the impact of the introduced amino acid on protein folding, stability, and posttranslational processing, PTC readthrough therapies can result in the synthesis of a functional full-length protein by inducing a conformational change at the decoding site, causing reduced translational fidelity and incorporation of nearcognate tRNAs at the stop site. 50 The common readthrough agents employed in various trials in EB include aminoglycosides geneticin, gentamicin and paromomycin and anti-inflammatory drug amlexanox. This strategy has been utilized in recessive DEB-derived cells 51,52 and JEB keratinocytes carrying nonsense mutations within the LAMB3 gene. 53 Twoweek daily intravenous gentamicin administration in four recessive DEB children not only improved wound healing for at least 3 months, but also markedly increased C7 expression and anchoring fibrils. 51 Three-weekly intravenous gentamicin infusions (7.5 mg/kg/day) in five severe JEB neonates led to improved skin lesions in four subjects. 54 Optimal dosing, treatment intervals, and the cumulative toxicity profile still need better definition. Novel Strategies to Modify Disease and Alleviate Comorbidities Blister Management Topical application of a small molecule drug, diacerein, was found to significantly reduce the number of blisters and their recurrence in RCTs of EBS. 55 Upregulation of IL-1ß due to accumulation of mutated keratins is a characteristic feature of few subtypes of EB. This rhein prodrug has been shown to reduce expression of K14 and inhibit IL-1ß converting enzyme and found to be safe and effective as a 1% topical formulation. 
Apremilast, a phosphodiesterase 4 inhibitor (PDE-4) that suppresses Th1/ Th17 activation has already been approved for treatment of psoriatic arthritis and oral ulcerations in Behcet's disease. Since EBS fluid was found to have high levels of Th17 cytokines, this drug was tried and a dramatic reduction in blisters were found in three patients and two of them had sustained clinical remission. 56 High Mobility Group Box-1 (HMGB1) is a peptide responsible for mobilizing MSCs from bone marrow and recruiting them to damaged skin for repair. Based on preliminary data, a Phase II, single-arm, non-randomized, uncontrolled clinical trial (UMIN000029962) to assess its role in recessive DEB showed satisfactory results in blister/ erosion reduction. Pruritus Management The role of neurokinin-1 receptor (NK1R) antagonist serlopitant is currently being evaluated in patients with any EB subtype. 57 A double-blind RCT is currently in place to evaluate the role of low-dose topical calcipotriol ointment on improving wound healing in DEB. 58 This is based on the antiproliferative role of calcipotriol on keratinocytes. A monoclonal antibody drug, dupilumab, has already been approved for moderate to severe atopic dermatitis and is currently being evaluated for EB-pruriginosa. 59 It is an anti-interleukin-4 receptor alpha (IL-4Rα) monoclonal antibody that inhibits both IL-4 and IL-13 signalling and modulates Th2-mediated immune mechanisms. Wound Healing Thymosin β4 is a naturally-produced polypeptide and has several wound healing properties, including anti-inflammation, anti-fibrosis, pro-angiogenesis, stem cell recruitment and keratinocyte migration promotion. 60 Phase II clinical trial (NCT03578029) evaluating the efficacy of a topical thymosin β4 dermal gel on paired wounds in 15 JEB/DEB patients is underway. A triterpene extract in sunflower oil, oleogel-S10 has been shown to promote wound healing through inflammation modulation, stimulation of keratinocyte migration and altered epidermal differentiation. It acts irrespective of the underlying molecular pathology in EB. 61 Currently, preliminary results from a Phase III double-blind, randomized, placebo-controlled "EASE" trial (NCT03068780) are showing sustained wound closure benefits in recessive DEB patients. Deformities Correction Chronic wounds lead to repeated cycles of inflammation eventually culminating in progressive fibrosis followed by tissue destruction. This in turn results in mitten hand and foot causing severe disability. Transforming growth factor-beta (TGFß), a pro-inflammatory cytokine, plays a key role in EB-associated fibrosis. 62 Thus, modulating the expression of TGF-ß1 can help in reduction of fibrosis and the role of angiotensin II antagonist with anti-fibrotic effects, losartan, is currently being evaluated in children and adolescents with recessive DEB. 63 Risk of Skin Malignancy One of the deadly complications of EB that significantly reduces the life span of these individuals is the risk of developing aggressive squamous cell carcinoma (SCC) due to repeated wounds, infections and inflammation of skin. In comparison to the general population, these individuals are at 70-fold higher risk of developing SCC. 64 Conventional treatments in the form of local excision, radiotherapy or chemotherapy can be detrimental owing to their adverse effects. Hence, it is important to strike a balance between tumour suppression and its adverse effects on wound healing. 
Few groups have evaluated the role of cetuximab, a monoclonal antibody targeting epidermal growth factor receptor (EGFR) on EB patients with advanced cutaneous SCC as EB-associated SCC often express EGFR. However, they found limited beneficial effects of the drug on survival but a better response is seen when administered early in the course of illness. 65,66 Currently, a multicentre Phase II clinical trial in recessive DEB patients with late stage, metastatic or unresectable SCC is in place to evaluate the role of rigosertib, a PLK1 (polo-like kinase-1) inhibitor that has a strong and selective apoptotic effect in recessive DEB-SCC cells (NCT03786237). Anti-PD1 (programmed death-1) monoclonal antibodies are also being evaluated on EB patients with metastatic SCC (Eudra CT-No. 2016-002811-16). PD1 is predominantly expressed on T cells, and, by binding to its ligands PD-L1 and PD-L2 expressed on tumor cells, induces a negative signal that leads to effector T cell suppression. Specific antibodies that block these interactions can thus lead to reactivation of the immune system and improvement of anti-tumour immune responses. To conclude, there has been tremendous progress in understanding the molecular genetics and underlying pathological mechanisms of EB over the past few decades. Many preclinical and clinical attempts to develop new treatments for EB are currently in place. Gene replacement therapy is an exciting approach and few studies have reached Phase III trials for both ex vivo and in vivo approaches and this can benefit a subset of people with EB. mRNA-based therapies have better safety standards but their effects are transient in nature and their use is restricted to a subset of people bearing specific mutations. Despite rapid advancements in genome editing tools and their great potential, their entry at the clinical level in EB is still precluded by their unpredictable off-target effects. Readthrough therapies, though restricted to nonsense mutations, showed promising results in Phase I and II trials. Other regenerative cell-based therapies and strategies to mitigate the effect of comorbidities are currently in Phase I, II and III trials and can significantly improve the quality of life of individuals with EB. But one should also weigh the risks of some of these technologies against their potential benefits before their clinical application. With advancing technologies, refinement of current innovative strategies with improved safety profiles for successful treatment of this group of incurable diseases is not far from reach. The search for new methods of treatment will keep expanding and provides hope for a definitive cure. Disclosure The authors report no conflicts of interest in this work.
2022-05-30T05:21:45.699Z
2022-05-24T00:00:00.000
{ "year": 2022, "sha1": "e478224de9fb176aa2f32b8f21161af60eee859a", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "e478224de9fb176aa2f32b8f21161af60eee859a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
271274820
pes2o/s2orc
v3-fos-license
Cationic polymer effect on brown adipogenic induction of dedifferentiated fat cells Obesity and its associated comorbidities place a substantial burden on public health. Given the considerable potential of brown adipose tissue in addressing metabolic disorders that contribute to dysregulation of the body's energy balance, this area is an intriguing avenue for research. This study aimed to assess the impact of various polymers, including collagen type I, fibronectin, laminin, gelatin, gellan gum, and poly-l-lysine (PLL), on the in vitro brown adipogenic differentiation of dedifferentiated fat cells within a fibrin gel matrix. The findings, obtained through RT-qPCR, immunofluorescent imaging, ELISA assay, and mitochondria assessment, revealed that PLL exhibited a significant browning-inducing effect. Compared to fibrin-only brown-like drops after two weeks of incubation in brown adipogenic medium, PLL showed 6 (±3) times higher UCP1 gene expression, 5 (±2) times higher UCP1 concentration by ELISA assay, and 2 (±1) times higher mitochondrial content. This effect can be attributed to PLL's electrostatic properties, which potentially facilitate the cellular uptake of crucial brown adipogenic inducers such as the thyroid hormone, triiodothyronine (T3), and insulin from the induction medium. Introduction Obesity, marked by an excessive accumulation of body fat, is a complex and far-reaching global health issue [1].It is a significant risk factor for a range of serious diseases including, but not limited to, type 2 diabetes [2], high blood pressure [3], heart disease [4], stroke [5], certain cancers [6], musculoskeletal problems [7], and respiratory issues [8].According to the 2023 World Obesity Atlas, obesity is anticipated to affect nearly two billion individuals globally by 2035 [9]. The increasing prevalence of global obesity has highlighted the importance of studying adipose tissue and its related metabolic complexities.Among mammals, adipose tissue is mainly classified into two types: white adipose tissue (WAT), responsible for storing energy as intracellular triglycerides and secreting hormones, and brown adipose tissue (BAT), involved in regulating body temperature through nonshivering thermogenesis [10].Additional adipose tissue types such as beige/bright [11], pink [12], and yellow [13] have also been documented in scientific literature [14].Brown adipocytes facilitate heat dissipation by uncoupling proton transport from ATP synthesis.Due to their abundant mitochondria and the presence of "uncoupling protein 1" (UCP1) [15], BAT can convert electrochemical energy generated during respiration into heat by uncoupling of lipid oxidation from ATP generation in mitochondria [16,17].This metabolic process contributes to oxygen uptake, caloric consumption, and body temperature regulation [17].Because BAT strongly impacts calorie expenditure, it is considered a potential therapeutic target in combating obesity. 
White adipocytes exhibit high plasticity and high reprogramming capacity.In vitro, mature adipocytes can undergo reprogramming, transitioning into dedifferentiated fat cells (DFAT cells), recognized as a novel form of stem cell [18].Once adhered in fully medium-filled cell culture flasks, white adipocytes shed their cytoplasmic lipid content, evolving into DFAT cells within approximately one week [19].Through this process, they acquire multipotent characteristics similar to mesenchymal stem cells (MSC), expressing MSC-related markers such as CD44, CD90, CD73, and CD105 [20], displaying substantial redifferentiation potential across osteogenic [21,22], chondrogenic [23], and adipogenic [24] lineages.This exceptional capacity for redifferentiation, combined with a straightforward isolation process [18,25], establishes DFATs as a valuable and abundant resource applicable in tissue engineering, regenerative medicine, cell therapy, and stem cell research.As detailed in our previous study [26], DFATs offer important advantages, such as their relatively easy and abundant acquisition from waste adipose tissues obtained through procedures like liposuction, compared to adipose-derived stem-cells (ADSCs), in tissue engineering studies. Given its critical role, the engineering of adipose tissue represents a promising avenue for assessing and enhancing endocrine metabolism within the body [27][28][29].Various scaffold materials have been explored to emulate the native tissue environment and bolster adipose tissue biological functions, but there has been less research conducted on thermogenic adipose tissue engineering as a therapeutic approach for obesity-related diseases [30].The materials used in these few studies have been a range of natural and synthetic options such as collagen type I [31], hyaluronic acid [32], gelatin [33], and polyethylene glycol (PEG) derivatives [34,35]. In this study, we chose fibrin gel for the basic material to construct our tissues.It is a well-regarded option in tissue engineering research owing to its biocompatibility, biodegradability, and non-toxic nature.Typically serving as a matrix or carrier for cells in adipose tissue engineering studies [36,37], it is commonly used as a matrix or cell carrier.Used by themselves, standalone fibrin gels have a limited impact on adipogenesis [36], and their dehydration and degradation in extended culture conditions restrict their utility.Nevertheless, they do offer mechanically conducive environmental conditions for soft tissue engineering applications.When combined with collagen microfibers (CMF), this gel facilitates the regeneration of the structure of adipose tissue before implantation, employing white adipocytes, resulting in notably high cell viability [37].However, a review of the relevant literature on in vitro brown adipose tissue engineering was unable to identify any study reporting the effect of a fibrin matrix mixed with different polymers on brown adipogenic differentiation. 
We therefore generated brown-like adipose tissue drops by encapsulating DFATs derived from human white adipocytes within fibrin gel mixed with various biopolymers, including collagen type I, collagen type IV, fibronectin, laminin, gelatin, gellan gum (GG), and poly-L-lysine (PLL) as a cationic polymer.We investigated the impact of polymer supplementation on browning through lipid droplet size analysis, mitochondrial assessment, oxygen consumption rates and PCR analysis within the developed drops.This proposed research could have a profound impact, providing potential applications for PLL in the field of thermogenic adipose tissue engineering to address obesity and related comorbidities. Obtaining of DFATs DFATs were obtained using our previously reported method [26].Human adipose tissues were obtained from patients at Kyoto University Hospital, then mature adipocytes were freshly isolated.Briefly, tissues were washed with phosphate-buffered saline (PBS) with 5 % penicillin-streptomycin. Adipose tissue as 2-3 g per well of a 6-well plate, was minced, and a collagenase solution (2 mg/mL in Dulbecco's modified Eagle Medium (DMEM) with 5 % bovine serum albumin (BSA) and 1 % penicillin-streptomycin) was added for 1 h at 37 • C with rpm rotation.After filtration and centrifugation, mature adipocytes were collected from the top layer, and stromal vascular fraction from the bottom.After discarding the in-between liquid, washing was performed with PBS with 5 % BSA, 1 % penicillin-streptomycin and a final wash in DMEM with 10 % fetal bovine serum (FBS) and 1 % penicillin-streptomycin.Then, freshly isolated mature adipocytes were cultured at a seeding density of 5.0 × 10 4 /cm 2 in polystyrene flasks fully filled with DMEM containing 20 % FBS and 1 % penicillin-streptomycin.The flasks were securely capped to prevent medium leakage and incubated at 37 • C for one week.After incubation, the medium was aspirated, and the resulting DFATs were detached through trypsinization (Trypsin-EDTA 0.25 %).These obtained DFATs (passage numbers 4 to 7) were used in subsequent experiments. Ethics statement The adipose tissues were collected from Kyoto University Hospital (Kyoto, Japan) after abdominal adipose tissue or liposuction isolation of three human donors aged 41, 45, and 53 years old, with a BMI of 22.40, 25.78, and 20.46, respectively.All use was approved by the Osaka University Research Ethics Review Committee (approval number: L026). BAT drops seeding The graphical overview of the sample preparation is shown in Fig. 
1.DFATs were mixed at a seeding density of 4.0 × 10 6 cells/mL with a fibrinogen solution (final concentration of 6 mg/mL from a stock solution (50 mg/mL) in DMEM 0 % FBS, 1 % penicillin-streptomycin, filtered using a 0.2 μm filter) and a thrombin solution (final concentration of 3 U/mL from a stock solution (10 U/mL) in DMEM, 10 % FBS, 1 % penicillin-streptomycin, filtered using a 0.2 μm filter).The resulting mixture was directly seeded to a 96-well ultra-low attachment roundbottomed plate at a volume of 5 μL using wide pipette tips and incubated at 37 • C for 20 min for gelation.Subsequently, after adding 80 μL of growth medium (GM, 10 % FBS, 1 % penicillin-streptomycin in highglucose DMEM) to detach drops, samples were transferred to a 24-well ultra-low attachment plate and incubated in 500 μL of GM in each well for 2 days.The culture medium was then fully replaced with brown adipogenic differentiation medium (BAM), containing 0.5 μM dexamethasone, 125 nM indomethacin, 250 μM IBMX, 850 nM bovine insulin, 1 μM rosiglitazone, 120 nM triiodothyronine (T3), 1 % penicillinstreptomycin, and 10 % FBS in high-glucose DMEM [34].For polymer-mixed drops, all polymers (collagen type I, fibronectin, laminin, collagen type IV, gelatin, GG) were first dissolved in PBS (pH 7.4), filtered through a 0.2 μm filter, and then mixed with the cell-fibrin gel mixture at a final concentration of 50 μg/mL, except for PLL.For PLL-mixed drops, the polymer was mixed at concentrations of 5, 10, or 20 μg/mL.Samples were cultured in BAM for 2 weeks, with half the medium replaced every two days throughout the culturing period.To obtain WAT drops, samples were cultured in Adipocyte Differentiation Medium (Cell Applications Inc.) for 2 weeks with half the medium replaced every two days throughout the culturing period.For undifferentiated DFAT drops in fibrin gel, samples were cultured in GM for the same duration with half the medium replaced every two days. Immunofluorescence imaging After the two-week incubation in BAM and WAM, the samples were washed three times with PBS, followed by fixation in a 4 % paraformaldehyde solution in PBS overnight at 4 • C. To enhance permeability, samples were treated with 0.05 % Triton X-100 in PBS for 15 min and then incubated for 1 h at room temperature in 1 % BSA in PBS to minimize nonspecific staining.The anti-UCP1 antibody, diluted in 1 % BSA (1:500 dilution), was applied to the samples overnight at 4 • C. 
Subsequently, the samples were exposed to Alexa Fluor® 647 secondary antibodies (dilution 1:200) for 2 h at room temperature.Intracellular lipid accumulation was visualized using Nile Red (final concentration: 50 ng/mL), and nuclei were counterstained with Hoechst (final concentration: 10 ng/mL).For mitochondrial staining on days 7 and 14, samples were washed three times with PBS and incubated in Mito-Tracker dye diluted in medium (10 % FBS, 1 % penicillin-streptomycin in DMEM high glucose without phenol red) for 30 min at 37 • C in a 5 % CO 2 incubator.The samples were then washed with PBS.All samples were rinsed in PBS and examined using an FV3000 Confocal Laser Scanning Microscope (CLSM) (Olympus, Tokyo, Japan).To compare UCP1 content, lipid accumulation, and mitochondrial abundance, Zstack images were captured using the same procedures, and maximum intensity projection was performed while maintaining consistent exposure time and excitation power for all samples.Data were acquired by measuring the total fluorescence intensity of UCP1, lipid droplets, and MitoTracker, normalized to the total fluorescent intensity of Hoechst for each drop, using Image J software (Fiji for Mac OS X).For the heparan sulphate staining on DFATs incubated in fluorescein isothiocyanate (FITC)-labeled PLL (final concentration: 10 μg/mL) including DMEM (10 % FBS, 1 % penicillin-streptomycin), cells were seeded on a 96-well plate with 10,000 DFATs per well.After 24 h incubation in GM at 37 • C in a 5 % CO 2 incubator, the medium was removed, FITC-PLL mixed in GM was inserted and the cells were incubated for 24 h.After fixation, permeabilization and blocking steps, the anti-heparan sulphate antibody, diluted in 1 % BSA (1:100 dilution) was applied to the samples overnight at 4 • C. The samples were then exposed to Alexa Fluor® 647 secondary antibodies (dilution 1:200) for 2 h at room temperature.Nuclei were counterstained with Hoechst (final concentration: 10 ng/ mL).All samples were rinsed in PBS and then examined using an FV3000 CLSM (Olympus, Tokyo, Japan). Lipid droplet size measurement The samples incubated in BAM and white adipogenic medium (WAM) were subjected to intracellular lipid accumulation staining using Nile Red, with nuclei counterstained using Hoechst (see 2.3.for details).Subsequently, the samples were rinsed in PBS and then examined on a glass-bottomed surface dish using an FV3000 CLSM (Olympus, Tokyo, Japan) at a 60× magnification with immersion oil.Lipid droplet sizes were quantified using Image J (Fiji for Mac OS X), with 200 droplets measured for each image. Cell viability assay To assess the cell viability in PLL-mixed drops with different PLL concentrations (0, 10, 20 and 50 μg/mL), samples were cultured in GM overnight.Subsequently, a Live/Dead® viability assay kit was applied and compared with samples containing only fibrin.After three washes with PBS, cells were stained using Calcein, (final concentration: 2 μM/ green for living cells), and Ethidium Homodimer-1 (final concentration: 4 μM/red for dead cells) for 30 min at 37 • C in the dark, followed by imaging using an FV3000 CLSM (Olympus, Tokyo, Japan).The percentages of cell viability were quantified using Image J software on Zstack images, captured with consistent laser power and step sizes for each. 
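The image-based quantification described above amounts to summing the marker fluorescence of each drop over a maximum-intensity projection and dividing by the corresponding Hoechst signal. The following NumPy sketch reproduces that normalization in outline; it is not the ImageJ workflow used in the study, and the array shapes and values are placeholders.

```python
import numpy as np

def normalized_intensity(marker_stack: np.ndarray, hoechst_stack: np.ndarray) -> float:
    """Total marker fluorescence of a drop normalized to its total Hoechst fluorescence.

    Both inputs are (z, y, x) stacks acquired with identical exposure/laser settings;
    a maximum-intensity projection is taken before summing, mirroring the described workflow.
    """
    marker_mip = marker_stack.max(axis=0).astype(np.float64)
    hoechst_mip = hoechst_stack.max(axis=0).astype(np.float64)
    return marker_mip.sum() / hoechst_mip.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ucp1 = rng.integers(0, 4096, size=(15, 256, 256))     # placeholder 12-bit z-stack
    hoechst = rng.integers(0, 4096, size=(15, 256, 256))  # placeholder nuclear channel
    print(f"UCP1 / Hoechst intensity ratio: {normalized_intensity(ucp1, hoechst):.3f}")
```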
RT-qPCR analysis Gene expressions were assessed using real-time quantitative polymerase chain reaction (RT-qPCR).Total RNA from drops (6 drops combined per replicate) was isolated using a PicoPure™ RNA isolation kit according to the manufacturer's instructions.The extracted RNA was quantified using a Nanodrop™ N1000 device (Thermo Fisher Scientific, MA, USA).To convert the isolated RNA to DNA, an iScript cDNA synthesis kit was used as per the manufacturer's instructions.For DNA amplification, cDNA was amplified using Taqman Fast Advanced Mix with Taqman gene expression assays for UCP1, Cidea, PRDM16, and RPII (used as a housekeeping gene), following the manufacturer's protocol (Supplementary Table 1).The cDNA synthesis and RT-qPCR reactions were carried out using a StepOnePlus Real-Time PCR System (Thermo Fisher Scientific, MA, USA).RT-qPCR analysis was conducted on cells obtained from the three different donors, for a total number of replicates from 3 to 9. UCP1 ELISA assay Samples were washed three times with PBS and then treated with Trypsin-EDTA at 37 • C until the fibrin gel dissolved.The cells were collected by centrifugation, washed with cold PBS three times, and subjected to three cycles of freezing and thawing.After removing cellular debris through centrifugation at 1,500g, 4 • C for 10 min, a Human UCP1 ELISA Kit was used following the manufacturer's instructions.Data normalization was achieved by quantifying DNA using a Qubit HS DNA assay on the ELISA lysates. Measurement of oxygen consumption rate Polymer-mixed samples were washed three times with PBS, then transferred to a 96-well round-bottomed OxoPlate (OP96U, PreSens Precision Sensing).Four drops were added per well with DMEM containing 10 % FBS and 1 % penicillin-streptomycin, without phenol red.For plate calibration, eight wells were designated for 0 % O 2 standard (H 2 O with 10 mg/mL sodium sulfite) and 100 % O 2 standard (respiration media), following the manufacturer's protocol.Oxygen concentrations were measured on day 14 and after an additional 24 h with the same plate using a plate reader equipped with two calibration standards and filter pairs for the indicator (excitation 540 nm, emission 650 nm) and reference (excitation 540 nm, emission 590 nm).Calibrations and oxygen levels were calculated following the manufacturer's manual, and the oxygen consumption rate in 24 h was determined.Data normalization was performed using DNA quantification with a Qubit HS DNA assay. DNA quantification A Qubit™ DNA HS Assay Kit was used with a Qubit™ 2.0 Fluorometer (Life Technologies, Thermo Fisher Scientific Inc.) for DNA quantification.Samples were washed with PBS and incubated in Trypsin-EDTA to dissolve the fibrin gel.They were then subjected to three cycles of freezing and thawing in Eppendorf tubes.The assay was performed following the manufacturer's instructions.For DNA normalization of the ELISA samples, the assay was performed directly on ELISA lysates. 
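The relative gene expression values reported in the results are fold changes against the RPII housekeeping gene and a reference condition. The exact calculation is not spelled out in the text, so the sketch below assumes the standard 2^-ΔΔCt (Livak) method, with made-up Ct values purely for illustration.

```python
def fold_change_ddct(ct_target, ct_housekeeping, ct_target_ref, ct_housekeeping_ref):
    """Relative expression by the 2^-ΔΔCt method (assumes ~100% amplification efficiency)."""
    delta_ct_sample = ct_target - ct_housekeeping              # e.g. UCP1 vs RPII, treated sample
    delta_ct_reference = ct_target_ref - ct_housekeeping_ref   # same pair in the reference condition
    delta_delta_ct = delta_ct_sample - delta_ct_reference
    return 2 ** (-delta_delta_ct)

# Hypothetical Ct values: UCP1/RPII in a PLL-containing drop vs a fibrin-only drop.
print(fold_change_ddct(ct_target=24.1, ct_housekeeping=19.0,
                       ct_target_ref=26.8, ct_housekeeping_ref=19.1))  # ≈ 6-fold
```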
Monitoring of T3 and insulin adsorption on PLL coated surfaces The PLL coating was applied following the provider's protocol, involving a 5-min incubation at room temperature with the addition of 200 μL solution onto a 24-well polystyrene surface.Insulin and T3 solutions were prepared according to the provider's data sheet and added to the PLL-coated wells.Subsequently, samples were collected every 10 min, and the UV-Vis absorbances of samples with a final concentration of 1 μM for T3 and 10 μM for insulin were measured with a UV-Vis/NIR Spectrophotometer (V 670, Jasco Inc., Japan).The amount of adsorbed product on the surface over time was calculated using the obtained absorbance graphs (n = 3). Zeta potential measurement Zeta potential of insulin and T3 solutions were measured by a Malvern ZetaSizer.Solutions were prepared at the same concentration as for BAM, for T3 120 nM and insulin 850 nM.Then disposable folded capillary cells were filled with 800 μL of solutions and measurements were taken by dynamic light scattering at 25 • C (n = 3). Statistical analysis ANOVAs were performed using ezANOVA software to determine significant differences between pairs of data sets.Error bars represent standard deviation.p values < 0.05 were considered statistically significant. Brown adipogenic differentiation medium effect on browning First, we confirmed the brown adipogenic induction potential of our culture conditions by comparing BAM and WAM in drop tissue formation.Lipid droplet size of differentiated brown and white adipocytes is well defined in the literature [38].Therefore, we compared the effects of these two media on lipid droplet sizes.The lipid droplets of samples cultured in BAM and WAM for a duration of 14 days were stained with Nile Red, as illustrated in Fig. 2A.Notably, brown-differentiated cells exhibited significantly smaller (p < 0.001) lipid droplet diameters compared to their white-differentiated counterparts.The lipid droplet sizes of brown adipocytes were 2.4 (±0.8) μm, whereas white-differentiated cells had larger droplets, measuring 7 (±3) μm (Fig. 2B).This result confirms the morphological distinction between white and brown adipogenic differentiations.BAT is recognized for its elevated mitochondrial content.Consequently, mitochondrial content in both brown-differentiated and undifferentiated cells (BAM and GM respectively) was evaluated by quantification of mitochondrial abundance through MitoTracker staining, as illustrated in Fig. 2C.The results revealed a notably higher fluorescence intensity of mitochondria in brown-differentiated cells as opposed to undifferentiated cells, with a statistically significant difference (p < 0.05) indicating a substantial increase of 25 % (±17). Then, immunocytochemical visualization of the brown adipogenic marker UCP1 and lipid droplets was conducted on both brown adipogenic differentiated and undifferentiated samples cultured in BAM, GM, and WAM respectively, as shown in Fig. 2E.Distinct expressions of both UCP1 and lipid droplet accumulations were exclusively observed in the BAM group.This result confirms the successful occurrence of brown adipogenic differentiation.Accordingly, although the WAM cultured droplets showed lipid accumulation, they were negative for UCP1 expression as expected.In general, white adipocytes do not show UCP1 protein expression [39,40].In Fig. 
2F, higher magnification representation of samples cultured in BAM is shown.The relative gene expression levels of BAT-specific genes, UCP1, Cidea, and PRDM16, were finally assessed by the RT-qPCR analyses.UCP1 serves as a primary thermogenic marker and plays a crucial role in fatty acid metabolism [41].The outcomes showed expressions of these brown adipogenic-related genes were markedly elevated in comparison to undifferentiated cells cultured in GM (Fig. 2G).Specifically, the gene expressions of UCP1, Cidea, and PRDM16 were determined to be 58 (±16), 198 (±71), and 4.7 (±0.8) times higher, respectively, in the BAM groups compared to the GM counterparts.This significant difference observed in brown-specific genes further corroborates the process of browning.Based on the comprehensive findings obtained from the evaluation of various parameters, it is substantiated that the BAM exerts a notable impact on DFATs encapsulated in fibrin gel and cultured for a duration of 14 days in terms of browning.This efficacy is evident across multiple aspects, including lipid droplet size measurements, mitochondrial abundance assessments, immunocytochemical analyses of UCP1, and BAT markers' gene expression evaluations.This culture medium and the fibrin gel culture conditions were thus used for the next steps assessing the effect of various added polymers. UCP1 and lipid contents of polymer-mixed samples PLL is well known for its possible cytotoxic effect [42].Therefore, before mixing it in the fibrin gel tissues, different concentrations were compared.The other polymers were added at a final concentration of 50 μg/mL, following the protocol in our previous study [26].PLL was used at a concentration of 10 μg/mL based on cell viability analysis results (Supplementary Fig. 1), since viability was observed to be only 26 % (±6) in drops containing 50 μg/mL PLL, while all other concentrations exhibited viabilities exceeding 80 %.Additionally, DNA quantification analysis (Supplementary Fig. 2) revealed that drops containing 10 μg/mL PLL exhibited a similar amount of DNA content to fibrin-only drops by the 14th day (174 ± 27 ng/mL for fibrin only, 174 ± 55 ng/mL for PLL group).Since the amount of RNA in cells can serve as a valuable indicator of cell viability, the RNA concentrations measured after RNA extraction were examined.It was found that the droplets containing 5 μg/mL PLL had 63 (±3) ng/mL RNA, while the droplets containing 10 and 20 μg/mL PLL had RNA concentrations of 37 (±4) and 33 (±10) ng/mL, respectively.However, DNA quantification analysis and live/dead staining showed that 10 μg/mL PLL is acceptable in terms of cell viability.Based on these findings, it was decided to maintain the PLL concentration at 10 μg/mL for subsequent experiments related to browning. UCP1 immunofluorescence staining of the polymer-mixed samples (Fig. 3A) was then assessed and all confirmed the brown adipogenic differentiation of DFATs in fibrin-only and polymer-mixed fibrin gel on day 14 in the BAM condition.Additionally, the high magnification images of PLL-mixed droplets, and the separate channel images of nuclei, lipid, and UCP1 content are shown in Supplementary Fig. 5. For the quantification of UCP1 and lipid content in the samples, the fluorescence intensity of each drop was normalized by DNA fluorescence (Fig. 
3B, white bars). Analyses revealed that none of the polymers, except PLL, tended to induce an increase in UCP1 content (white bars). In drops containing PLL, the fluorescence intensity of UCP1 was 1.5 times higher (±0.2) compared to the fibrin-only group, although it was not statistically significant (p > 0.0509). In a comparable manner, the measured lipid content revealed that no statistically significant difference was observed among polymer-mixed groups when compared exclusively to samples containing only fibrin (Fig. 3B, red bars). However, statistically significant differences were evident between laminin and collagen type IV, as well as PLL and GG and fibronectin groups. Remarkably, PLL exhibited the highest lipid content value in this statistical comparison of 1.1 (±0.1). Immunofluorescent images indicate that PLL may have increased both UCP1 expression and lipid content. The UCP1 concentrations in polymer-mixed samples were confirmed by ELISA assay and compared with those from the fibrin-only group (Fig. 3C), normalized by DNA content. Notably, PLL demonstrated the highest relative concentration of 5 (±2), compared to the fibrin only group, while an increase in UCP1 concentration was also observed in the case of gelatin, laminin, and fibronectin mixed drops albeit to a lesser extent. This ELISA result further supports the inducing effect of PLL on brown adipogenesis. Relative gene expression analysis of polymer-mixed brown-like drops The relative gene expression of the three brown adipogenic markers UCP1, Cidea and PRDM16 was next analyzed to confirm the brown adipogenic differentiation of DFATs in fibrin-only and polymer-mixed fibrin gels (Fig. 4). PLL-mixed drops (10 μg/mL), when compared to the other groups, induced a significantly higher UCP1 gene expression, with a relative fold increase value of 6 (±3) compared to fibrin only. Additionally, an increase in PLL concentration in PLL-mixed droplets, with fibrin gel, at concentrations of 5, 10, and 20 μg/mL resulted in a corresponding increase in UCP1 expression, with values from 1 (±0.3) to 2.07 (±0.01) (Fig. 4A, right graph), further confirming the meaningful effect of PLL on the brown adipogenesis differentiation induction. Concerning the Cidea gene (Fig. 4B), samples containing PLL (10 μg/mL) had the highest expression at 3 (±3). However, the PLL concentration effect on Cidea relative gene expression was not significant, with a maximum of 1.1 (±0.3) for 20 μg/mL. Finally, for PRDM16 relative gene expression, the results were not significantly different but a similar trend could be observed in samples containing PLL (10 μg/mL), mirroring the UCP1 and Cidea gene expression values. For the PLL concentration comparison, as for Cidea, no significant difference was evident, the relative expression being 1.07 (±0.05) for 20 μg/mL. In terms of browning, PLL exhibited an increasing trend in the expression of brown-specific genes. However, the increased concentration of PLL did not result in a significant difference, except for UCP1 gene expression (p < 0.01). Mitochondrial assessments of polymer-mixed brown-like drops To further assess the impact of the added polymers on the mitochondrial content during brown adipogenic differentiation, the mitochondrial abundance of polymer-mixed samples cultured with BAM was evaluated on days 7 and 14 (Fig. 5A) and the quantification was normalized by DNA fluorescence intensity (Fig.
5B). Samples containing only fibrin (on day 7) were compared with polymer-mixed samples at both time points (7th and 14th days) and the fibrin-only group (on day 14). Accordingly, when they were examined on the 7th day, the mitochondrial abundance in drops containing polymers was not higher than that in the fibrin-only group. While the PLL-containing drops exhibited significantly lower mitochondrial content on the 7th day, a surprising increase was observed by the 14th day, reaching 163 (±11) compared to the day 7 fibrin-only samples (100 ± 7). Moreover, across all groups, an increase in mitochondrial quantity over time was observed, and this increase was statistically significant for laminin, collagen type IV, and GG drops compared with fibrin-only drops. The increasing abundance of mitochondria throughout the culture period may suggest that cells continue their brown adipogenic differentiation. The measurement of basal oxygen consumption rates in adipocytes has also been previously reported in the literature [43]. Accordingly, the oxygen consumption rate of the samples was assessed on the 14th day and the oxygen consumption rate over a 24-h period from this day was calculated (Fig. 5C). As a result, the oxygen consumption rate within 24 h was highest in drops containing PLL, reaching a significantly greater value of 138 (±13) than fibrin-only drops (100 ± 13). However, in collagen type I and laminin drops, oxygen consumption decreased, while the remaining groups exhibited values close to the fibrin-only group. These data further confirm PLL's inducing effect on brown adipogenesis and increased metabolic activity. PLL integration on the DFATs To confirm the interaction between PLL and DFATs, DFATs were incubated for 24 h in culture medium containing FITC-labeled PLL (green, Fig. 6C). Additionally, as PLL is known to possibly interact with heparan sulphate proteoglycans (HSPG), the location of HSPG was also identified by immunostaining. As seen in Fig. 6C, an association between the positively charged polymer and the expression of negatively charged HSPG (magenta) could be identified on the immunofluorescent images, which might explain how PLL can bind to the cell surface. In Fig. 6E, the potential mechanism of PLL integration and the subsequent increase in brown adipogenic inducing molecules are illustrated. Monitoring T3 and insulin adsorption on PLL-coated surfaces Positively charged PLL can also interact with negatively charged proteins, especially culture medium components. PLL might thus promote the adsorption of specific hormones or proteins from the culture medium that play a role in brown adipocyte differentiation induction. To test this hypothesis and further explain the PLL effect on cell browning, the T3 and insulin components of the BAM were chosen. Time-dependent adsorption curves of T3 and insulin on the PLL-coated surface were observed (Fig. 6A and B), by using the UV absorbance spectra of T3 and insulin (Supplementary Fig. 3). T3 exhibited a peak at around 240-300 nm wavelength and significantly accumulated on the surface after approximately 30 min (p < 0.05). Insulin, on the other hand, peaked in the wavelength range of 240-285 nm and reached maximum adsorption on the surface after around 50 min (p < 0.05). Finally, to validate that this adsorption is related to charge interactions, the zeta potentials of T3 and insulin (−12 ± 1 and −10 ± 0.7 mV, respectively) were assessed (Fig.
Their confirmed negative charges may cause the observed accumulation due to the electrostatic interaction with PLL. These results indicate that PLL may interact with insulin and T3 through electrostatic interactions, subsequently enhancing their cellular bioavailability for the DFATs and leading to increased browning differentiation.

Discussion

In this study, we evaluated the impact of various polymers mixed into a fibrin matrix on browning. As expected, we first confirmed the brown adipogenic differentiation effect of BAM on DFATs encapsulated in fibrin gel. While initially examining the effect of the medium, we conducted various comparisons, taking into consideration factors such as lipid droplet size and metabolic differences such as mitochondrial abundance and oxygen consumption rate. As clearly identified, the morphology of lipid droplets in brown and white adipocytes demonstrates notable differences, with intracellular lipid droplets appearing large and unilocular in WAT and small and multilocular in BAT [44][45][46]. As illustrated in Fig. 2A and B, the lipid droplet diameters of brown-like adipocytes were confirmed to be significantly smaller than those of their white-differentiated counterparts. Indeed, the unique lipid accumulation patterns in adipocytes actually reflect the functional properties of the fat cells. White adipocytes efficiently store triacylglycerol in large lipid droplets, while brown adipocytes store lipids in small, multilocular ones that facilitate the transport of free fatty acids to mitochondria, promoting effective lipolysis and heat production [44]. On the other hand, it is known that the increased mitochondrial activity and abundance in brown adipocytes are associated with their non-shivering thermogenesis ability [47]. The pivotal factor responsible for this unique process is UCP1, a protein that, in brief, allows for the generation of heat instead of ATP by increasing the permeability of the inner mitochondrial membrane in BAT mitochondria [48,49]. In Fig. 2C and D, when compared with undifferentiated cells cultured in GM, the increased mitochondrial content in brown-differentiated cells served as evidence confirming the successful occurrence of brown adipogenic differentiation, in addition to the confirmed presence of UCP1 by immunocytochemical staining (Fig. 2E). Increased expression of brown-related mRNAs was also observed, as shown in Fig. 2G. Here, alongside the validation of UCP1 gene expression, the increased expression of two other brown-related mRNAs, Cidea (involved in lipid droplet remodeling [41,44]) and PRDM16 (involved in controlling the differentiation-linked brown fat gene program [41,50]), confirmed the process of brown adipogenesis. In the subsequent step, we aimed to assess the impact of various polymers on browning. While the mechanism of how local factors influence stem cell fate is highly complex, polymers, as external factors, can indeed affect crucial cellular processes such as differentiation, proliferation or apoptosis [51]. Moreover, in combination with all these factors, the process of adipocyte differentiation is complex, demanding coordinated communication between external stimuli and a network of receptors and transcription factors in the nucleus [52]. Consequently, few studies exist on the effect of polymers on stem cell adipogenesis signaling, especially for browning induction. On one hand, regarding the effect of ECM-related polymers, Porras et al.
demonstrated that mouse ADSCs cultured in a browning-inducing environment, on surfaces coated with collagen type I and laminin, exhibited a decreased tendency for differentiation into brown-like adipocytes. They suggested that the ECM molecules found at higher levels in WAT may lead to a decrease in thermogenic capacity by inhibiting UCP1 expression [53]. In our findings, as demonstrated in Figs. 3-5, the drops mixed with ECM molecules supported the reported results, indicating lower UCP1 expression and a relatively lower oxygen consumption rate. However, in another related study [31], human MSCs and endothelial cells were co-encapsulated in a collagen type I gel and cultured in a BAM. The system demonstrated inducibility for brown adipogenesis, shown by increased PGC1-α and UCP1 mRNA expression in differentiating stem cells. The importance of endothelial-stem cell crosstalk for brown adipogenesis was also emphasized, and within the collagen type I gel, the 1:1 co-culture displayed the most comprehensive vascular network formation, suggesting a supportive environment for vascular network development and the upregulation of vascular endothelial growth factor, which can support brown adipogenesis [31]. On the other hand, while the impact of charged polymers on adipogenic differentiation is a promising topic, there are intriguing findings in the literature, particularly focusing on in vitro white adipogenesis. The addition of metal ions (Cu 2+ , Fe 3+ ) to polyelectrolyte multilayers (PEMs) made from chitosan and alginate can promote cell adhesion and adipogenic differentiation in stem cells and fibroblasts. The specific mechanisms by which metal ions influence adipogenesis may not be fully understood, but it has been indicated that these ions assist in better cell adhesion and the initiation of adipocyte differentiation by binding to the polysaccharide layers [54,55]. In another study, photoreactive polyelectrolytes such as polyallylamine (PAAm) and poly (acrylic acid) (PAAc) have been demonstrated to support the adipogenesis of MSCs. These modified surfaces can facilitate cell attachment and adhesion, thereby promoting adipogenic differentiation [56]. Lee et al. showed that PLL can enhance the white adipogenic differentiation of both 3T3-L1 preadipocytes and hMSCs by activating the insulin signaling pathway. As a possible explanation, they indicated that PLL has the potential to serve as a substitute for insulin, a critical adipogenic inducer [57]. In our study, we used a brown adipogenic differentiation medium that includes T3, an important brown adipogenic inducer. T3 affects brown adipocyte thermogenesis by acutely increasing UCP1 gene expression via a cyclic adenosine monophosphate (cAMP)-mediated pathway [58]. Additionally, when insulin is combined with T3 in the differentiation medium, it affects brown adipogenic differentiation by influencing relevant signaling pathways, such as the adenosine 5′-monophosphate (AMP)-activated protein kinase (AMPK) pathway [59]. However, none of these studies evaluated the impact of charged polymers on brown adipogenic differentiation. In the current study, we observed that the cationic polymer PLL can significantly induce brown adipogenic differentiation compared to drops obtained by mixing collagen type I, laminin, fibronectin, collagen type IV, and GG with fibrin gel. Together with immunocytochemical analysis and ELISA assay showing UCP1 expression (Fig. 3), an increase in BAT-related gene expression was highlighted in drops containing PLL (Fig.
4).Subsequent mitochondrial analyses, particularly on the 14th day, further confirmed the significant increase in both mitochondrial quantity and oxygen consumption rate due to PLL (Fig. 5).One possible reason for the high error bars observed in Fig. 4 could be the age-related decrease in the cellular differentiation abilities of DFATs obtained from adipose tissues from different donors.Increased donor age has been reported to negatively impact the differentiation abilities of stem cells in various studies [60,61].Therefore, cells obtained from donors with ages ranging from 41 to 53 may be the cause of the high standard deviations.On the other hand, while the increased PLL concentration only leads to a significant increase in UCP1 gene expression (Fig. 4A, right chart), it can be observed that there is no significant difference between the groups when considering the expressions of Cidea and PRDM16 genes.One possible explanation is the fact that Cidea and PRDM16 are early gene markers compared to UCP1 [62,63], and their expression already decreased leading to UCP1 late marker increase.Moreover, considering the cytotoxicity associated with increasing concentrations of polycations [64], we came to the conclusion that choosing a relatively moderate concentration would be more advantageous in this study.To go into further detail about the possible cellular mechanism of the PLL, a linear lysine polymer carrying a positive charge at neutral pH, it can interact electrostatically with negatively charged molecules on the cell surface.One possibility is its interaction with the negatively charged heparan sulphate proteoglycans (HSPG) as a first step, leading to cell surface HSPGs mediated endocytosis [65].In Fig. 6C, the interaction between FITC-labeled PLL and cell HSPGs can be observed.Based on these findings, we hypothesized that the enhancing effect of PLL on brown adipogenic differentiation could be attributed to the co-internalization of insulin, an effective agent for both white and brown adipogenic differentiation, and the thyroid hormone T3, a crucial factor in brown adipogenic differentiation.Numerous in vitro studies on brown adipogenic differentiation involve the inclusion of T3 in the brown adipogenic differentiation cocktail [61].T3 hormone activates thermogenesis by uncoupling electron transfer in BAT mitochondria from ATP synthesis.Consequently, it is also an important regulator of basal metabolic rate, and therefore, hypothyroid patients often clinically manifest as overweight [66].It also influences brown adipocyte thermogenesis by increasing the stimulatory effect of norepinephrine (NE) and enhancing the acute elevation of cAMP-mediated UCP1 gene expression [67].In light of this information, considering the zeta potential of the T3 and the insulin (Fig. 6D), their electrostatic interactions with the positively charged PLL provide a possible hypothesis as summarized in Fig. 6E.Therefore, by initially interacting with the negatively charged HSPG on the cell surface, PLL may facilitate the uptake of both T3 and insulin into the cells, thereby triggering brown adipogenesis.However, it is difficult to predict whether the observed effects are due to the impact of PLL on the differentiation process itself or due to the effect of PLL-mediated added hormones [68]. 
Conclusion In conclusion, our study explored the impact of the culture medium and various polymers integrated into the fibrin matrix on the process of brown adipogenic differentiation.In particular, our findings indicated that the incorporation of the cationic polymer PLL into the fibrin gel can significantly increase brown adipogenesis, leading to distinct inductions in the differentiation mechanism compared to other polymers.The observed electrostatic interaction between PLL and HSPG on the cell surface suggested a potential mechanism for PLL in facilitating the uptake of insulin and T3, thereby contributing to brown adipogenic differentiation.Our results shed light on the complex interplay of factors influencing brown adipogenesis, highlighting PLL as a promising candidate for inducing this intricate process.We believe that further research is essential to delve into these intricate molecular mechanisms to uncover potential applications for cationic polymers in the realm of brown adipogenic differentiation.We further hope that our research will contribute materially to future studies in this field. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Fig. 1 . Fig. 1.Schematic overview of the research design.DFATs were encapsulated within a fibrin gel by using a round bottom 96 well plate with variety of polymers, then samples transferred into ultra-low attachment 24 well plate and cultured in brown adipogenic induction medium for 2 weeks.Mixed polymers are collagen I, collagen IV, fibronectin, laminin, gelatin, and gellan gum, and PLL. Fig. 2 . Fig. 2. Brown adipogenic differentiation medium (BAM) and growth medium (GM) effect assessment.A) Lipid droplets of differentiated cells were stained with Nile Red and Hoechst counterstaining was used to visualize nuclei (blue) (scale bar: 50 μm).B) Lipid droplet sizes of each drop were measured by Image J, with counted 200 lipid vesicle diameters of each drop (n = 3, three independent experiments from one donor).C) Representative MitoTracker staining images of mitochondria (red) and Hoechst counterstaining was used to visualize nuclei (blue) of fibrin only drops incubated in BAM and GM on day 14.Mitochondria and nuclei of cells are shown in Merge image (Scale bar: 200 μm).D) Measurement of the MitoTracker fluorescence of the drops incubated in BAM and GM, normalized by the either the DNA (Hoechst) fluorescence.Results are shown as mean ± standard deviation (n = 5, five independent experiments from one donor), * = p < 0.05.E) Representative images of immunocytochemical staining of Uncoupling protein 1 (UCP1, magenta), Nile Red was used for lipid droplets (red), and Hoechst counterstaining was used to visualize nuclei (blue) of fibrin only drops incubated in brown adipogenic differentiation medium (BAM) growth medium (GM) and white adipogenic differentiation medium (WAM) on day 14.UCP1, lipids and nuclei of cells are shown in Merge image (Scale bar: 200 μm).F) Higher magnification of UCP1, lipids and nuclei of cells are shown in Merge image of fibrin only samples in BAM (Scale bar: 50 μm).G) Relative gene expressions of several brown adipogenic markers on day 14.Uncoupling protein 1 (UCP1), cell death-inducing DNA fragmentation factor alpha-like effector A (Cidea) and PR domain containing 16 (PRDM16).Results are shown as mean ± standard deviation (n = 3, three independent experiments from one 
donor). Data were normalized to RPII (RNA Polymerase II), the housekeeping gene. Statistical differences, obtained by Student's t-test, are shown as * = p < 0.05, ** = p < 0.01. White dots represent data points of each parallel.

Fig. 3. A) Representative images of immunocytochemical staining of UCP1 (magenta), lipid droplets stained by Nile Red (red), and Hoechst counterstaining used to visualize nuclei (blue) of polymer-mixed brown-like adipose drops on day 14. UCP1, lipids and nuclei of cells are shown in the Merge image (Scale bar: 200 μm; for the high-magnification image, scale bar: 50 μm). B) Measurements of the UCP1 (light grey bars) and Nile Red (red bars) fluorescence of drops, normalized by the DNA (Hoechst) fluorescence. Results are shown as mean ± standard deviation (n = 3, three independent samples from one donor). Statistical differences, obtained by ANOVA tests, are shown as * = p < 0.05 for UCP1 and # = p < 0.05 for lipids. C) UCP1 ELISA results of polymer-mixed drops. Results are shown as mean ± standard error (n = 3, three independent experiments from one donor). Statistical differences, obtained by ANOVA tests, are shown as * = p < 0.05. White dots represent data points of each parallel.

Fig. 5. A) Representative MitoTracker staining images of mitochondria (red) and Hoechst counterstaining used to visualize nuclei (blue) of drops incubated in BAM on days 7 and 14. The mitochondria and nuclei of cells are shown in the Merge image (Scale bar: 200 μm). B) Measurement of the MitoTracker fluorescence of the samples, normalized by the DNA (Hoechst) fluorescence. Results are shown as mean ± standard error (n = 3, three independent experiments from one donor). Statistical differences were obtained by ANOVA tests: * = p < 0.05, ** = p < 0.01 when compared with the day 7 fibrin-only group as control; • = p < 0.05, •• = p < 0.01 when compared with the day 14 fibrin-only group as control. C) Oxygen consumption percentages over 24 h. Results are shown as mean ± standard error (n = 3, three independent experiments from one donor). Statistical differences, obtained by ANOVA tests, are shown as * = p < 0.05, ** = p < 0.01, *** = p < 0.001. White dots represent data points.

Fig. 6. Conclusion of the study. A) T3 adsorption monitoring on the PLL-coated surface over time, based on the 302 nm absorption point (n = 3, three independent experiments). B) Insulin adsorption monitoring on the PLL-coated surface over time, based on the 277 nm absorption point (n = 3, three independent experiments). C) Representative HSPG staining image of DFATs incubated in GM containing F-PLL (green). HSPG (magenta) and Hoechst counterstaining used to visualize nuclei (blue) (Scale bar: 10 μm). D) Zeta potentials of the T3 and insulin solutions (n = 3, three independent experiments). E) Expected redifferentiation mechanism of DFATs into brown-like adipocytes in PLL-mixed fibrin gel. Positively charged PLL may increase the cellular uptake of T3 and insulin during brown adipogenic induction.
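The figure legends above state that relative gene expression was normalized to the housekeeping gene RPII, and the Results report fold increases relative to the fibrin-only group. The paper does not spell out the quantification formula, so the short sketch below assumes the commonly used 2^(−ΔΔCt) (Livak) method; the function name and the Ct values are illustrative only and are not taken from the study.

def fold_change_ddct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    # ct_target      : Ct of the gene of interest (e.g., UCP1) in the treated sample
    # ct_ref         : Ct of the housekeeping gene (e.g., RPII) in the treated sample
    # ct_target_ctrl : Ct of the gene of interest in the control (fibrin-only) sample
    # ct_ref_ctrl    : Ct of the housekeeping gene in the control sample
    d_ct_sample = ct_target - ct_ref              # normalize to RPII in the treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalize to RPII in the control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative numbers only (not from the study): yields roughly a 2.6-fold increase.
print(fold_change_ddct(ct_target=24.6, ct_ref=18.0, ct_target_ctrl=26.0, ct_ref_ctrl=18.0))

Replicate fold changes computed in this way would then be averaged per group and compared statistically, matching the mean ± standard deviation values reported in Fig. 4.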
2024-07-19T15:19:45.463Z
2024-07-17T00:00:00.000
{ "year": 2024, "sha1": "50edd2f3fbdeb11590a52eb10b8602348dace1b6", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "9cdd98c904fee986247237a4c38b2f405df639b7", "s2fieldsofstudy": [ "Materials Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
211079689
pes2o/s2orc
v3-fos-license
RUNX1 Mutations in the Leukemic Progression of Severe Congenital Neutropenia Somatic RUNX1 mutations are found in approximately 10% of patients with de novo acute myeloid leukemia (AML), but are more common in secondary forms of myelodysplastic syndrome (MDS) or AML. Particularly, this applies to MDS/AML developing from certain types of leukemia-prone inherited bone marrow failure syndromes. How these RUNX1 mutations contribute to the pathobiology of secondary MDS/AML is still unknown. This mini-review focusses on the role of RUNX1 mutations as the most common secondary leukemogenic hit in MDS/AML evolving from severe congenital neutropenia (SCN). INTRODUCTION The occurrence and frequency of RUNX1 mutations in a variety of hematological malignancies has been well-documented (Sood et al., 2017). Originally identified as a chromosomal translocation partner in the so-called core-binding factor (CBF) leukemias, somatic RUNX1 mutations were also found in myeloid malignancies, particularly in myelodysplastic syndrome (MDS) and acute myeloid leukemia (AML) (Chen et al., 2007;Christiansen et al., 2004;Gaidzik et al., 2011;Harada et al., 2004;Mangan and Speck, 2011;Osato, 2004;Schnittger et al., 2011;Steensma et al., 2005;Tang et al., 2009). Somatic mutations in RUNX1 cluster mostly within the N-terminal Runt homology domain (RHD) whereas mutations disrupting the C-terminal transactivation domain (TAD) occur less frequently (Gaidzik et al., 2011;Preudhomme et al., 2000;Schnittger et al., 2011;Tang et al., 2009). Importantly, mutations in RUNX1 were identified as the cause of familial platelet disorder, in which patients show a predisposition to develop MDS or AML (FPDMM or FPD/AML) (Song et al., 1999). These germline mutations are similar to those acquired in MDS/AML (Song et al., 1999). Finally, it has become clear that somatic RUNX1 mutations are particularly prevalent in MDS/AML secondary to inherited bone marrow failure syndromes (iBMFs) such as Fanconi anemia and severe congenital neutropenia (SCN), and in radiation-associated MDS/ AML (Harada et al., 2003;Quentin et al., 2011;Skokowa et al., 2014). These forms of secondary MDS/AML (sMDS/ AML) are characterized by an adverse prognosis due to refractoriness to treatment. Why secondary RUNX1 mutations are associated with sMDS/AML and how they contribute to the pathogenesis of these conditions remains largely unclear. Here, we will discuss the current insights and ideas regarding mutant RUNX1 in the context of malignant transformation of iBMFs, taking SCN as the leading example. Specifically, we will briefly summarize and discuss our most recent insights into these issues based on observations in patients, mouseand induced pluripotent stem cell (iPSC)-models. phil counts, leading to life-threatening bacterial infections (Skokowa et al., 2017). Autosomal dominant mutations in ELANE, the gene encoding neutrophil elastase, are the most frequently observed genetic defects in SCN patients. How these mutations give rise to severe neutropenia is still largely unknown (Skokowa et al., 2017). Life-long administration of colony stimulating factor 3 (CSF3), also known as granulocyte colony-stimulating factor (G-CSF), successfully alleviates the neutropenia in the majority of SCN patients (Dale et al., 1993). Importantly, SCN patients have a high risk of developing MDS or AML, with a median incidence of 21%, 15 years after initiation of CSF3 treatment (Rosenberg et al., 2006;2010). 
The majority of SCN patients with leukemic progression show the appearance of hematopoietic clones with somatic mutations in CSF3R, resulting in a truncated form of CSF3R with defective internalization and aberrant signaling properties (Touw, 2015). These clones may persist for months or even years before MDS or AML becomes overt (Germeshausen et al., 2007), raising the question how these CSF3R mutants contribute to the malignant transformation of SCN. Activation of oxidative stress through enhanced production of reactive oxygen species (ROS) and sustained activation of signal transducer and activator of transcription STAT5 have been put forward as candidate mechanisms by which activation of truncated CSF3R drive clonal expansion of myeloid progenitors (Liu et al., 2008;Zhu et al., 2006). RUNX1 MUTATIONS IN SCN-MDS/AML PATIENTS Like for numerous other disease conditions, the introduction of massive parallel ("next generation") sequencing has greatly advanced our insights into the genomic defects associated with the leukemic progression of SCN. A retrospective analysis in an ELANE-SCN patient, who continuously received CSF3 therapy for 15 years and during which period serial BM sampling was done, showed that after the occurrence of multiple CSF3R mutant clones 2 years after the start of CSF3 treatment, no additional mutations were detected until MDS/ AML became clinically overt (Beekman et al., 2012). At that fully transformed stage, a limited number of clonal mutations in regulatory genes, including RUNX1, SUZ12, ASXL1 and EP300, were present (Beekman et al., 2012). This pattern of leukemic evolution was confirmed in a follow-up study involving 31 SCN-MDS/AML cases (Skokowa et al., 2014). Importantly, this study revealed that mutations in RUNX1 are by far the most frequent somatic secondary mutations in SCN-MDS/ AML and preferentially occurred in CSF3R mutation clones. In SCN-MDS/AML, mainly RUNX1 mutations disrupting the RHD, essential for DNA binding and for interaction with the regulatory protein CBFβ, were found (Beekman et al., 2012;Skokowa et al., 2014). In view of these characteristics, the molecular pathogenesis of SCN/AML serves as an attractive model to investigate the role of secondary RUNX1 mutations in a molecularly well-defined process of leukemic progression. MOUSE MODEL TO STUDY THE IMPACT OF Csf3r AND RUNX1 MUTATIONS IN CONJUNCTION WITH CSF3 TREATMENT The impact of Runx1 and mutants on hematopoietic cell development has been investigated in a variety of mouse models and has been the subject of several recent reviews (Bellissimo and Speck, 2017;Chin et al., 2015;Harada and Harada, 2009;Sood et al., 2017). Notwithstanding some contradictory results, possibly related to discrepancies in the immune-phenotyping based classification of stem cell subpopulations, it is generally accepted that wild type Runx1 has no major impact on the production and function of longterm hematopoietic stem cells in mice, both under homeostatic conditions and under conditions of proliferative stress (Cai et al., 2011). More relevant in the context of RUNX1 mutations in SCN-MDS/AML are the mouse models with RUNX1-RHD mutations equivalent to those recurrently found in patients (Harada et al., 2004). 
Watanabe-Okochi and colleagues studied the effects of such a mutant in transplantation experiments, in which donor bone marrow (BM) cells were transduced with a murine leukemia virus (MLV)-derived vector to express the most common RUNX1 mutant D171N and reported that this resulted in MDS and MDS/AML (Watanabe-Okochi et al., 2008). However, integration of the MLVbased vector in the Mecom (Evi1) locus caused overexpression of Evi1 in these mice (Watanabe-Okochi et al., 2008). Because the combination of RUNX1 mutations and high EVI1 expression is rarely seen in MDS/AML, and because high Evi1 expression can be leukemogenic by itself, the contribution of RUNX1-RHD in MDS/AML development in more general could not be accurately deduced from this model (Harada et al., 2013;Watanabe-Okochi et al., 2008). In fact, expression of mutant D171N in human cord blood cells had marginal effects on the rate of proliferation and differentiation capacity of CD34 + cells in in vitro suspension culture relative to empty vector control cells, suggesting that isolated RUNX-RHD mutations are only weakly leukemogenic (Goyama et al., 2013). To study the role of RUNX1-D171N in a context relevant to SCN-MDS/AML, we used a mouse model expressing a truncated Csf3r (Csf3r-d715) identical to the mutant CSF3R form in SCN patients (Hermans et al., 1998;. To avoid the tropism of MLV-based vectors for oncogenic enhancers, we generated a lentiviral expression vector to express RUNX1 mutant D171N (Goyama et al., 2013) in conjunction with enhanced green fluorescent protein (eGFP) in Csf3r-d715 BM cells, which were subsequently serially transplanted in wild type recipients. Recipients were treated either 3× a week with CSF3 or with PBS (solvent control). Transcriptome analysis and whole exome sequencing on FACS purified eGFP + Linc-Kit + (LK) populations were done to identify molecular pathways associated with leukemic progression. Sequential CD34 + cell samples from a SCN/AML patient with identical CSF3R and RUNX1 mutations (Beekman et al., 2012) and whole genome sequencing data from diagnostic AML samples were used for clinical comparisons (Olofsen et al., 2018). CSF3 treatment of primary recipients transplanted with Csf3r-RUNX1 mutant BM cells resulted in sustained (30+ weeks) presence of eGFP + LK cells in the peripheral blood (PB), which had the morphological appearance of myeloblasts. The PB also contained eGFP + neutrophils, indicating that myeloid differentiation was not completely blocked. Importantly, none of these primary recipient mice succumbed to symptoms of AML, suggesting that the elevated myeloblasts in the PB reflected a pre-leukemic rather than a fully transformed state. However, upon transplantation in secondary and tertiary recipients, mice developed Csf3r-RUNX1 mutant AML that was no longer dependent on CSF3 administration. Transcriptome profiles of purified eGFP + LK cells sorted before transplantation, showed that expression of RUNX1 mutant protein in Csf3r mutant cells resulted in elevated proliferative/ metabolic signatures characterized by elevated MYC and mTORC1 signaling relative to empty vector controls. Strikingly, at the sequential steps of leukemic transformation in the mouse model, these signatures declined while TNFα-, interferon-and interleukin-6-driven inflammatory responses were increasingly upregulated. Whole exome sequencing performed on the LK-cells from these stages revealed that an internal tandem duplication (ITD) in Cxxc4 was acquired. 
In the secondary and tertiary recipients all AML cells harbored the Csf3r, RUNX1 and heterozygous Cxxc4 mutations, while the primary recipient showed a subclonal Cxxc4 mutation (VAF: 0.27). The mutation resulted in a 7-fold higher expression of CXXC4 protein. CXXC4 was previously shown to inhibit TET2 protein levels (Hino et al., 2001;Ko et al., 2013) and in agreement with this, TET2 levels were strongly reduced in the CXXC4 mutant/overexpressing leukemic samples. Intriguingly, CXXC4 mutations have also been detected in human AML cases, including the ITD mutations identified in our mouse model (Olofsen et al., 2018;Olofsen et al., Unpublished reference). These observations in mice fit into a model in which the activation of a truncated Csf3r by the sustained administration of CSF3 and the presence of RUNX1-RHD mutant D171N give rise to a premalignant state, characterized by the accumulation of LK cells in the PB and elevated activation of proliferative signalling (Fig. 1). An additional clonal mutation that reduces the levels of TET2 drives the full transformation to AML, at which stage the leukemia-initiating cells have lost their need for CSF3 for propagation in vivo and the AML blasts have adopted an inflammatory signature identical to that of SCN/AML cells with identical mutations in CSF3R and RUNX1 (Fig. 1) (Beekman et al., 2012;Schmied et al., Unpublished reference). Although CXXC4 mutations have thus far not been reported in clinical SCN/AML samples, mutations potentially affecting TET2 levels and/or function, such as mutations in polycomb repressor complex-2 genes (EZH2, SUZ12) are recurrently present (Beekman et al., 2012;Skokowa et al., 2014). STUDIES IN INDUCED PLURIPOTENT STEM CELL (iPSC) MODELS The use of patient-derived iPSC lines has created new possibilities to model diseases, including myeloid malignancies (Papapetrou, 2019). In the context of RUNX1, these studies have mainly dealt with FPD/AML, characterized by germline RUNX1 mutations (Antony-Debre et al., 2015;Connelly et al., 2014;Sakurai et al., 2014). Key features of these iPSC lines are (i) their reduced ability to generate CD34 + CD45 + hematopoietic stem and progenitor cells (HSPCs) and (ii) their affected ability of megakaryocyte (Mk) production and pro-platelet formation, thus explaining the platelet defects observed in patients. The reduced production of HSPCs from FPD-derived iPSCs is consistent with a role of RUNX1 in hematopoietic development from pluripotent stem cells (Yzaguirre et al., 2017). As mentioned above, in SCN patients who develop MDS or AML, RUNX1 mutations are most often acquired in CSF3R mutant HSPC clones. Hence, it is important in this context to assess the consequences of somatic RUNX1 mutations in HSPCs cells that already harbor a CSF3R nonsense mutation. To achieve this, a CRISPR/Cas9-based strategy was used to introduce a patient-derived CSF3R nonsense mutation into iPSCs. After switching the cells to hematopoietic culture conditions (STEMdiff Hematopoietic Kit from STEM-CELL Technologies, Canada), CD34 + CD45 + cells were lentivirally transduced to express the RUNX1-RHD D171N mutant. These experiments showed that the combined presence of CSF3R and RUNX1 mutations had a moderate effect on myeloid differentiation, characterized by a relative abundance of immature neutrophilic differentiation stages, but not by an absolute differentiation block (Fig. 2). 
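As a side note to the clonal analysis described earlier in this section (the Cxxc4 variant reported at VAF 0.27 in the primary recipient): variant allele frequency is simply the fraction of sequencing reads supporting the variant, and for a heterozygous mutation at a diploid locus the fraction of cells carrying it is roughly twice the VAF. The sketch below is a generic illustration of that arithmetic, not part of the original analysis; the read counts are invented.

def vaf(alt_reads, total_reads):
    # Variant allele frequency: fraction of reads supporting the variant allele.
    return alt_reads / total_reads

def est_clone_fraction(vaf_value, copies_mutated=1, total_copies=2):
    # Rough clonal fraction, assuming a diploid locus with no copy-number change.
    return vaf_value * total_copies / copies_mutated

v = vaf(alt_reads=27, total_reads=100)   # illustrative counts giving VAF = 0.27
print(v, est_clone_fraction(v))          # 0.27 -> roughly 54% of sampled cells

Under these assumptions, a VAF of 0.27 corresponds to roughly half of the sampled cells carrying the mutation, consistent with it being subclonal in the primary recipient, whereas a fully clonal heterozygous mutation would be expected near VAF 0.5.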
As such, these findings corroborate the findings in the mouse model described above and further suggest that secondary RUNX1 mutations in clones with CSF3R mutations do not confer a fully transformed, i.e., MDS/AML like phenotype. In agreement with this, transcriptome analysis showed that the CSF3R-RUNX1 mutant cells had elevated proliferative signatures but did not show the inflammatory profiles seen in the SCN/AML patient and the mouse AML cells. A key question that remains to be addressed is how mutations in ELANE, HAX1 and other In primary recipients, the combination of Csf3r-d715 and RUNX1-D171N gives rise to accumulation of immature LK cells. This occurs only when mice are treated with G-CSF (CSF3) and mice do not succumb to symptoms of leukemia. Upon secondary and subsequent transplantations of these LK cells, a G-CSF independent AML develops, which is characterized by elevated inflammatory responses and reduced TET2 protein levels. SCN-causing mutations contribute to leukemic progression in conjunction with CSF3R and RUNX1 mutations. Preliminary data from these models suggest that ELANE and HAX1 mutations cause elevated levels of ROS in CD34 + CD45 + HSPCs generated from SCN-iPSCs, resulting in the upregulation of anti-oxidant pathways (Olofsen et al., 2019). Future work should clarify whether and to what extent the oxidative damage caused by ROS and the adaptive anti-oxidant protection mechanisms contribute to malignant transformation. CONCLUSIONS AND OUTLOOK The role of RUNX1 mutations in the development of MDS and AML remains incompletely understood. Studies in mouse-, patient-and iPSC-models, addressing the role of a recurrent RUNX1 mutation (D171N) in combination with the most frequent CSF3R mutation in the leukemic progression of SCN (CSF3R-d715), showed that RUNX1-D171N enhanced the activation of proliferative signaling pathways but only mildly affected myeloid differentiation, leading to a relative accumulation of immature cells but not to an absolute differentiation block. Furthermore, the studies in mice showed that the combination of these two mutations is not enough for leukemic progression, even when the mice were subjected to sustained G-CSF (CSF3) treatment. These findings established that additional events, one of which affecting TET2 levels, are necessary for full leukemic transformation. Leukemic progression in all models was also associated with enhanced interferon-γ, interleukin-6 and TNFα/NFkB signaling, suggesting that inflammation is a major additional component in the development of myeloid malignancy involving RUNX1 mutations. How this interplay between mutant CSF3R signaling, aberrant transcriptional control by mutant RUNX1, loss of TET2 function and inflammatory responses contributes to leukemic transformation and what the exact causal relationships between these mechanisms are remains to be addressed. Detailed insights into this complex network of events may help to discover biomarkers for early detection of leukemic progression of SCN and possibly other forms of iBMFs and may provide leads for novel forms of therapeutic intervention to avoid full malignant transformation of these conditions. Disclosure The authors have no potential conflicts of interest to disclose. Fig. 2. Myeloid differentiation of iPSC-derived CD34 + CD45 + cells, genome-edited to express a truncated form (d715) of CSF3R and transduced with RUNX1-D171N lentiviral expression vector or empty vector (ev) control. 
Cells were cultured in suspension for a total of 9 days in medium supplemented with myeloid growth factors, i.e., a cocktail of IL3, SCF, GM-CSF and G-CSF for the first 4 days, followed by G-CSF as the single growth factor for the next 5 days.
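Purely as an illustrative way to encode the staged growth-factor protocol described in the legend above (the variable names and structure are my own, not from the paper), the two-phase schedule can be written as a small lookup table:

# Two-phase myeloid differentiation schedule from the figure legend (9 days total).
CULTURE_SCHEDULE = [
    {"days": range(1, 5),  "growth_factors": ["IL3", "SCF", "GM-CSF", "G-CSF"]},  # days 1-4: cytokine cocktail
    {"days": range(5, 10), "growth_factors": ["G-CSF"]},                          # days 5-9: G-CSF only
]

def factors_on_day(day):
    # Return the growth factors present in the medium on a given culture day.
    for phase in CULTURE_SCHEDULE:
        if day in phase["days"]:
            return phase["growth_factors"]
    raise ValueError("day %d is outside the 9-day protocol" % day)

print(factors_on_day(3))  # ['IL3', 'SCF', 'GM-CSF', 'G-CSF']
print(factors_on_day(7))  # ['G-CSF']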
2020-02-12T14:04:22.475Z
2020-02-03T00:00:00.000
{ "year": 2020, "sha1": "e62094fe7ff7c5d53f1b61cfad4da5266add0127", "oa_license": "CCBYNCSA", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "799699d8a3c6027585f361330aaa0c2358d423dd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
246161986
pes2o/s2orc
v3-fos-license
Seed priming with selenium: Effects on germination, seedling growth, biochemical attributes, and grain yield in rice growing under flooding conditions Abstract Prevalent irregular rainfall, flooding for weed control, and unleveled fields in the middle and lower reaches of the Yangtze River all contribute to flooding stress on germination and growth of direct‐seeded rice ( Oryza sativa L.). Herein, some experiments were conducted so as to assess the effects of seed priming with selenium (Se) on the germination and growth of rice under hypoxia. The experiment was arranged in a completely randomized factorial design with two factors and five replicates. Factors included Se concentration (0, 30, and 60 μmol/L) and duration of flooding stress (0, 2, 4, and 8 days). The experimental results showed that Se accelerated seed germination and increased emergence index and final emergence percentage. Additionally, Se increased shoot and root lengths and dry weights, but high Se concentration (60 μmol/L) reduced 18‐day‐old seedling dry weight under long‐term flooding (8 days). Furthermore, Se reduced malondialdehyde content and increased starch hydrolysis efficiency in seeds, superoxide dismutase, peroxidase, catalase, and glutathione peroxidase activities and seedling soluble protein and total chlorophyll contents. Se improved seedling total Se and organic Se contents while increasing total dry weight and yield. Notably, the highest yield was obtained after a 4‐day flooding period. Although Se priming favored rice seedling emergence and growth under flooding conditions, Se concentrations equal or above 60 μmol/L increased the risk of seedling death during long‐term flooding (≥8 days). | INTRODUCTION Rice (Oryza sativa L.) is one of the most important staple cereal crops in the world, and more than half of the population in China depends on rice for food (Kennedy, 2002). With the acceleration of industrialization and urbanization, the labor force engaged in agricultural production in China is rapidly shifting to secondary and tertiary industries, leading to a rapid rise in the costs of agricultural labor (Lu et al., 2019). The traditional system of rice production in China, which involves the transplanting of seedlings from a nursery into a paddy field, faces unprecedented challenges because the production pattern is labor, water, and energy intensive, making the overall process less profitable (Ge et al., 2018). As an alternative, and owing to its low-cost and labor-saving features, direct seeding is currently receiving much attention worldwide and is being widely promoted, especially in China. Direct seeding contributes less greenhouse gas emissions than rice transplanting, thereby contributing to environmental protection efforts and sustainable agricultural development (Tao et al., 2016). However, when direct seeding is subjected to heavy precipitation or unleveled fields during seed germination and seedling growth, rice seeds become susceptible to flooding (Lal et al., 2018). Moreover, weeds are more competitive than rice seeds and may severely inhibit the growth of directly seeded rice seedlings (Chamara et al., 2018). Fortunately, soil flooding after direct seeding is an effective, environmentally friendly, and low-cost weed control method (Chamara et al., 2018). Nevertheless, irregular rainstorms, unleveled fields, or flooding for weed prevention may cause hypoxia (low O 2 availability) or anoxia (no O 2 ) stress in rice seedlings. 
Although rice is the only cereal crop that can germinate and extend its coleoptile under oxygenlimited conditions (Ismail et al., 2009), an insufficient oxygen supply will inhibit the aerobic respiration of rice, causing carbohydrates stored in the endosperm to provide only a small amount of energy through oxidation pathways to support the elongation of the coleoptile. Consequently, rice seedlings may fail to develop roots and leaves (Ismail et al., 2009). Limiting oxygen may induce restricted seedling growth or death, the formation of uneven and insufficient production groups, and, eventually, reduced grain yields (Lal et al., 2018). Rice seedlings resist flooding mainly through two mechanisms: (1) the low-oxygen quiescence syndrome, whereby the rice shoot does not elongate upon submergence but regrows after de-submergence, or (2) low-oxygen escape syndrome, whereby the shoot extends rapidly under flood waters to reach the water surface (Ma et al., 2020). Selenium (Se) is an essential trace element for maintaining the normal functioning of many physiological processes (Kieliszek & Błażejak, 2016;Pappas et al., 2019). All forms of life, from primitive cells to complex organisms, require certain amounts of Se to be incorporated to special enzymes and cellular components for their metabolic functions (Oraby et al., 2015;Rayman, 2012). Studies have shown that low concentration of Se is beneficial for plant growth and development (Feng et al., 2013;Kaur et al., 2014). A small amount of Se can not only improve the quality and yield of plants but also modulate multiple stress-responsive genes (Gupta & Gupta, 2017;Moulick et al., 2016;Wang et al., 2017). Khaliq et al. (2015) found that when soaking rice seeds with pure Se content between 15 and 60 μmol/L, the germination potential and germination rate of seeds were improved, as well as the activities of various enzymes. In vivo, antioxidative effect is one of the most important physiological functions of Se (Misra et al., 2015). Se can effectively improve the activity of antioxidant enzymes, reduce oxidative damage, and promote plant growth (Filek et al., 2008;Pedrero et al., 2008). Further, Se favors rice seedling emergence and seedling quality (Lidon et al., 2018). Although studies on the effects of flood conditions or selenium on rice performance have been widely reported, still rarely study has been accomplished about the effects of Se on seed germination and seedling growth under flooding conditions. Herein, we hypothesized that seed priming with Se ensures the uniformity of germination and enhances seedling growth under limited-oxygen conditions. Therefore, our objective was to investigate the effectiveness of Se application as a seed germination initiator under flooding conditions. | MATERIALS AND METHODS The experiment was laid out in a split-plot design with flooding duration (FD) as the main plot and Se concentration as the subplot. FD at four levels and Se at three levels were tested with five replicates, and 60 subplots were established in pots (Experiment 1) and in the field (Experiment 2). The FD levels comprised flooding with 10 cm (Ella et al., 2011;Sarkar, 2012) of water for 0 (FD 0 ), 2 (FD 2 ), 4 (FD 4 ), or 8 d (FD 8 ) after rice sowing. Se levels comprised dry rice seeds primed by immersion in a sodium selenite (Na 2 O 3 Se) solution at Se 0 (Se 0 ), 30 (Se 30 ), or 60 μmol/L (Se 60 ). 
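As a quick check of the layout described above (a sketch using only the factor levels given in the text), enumerating the flooding-duration and Se levels with five replicates reproduces the stated 60 experimental units:

from itertools import product

flooding_days = [0, 2, 4, 8]    # FD0, FD2, FD4, FD8 (main plot)
se_umol_per_l = [0, 30, 60]     # Se0, Se30, Se60 (subplot)
replicates = range(1, 6)        # five replicates

subplots = [
    {"FD": fd, "Se": se, "rep": r}
    for fd, se, r in product(flooding_days, se_umol_per_l, replicates)
]
print(len(subplots))  # 4 x 3 x 5 = 60 subplots, as stated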
In each treatment, after the rice seeds were disinfected with 15% NaClO for 15 min and rinsed with distilled water for 20 min, 25 g of seeds was placed in a 200-ml conical flask containing a 125 initiator solution. The flask was placed in an incubator in darkness, at 25 AE 1 C and 80% relative humidity. After 24 h, the seeds were filtered through gauze, placed in distilled water for 20 min, rinsed five times with ultrapure water, and set aside. Autoclavable glass Petri dishes lined with double layers of filter paper were placed on a laboratory bench and left to air-dry for 24 h. The soil used in Experiment 1 was a silty clay loam with 24% sand (2.00-.02 mm), 40% silt (.02-.002 mm), and 36% clay (<.002 mm), collected from the plow layer (0-20 cm) of an arable field in Yangtze University, Hubei Province, Southern China. Each kilogram of soil at pH 5.94 contained 34.52 g of organic matter, 224.19 mg of available N, 1.37 mg of available P, 127.24 mg of available K, and .29 mg of total Se. The soil was air-dried, sieved to <5 mm, and homogeneous. Basal fertilizers (120 mg N/kg soil as NH 4 NO 3 , 30 mg P/kg soil, 75.5 mg K/kg soil as K 2 HPO 4 ) were added to the soil and mixed thoroughly. The mixed soil (5 kg) was packed into plastic pot with a diameter of 25 cm in diameter and a height of 30 cm. One hundred seeds were evenly sprinkled on the soil surface in each treatment, then immediately irrigated water by 10 cm, and maintained at a constant water level during inundation. After flooding duration ended, all treatments maintained a 1 cm water level until harvest (18 days). The FD were randomly arranged on a frame inside a glasshouse at 28/25 C day/night and a 16 h/day photoperiod with natural sunlight supplemented with sodium vapor lamps to maintain a light intensity > 350 μmol m À2 s À1 . Experiment 2 was conducted in 2018-2019 at the Yangtze University farm in Jingzhou County, Hubei Province, China (112 04 0 -112 05 0 N, 30 32 0 -30 33 0 E). The soil was the same as that used in Experiment 1. After land preparation, seeds treated in the same manner as those in Experiment 1 were sown directly in the field at a rate of 60 kg ha À1 , followed immediately by 10-cm-high flooding. After flooding duration ended, the guidelines for the local high-yield field water management mode were utilized. Each plot was fertilized as follows: N 150 kg ha À1 , P 2 O 5 59 kg ha À1 , and K 2 O 120 kg ha À1 applied in the form of CO (NH 2 ) 2 , (NH 4 ) 2 HPO 4 , and KCl, respectively. Specifically, 60% of N was applied as a basal fertilizer and the remaining 40% as a tillering fertilizer, whereas 100% of P 2 O 5 and K 2 O were applied as base fertilizers. Treatments were arranged in a randomized complete block design with five replications and a plot area of 24.0 m 2 (6 Â 2 m). Manual weeding was performed in the early tiller, late tiller, and heading stages. Pest and disease incidence were intensively controlled. | Rice emergence In Experiment 1, seedling emergence was counted daily using the method prescribed by the Association of Official Seed Analysts until the 10th day. A seedling was scored as 'emerged' when its hypocotyl length was ≥2 mm. The time to start emergence (TSE) of seeds were recorded. 
The time taken to 50% emergence (E 50 ), mean emergence time (MET), emergence index (EI), and final emergence percentage (FEP) of seeds were calculated following Khaliq et al. (2015). For E 50 , N is the number of emerged seeds in 10 days, and ni and nj are the cumulative numbers of emerged seeds by adjacent counts at times ti and tj, respectively (ni < N/2 < nj). For MET, n is the number of seeds emerged on day D, and D is the number of days counted from the beginning of emergence (D ≤ 10). For EI, Et is the number of seeds emerged on day t and Dt is the corresponding time in days. For FEP, E18 is the number of emerged seeds on the 18th day.

| Seedling morphology
In Experiment 1, shoot and root lengths of five randomly selected seedlings were measured at both 10 and 18 days after sowing from each experimental unit, in normally emerged seedlings. The five measured seedlings were oven-dried at 70 °C for 72 h to obtain the dry biomass. Then, the dried samples were ground into powder to pass through a .15-mm sieve and sealed in ziplock bags before Se analysis.

| Biochemical analyses
In Experiment 1, lipid peroxidation in the rice seeds on the second day was determined from the malondialdehyde (MDA) content using the thiobarbituric acid method (Yang et al., 2014). The α-amylase activity in ground rice seeds on the second day was measured according to a reported technique (Mahakham et al., 2017). Total soluble sugar and starch contents in the rice seeds on the second day were quantified according to Khaliq et al. (2015).

| Yield and total dry weight
In Experiment 2, grain yields and total dry weight were measured at maturity by taking 5-m 2 plant samples at the center of each plot. Plant samples were separated into filled grains and straw. Filled grains and straw were dried in an oven at 70 °C to a stable weight and weighed, and grain yield was calculated at 14% moisture content. The grain was further processed into polished rice, and polished rice samples were ground into powder to pass through a .15-mm sieve and sealed in ziplock bags before Se analysis.

| Determination of Se concentrations
Contents of total Se and organic Se in the seedlings (Experiment 1) and polished rice (Experiment 2) were determined according to the method of Deng et al. (2017).

| Statistical analyses
All experimental data are expressed as means ± standard errors (SE) of five replicates. The data were subjected to two-way analysis of variance (ANOVA) to determine the effects of flooding duration, Se treatment, and the interaction between them. Significant differences between flooding duration and Se treatments within the same year were tested by Duncan's multiple range tests. The significance level was p < .05.

| Rice emergence
Flooding duration, Se treatment, and their interaction significantly affected rice emergence, although the interaction had no significant effect on TSE (Table 1).

| Seedling morphology
Flooding duration, Se treatment, and their interaction significantly affected seedling morphology, although the interaction had no significant effect on shoot length on the 18th day (Table 2).

| Biochemical attributes of seeds
Se treatments significantly affected the biochemical attributes of rice seeds (Table 3). Notes: One unit of the enzyme's activity is the amount of enzyme that releases 1 μmol of maltose per 1 ml of original enzyme solution in 1 min. Means (n = 5) with different letters differ significantly at the 5% probability level based on Tukey's test. Abbreviation: ns, nonsignificant at p > .05. *Significant at p < .01. **Significant at p < .001.
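The display equations for E 50 , MET, EI, and FEP in the Methods above did not survive extraction; only the variable definitions remain. The sketch below therefore uses the standard forms of these indices as commonly cited from Khaliq et al. (2015) and the AOSA rules — E50 = ti + (N/2 − ni)(tj − ti)/(nj − ni), MET = ΣDn/Σn, EI = Σ(Et/Dt), and FEP = 100·E18/(seeds sown) — which should be treated as an assumption rather than a verbatim reconstruction of the paper's equations (in particular, the FEP denominator is assumed to be the number of seeds sown).

def emergence_indices(daily_counts, seeds_sown, emerged_day18):
    # daily_counts  : newly emerged seedlings counted on days 1..10 (Experiment 1)
    # seeds_sown    : seeds per experimental unit (100 in the pot experiment)
    # emerged_day18 : cumulative emerged seedlings on day 18 (E18)
    days = list(range(1, len(daily_counts) + 1))
    cumulative, total = [], 0
    for n in daily_counts:
        total += n
        cumulative.append(total)
    N = cumulative[-1]                                   # emerged seeds within 10 days

    # E50: interpolate the time at which cumulative emergence passes N/2
    if cumulative[0] >= N / 2:
        e50 = days[0]
    else:
        e50 = None
        for i in range(1, len(days)):
            ni, nj, ti, tj = cumulative[i - 1], cumulative[i], days[i - 1], days[i]
            if ni < N / 2 <= nj:
                e50 = ti + (N / 2 - ni) * (tj - ti) / (nj - ni)
                break

    met = sum(d * n for d, n in zip(days, daily_counts)) / N     # mean emergence time
    ei = sum(n / d for d, n in zip(days, daily_counts))          # emergence index
    fep = 100.0 * emerged_day18 / seeds_sown                     # final emergence percentage
    return e50, met, ei, fep

# Illustrative counts only:
# emergence_indices([0, 5, 20, 30, 20, 10, 5, 5, 3, 2], seeds_sown=100, emerged_day18=92)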
| Effect of Se concentration in seed soaking solution on seedlings
Flooding duration and Se treatments considerably affected seedling total Se and organic Se contents, although the interaction between them significantly affected only organic Se content (Table 5).

| Total dry weight, yield, and Se content in polished rice
Flooding duration, Se concentration, and their interaction significantly affected total dry weight and yield in both years, but the interaction had no significant effect on Se content in polished rice in 2018 (Table 6). In 2018-2019, total dry weight and yield first increased and then decreased as flooding duration increased. Both total dry weight and yield were highest in FD 4 and lowest in FD 0 . As a result, mean dry weight and yield in FD 0 were 9.60% and 6.80% lower, respectively, compared with FD 4 . Both total dry weight and yield increased with increasing Se concentration. The corresponding mean values increased by 6.96% and 7.04%, respectively, in Se 60 compared with Se 0 . However, the Se 60 treatment group showed reduced total dry weight and yield in FD 8 in both years. In summary, the highest yield was obtained under the 4-day flooding treatment with Se priming.

| DISCUSSION
In Southern China, rice seedling quality is ensured by the wet direct-seeding method, which involves soaking the seeds until the embryo breaks through the husk and white spots appear (Liu et al., 2014). During the sowing period, in which the seeds are generally in the second phase of germination, anaerobic stress caused by rain, uneven fields, or flooding for weed control will cause anaerobic respiration to replace aerobic respiration, thereby reducing the efficiency of starch hydrolysis into soluble sugars (Ella et al., 2011). In the experiments reported herein, seed germination and seedling growth were restricted with increasing duration of flooding conditions (Tables 1 and 2), consistent with the results of a previous study (Ella et al., 2011). Furthermore, except for Se 60 in FD 8 , Se seed priming accelerated seed germination, increased EI and FEP, and promoted seedling growth (Tables 1 and 2). Reportedly, rice germination and growth are promoted by seed priming with low Se concentrations (Moulick et al., 2016) but are inhibited at high Se concentrations (Du et al., 2019); consistently, our data showed that Se 60 reduced 18-day-old seedling dry weight in FD 8 (Table 2). Although Se 60 enhanced the anaerobic stress-escape mechanism by accelerating seed germination and promoting shoot and root growth, shoot elongation reportedly consumes more endosperm nutrients and reduces seedling dry weight under prolonged flooding (Nishiuchi et al., 2012). These results indicate that Se seed priming at a high concentration entails certain risks in the case of prolonged flooding in the field. The level of MDA, an important biochemical indicator of plant stress, increased significantly under conditions of extended flooding duration but decreased markedly with increasing Se concentration (Table 3). This indicates that seeds subjected to Se priming show higher antioxidant capacity and more intact cell membranes to resist flooding stress damage. Flooding induces oxidative stress, which in turn induces an increase in the production of MDA (Gautam et al., 2014). Concomitantly, the ability of α-amylase to hydrolyze starch into soluble sugars provides energy for coleoptile growth under limiting O 2 conditions and is therefore directly related to the ability of rice to resist flooding stress (Vijayan et al., 2018).
This study showed that Se application accelerated starch hydrolysis (Table 3) and, consistently, seed priming with Se reportedly increases the activity of α-amylase (Khaliq et al., 2015).

TABLE 6. Effects of selenium (Se) concentration and flooding duration on total dry weight, yield, and Se content in polished rice in 2018-2019.

These findings verify the hypothesis that Se can strengthen the escape mechanism of seedlings from low-oxygen stress damage by accelerating starch hydrolysis. Antioxidant SOD, POX, CAT, and GPx activities play an important role in the active-oxygen scavenging system of plants (Ella et al., 2011). In this study, SOD, POX, CAT, and GPx activities were lowest in the FD 8 treatment. In contrast, SOD and POX activities were highest in FD 4 , and CAT and GPx activities were highest in FD 2 (Table 4). In agreement with these findings, prolonged flooding duration can reportedly reduce antioxidant enzyme activity (Mondal et al., 2020). Additionally, slight waterlogging stress causes MDA accumulation in rice (Table 3), thereby stimulating the antioxidant enzyme defense system to produce more SOD and POX and maintain oxidative balance in rice cells (Candan & Tarhan, 2012), which likely explains why antioxidant activities were highest in the FD 4 and FD 2 treatment groups. Application of Se also increased the antioxidant enzyme activity level (Table 4). Soluble protein content, an important indicator of total plant metabolism, decreased with prolonged flooding duration but increased with increasing Se concentration (Table 4). These findings corroborate two previous reports (Du et al., 2019; Vijayan et al., 2018). As shown in Table 4, total chlorophyll content decreased with flooding duration but increased with increasing Se concentration. Flooding reportedly promotes the consumption of nonstructural carbohydrates, resulting in a decrease in total chlorophyll content (Gautam et al., 2014). Conversely, Se can improve total chlorophyll content and photosynthesis in rice. Seed priming with low Se concentrations can also increase total chlorophyll content (Khaliq et al., 2015). These findings confirm our conclusion that seed priming with Se helps rice seedlings to restore growth under flooding stress. During imbibition, rice seeds soaked in a Se solution absorb Se through the aleurone layer into the endosperm and embryo cells (Khaliq et al., 2015). Subsequently, inorganic Se in the cells is metabolically assimilated into organic Se by enzymes such as cysteine synthase (Liu et al., 2011). In our experiment, flooding reduced the conversion of selenite to organic Se (Table 5), likely because flooding limits the activity of some enzymes related to Se metabolism in rice cells. In contrast, Se application increased Se concentration in rice seedlings (Table 5), thus corroborating the results of two previous studies (Li et al., 2016; Wang et al., 2012). However, the Se content in polished rice after Se priming did not reach the standard for Se-rich rice (Table 6). Therefore, the production of Se-rich rice requires additional Se application. According to the results of Experiment 1, the interaction between flooding duration and selenium treatments enhanced seed vigor, and seed vigor was positively correlated with field performance in rice (Yamauchi & Winn, 1996). This conclusion was confirmed by the results of Experiment 2 (Table 6).
Our research indicated that different effects of selenium concentration on total dry weight, yield, and Se content in polished rice (except 2018) at different flooding duration (Table 6). In general, with the extension of flooding duration, increasing Se level promoted and then inhibited rice yield. Among them, the highest seedling total dry weight and grain yield were recorded for the FD 4 treatment group. This might be attributed that proper seed priming treatment (Farooq et al., 2006) and low level of Se (Khaliq et al., 2015) could improve seed germination percentage and germination index, thus promoting early germination and resulting in better seedling quality and higher paddy yield (Farooq et al., 2018). However, when flooding duration was extended to 8 days, rice seedling total dry weight and grain yield decreased, thus confirming that long-term flooding caused damage to the rice plants. Furthermore, the prolonged flooding duration reduced antioxidant enzyme activity (Table 4), which might affect the antioxidant function of Se in seeds, thus affecting seedling growth and development. Fortunately, in addition to the lower grain yield in the Se 60 /FD 8 combination treatment, overall, Se increased rice grain yield, which might be related to the improvement of seedling quality. According to Sadeghzadeh and Rengel's (2011) research on Zn seed priming, it can be inferred that improvement in grain yield with Se seed priming might also be attributed to the role of Se in activation of various enzymes. Se seed priming could provide a strong foundation for the plant, which lead to better plant growth and seed setting (Moulick et al., 2018). At present, it has been proved that soaking seeds with Zn could increase rice yield Zulfiqar et al., 2021), but there are still few studies on Se seed priming to improve rice yield. Hence, it can be hypothesized that improvement in grain yield is probably due to the interaction between flooding duration and selenium treatments, which could promote seed germination, improve emergence rate and better seedling stand production, and thus improved effective tillering and seed setting. In the future, it is necessary to further study the effects of the interaction between flooding duration and selenium on rice yield and grain selenium content in order to provide more guidance for Se-rich rice planting in direct seeding fields. ACKNOWLEDGMENTS This work was supported by the National Natural Science Foundation of China (U21A2039) and National Public Welfare Industry Project (201303106). CONFLICT OF INTEREST The authors declare that they have no known competing financial interests or personal relationships that might have influenced the work reported in this paper. The results/data/figures in this manuscript have not been published elsewhere, nor are they under consideration by any other publisher. The corresponding author has read Food Policy's author responsibilities and submits this manuscript in accordance with these policies. All the material is owned by the authors, and/or no permissions were required. In addition, the manuscript has been revised by many of our colleagues, but if the editor believes that the manuscript still needs English editing services, we fully agree and will pay the relevant fees.
High-Throughput Virtual Screening of Small Molecule Inhibitors for SARS-CoV-2 Protein Targets with Deep Fusion Models

Structure-based Deep Fusion models were recently shown to outperform several physics- and machine learning-based protein-ligand binding affinity prediction methods. As part of a multi-institutional COVID-19 pandemic response, over 500 million small molecules were computationally screened against four protein structures from the novel coronavirus (SARS-CoV-2), which causes COVID-19. Three enhancements to Deep Fusion were made in order to evaluate more than 5 billion docked poses on SARS-CoV-2 protein targets. First, the Deep Fusion concept was refined by formulating the architecture as one, coherently backpropagated model (Coherent Fusion) to improve binding-affinity prediction accuracy. Second, the model was trained using a distributed, genetic hyper-parameter optimization. Finally, a scalable, high-throughput screening capability was developed to maximize the number of ligands evaluated and expedite the path to experimental evaluation. In this work, we present both the methods developed for machine learning-based high-throughput screening and results from using our computational pipeline to find SARS-CoV-2 inhibitors.

INTRODUCTION

The COVID-19 disease caused by the severe acute respiratory syndrome coronavirus (SARS-CoV-2) is responsible for the most recent, severe pandemic in modern human history [33]. At the onset of the COVID-19 pandemic, a worldwide effort began to identify and provide target proteins for vaccine and drug development to neutralize the virus. Two distinct proteins were rapidly solved: the trimeric spike protein (spike), which binds to human ACE2 to enter human cells [3], and the main protease (M pro), which plays a pivotal role in viral gene expression and replication [59]. In response to the pandemic, we participated in a large-scale, multi-institutional effort to virtually screen, experimentally test, and optimize therapeutic leads targeting the spike and M pro SARS-CoV-2 protein targets. Two different binding sites from the spike protein (denoted spike1, spike2) and two different conformations of the M pro active site (denoted protease1, protease2) were used in the high-throughput screening calculations.

Experimental tests of drug candidates are expensive and serve as a fundamental bottleneck in drug discovery. As a result, computational chemistry is used extensively to accelerate the discovery process by screening drugs and nominating the strongest candidates for experimental validation [64]. High Performance Computing (HPC) plays a critical role in virtual screening [6,28,30,69,70] by accelerating computationally expensive calculations and providing the scalability necessary to screen large numbers of candidate molecules. This is crucial, as the chemical space of potential molecules has been estimated to be on the order of 10^60 [8,49]. Accurately estimating protein-ligand binding affinities is an important step in drug discovery. However, even computationally expensive, biophysics-based scoring methods find predicting binding free energy a difficult task [27,64,72]. Deep learning methods represent an alternative, rapid approach to binding affinity prediction which alleviates the dependence on hand-curated features, which may not capture the mechanism of binding [1,2].
The two leading deep learning approaches to structure-based binding affinity prediction fall into two categories: 3-dimensional Convolutional Neural Networks (3D-CNNs) [26,48,67] and Spatial Graph Convolutional Neural Networks (SG-CNNs) [13,37,71]. Fundamentally, 3D-CNN models exploit a voxelized representation of atoms in a 3D grid, which portrays protein-ligand compounds in a Euclidean space for inference. On the other hand, SG-CNN approaches leverage a graph representation of the protein-ligand complex, allowing for multiple "edge types" to be encoded in the representation (e.g., distinct distance thresholds corresponding to covalent or non-covalent interactions) to sub-select groups of atoms and evaluate their pairwise interactions.

These methods are significantly different in the way they represent compounds and their mechanisms of inference. However, they seek to achieve the same goal: accurate prediction of binding free energy on novel compounds. This observation led to the hypothesis that the 3D-CNN and SG-CNN likely have complementary strengths, which could be exploited by fusing the latent spaces of each model's learned features. This approach to "Fusion" modeling was explored and shown to achieve superior generalization performance in predicting protein-ligand binding on X-ray crystallographic structures and virtually docked poses of protein-ligand compounds in hold-out test sets [27]. In this work, we detail improvements to Fusion, describe its utility in high-throughput screening, and evaluate its application to the SARS-CoV-2 drug discovery problem. A key innovation reported here is the potential to train a fusion model "coherently". While Coherent Fusion comes with the increased computational burden of training a more complex model, the increase in complexity is mitigated by using HPC to perform parallel, distributed training.
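To make the voxelized representation mentioned above concrete, the following is a minimal, illustrative sketch of how atoms can be binned onto a 3D grid for a 3D-CNN. The grid dimension, resolution, and nearest-voxel assignment are assumptions for illustration, not the exact featurization used in the FAST pipeline.

```python
import numpy as np

def voxelize(coords, channels, grid_dim=48, resolution=1.0):
    """Place per-atom channel values (e.g., element one-hots) onto a cubic grid.

    coords:   (N, 3) atom coordinates in Angstroms
    channels: (N, C) per-atom feature vectors
    Returns a (C, grid_dim, grid_dim, grid_dim) array suitable for a 3D-CNN.
    """
    n_ch = channels.shape[1]
    grid = np.zeros((n_ch, grid_dim, grid_dim, grid_dim), dtype=np.float32)
    # center the grid on the mean atom position
    centered = coords - coords.mean(axis=0)
    idx = np.floor(centered / resolution).astype(int) + grid_dim // 2
    inside = np.all((idx >= 0) & (idx < grid_dim), axis=1)
    for (x, y, z), feat in zip(idx[inside], channels[inside]):
        grid[:, x, y, z] += feat  # nearest-voxel assignment; real pipelines may smooth
    return grid
```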
DEEP FUSION

2.1 Fusion Modeling in Computational Chemistry

Machine learning, specifically deep learning, approaches to protein-ligand binding affinity prediction represent a promising new development in drug discovery [1,2,13,16,26,56,67,71,72]. At a high level, the deep-learned models being proposed for binding affinity prediction are single-pass, feed-forward systems. This fundamental model formulation results in a computational advantage, in that the models quickly predict binding affinity in one pass over their input. The simplicity and speed of deep learning prediction relative to biophysics-based computations make them especially attractive in the context of massive virtual drug screens. The concept of fusion is a recent development in deep learning, initially applied to computer vision problems [35,50,60,66]. In fusion, models are combined by integrating different modes of data or approaches for more predictive power. This concept was recently applied to computational chemistry in the form of Fusion models for Atomic and molecular STructures (FAST) [27], where fusion of the two leading deep learning models (3D-CNN, SG-CNN) was shown to improve binding affinity prediction. Specifically, the Late Fusion and Mid-level Fusion models are shown as approximately equivalent or superior to individual SG-CNNs, individual 3D-CNNs, other state-of-the-art deep learning models [26,56], and physics-based approaches including both Autodock Vina [58] and Molecular Mechanics - Generalized Born / Surface Area (MM/GBSA), which has been shown to improve ligand pose ranking for certain target proteins [19,64]. The Late Fusion and Mid-level Fusion FAST models were specifically described and made publicly available. In the context of this work, we retrain and optimize the models on data from PDBbind-2019 [62].

The Late Fusion approach is simple, but its superior performance compared to individual SG-CNNs and 3D-CNNs shows the potential of fusion modeling. In Late Fusion, SG-CNN and 3D-CNN models were separately trained to predict absolute binding affinity (Equation 1), where absolute binding affinity is defined as the negative logarithm of a binding constant. Binding affinity data in this study is measured as an inhibitory constant (Ki), dissociation constant (Kd), or inhibitory activity (IC50), where these measurements are treated as equivalent labels in our calculations. Late Fusion takes the unweighted arithmetic mean across the individually predicted binding affinity values from the SG-CNN and 3D-CNN models.

The Mid-level Fusion model is also described in [27]. The model is defined by extracting the latent-space feature vector from layer N-3 of an N-layer SG-CNN and layer M-1 of an M-layer 3D-CNN. Each vector is then processed through model-specific dense layers, concatenated with the originally extracted vectors, and passed through two fusion dense layers for a final prediction. In contrast to Late Fusion, Mid-level Fusion is a non-linear combination of the respective models, and its performance has been shown to outperform Late Fusion in some cases.
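As a small illustration of the Late Fusion rule described above (an unweighted mean of the two heads' affinity predictions, in negative-log binding-constant units), here is a hedged sketch; the variable names are placeholders, not the FAST code's API.

```python
import numpy as np

def late_fusion(pk_3dcnn, pk_sgcnn):
    """Unweighted arithmetic mean of the two models' predicted affinities (pK units)."""
    return 0.5 * (np.asarray(pk_3dcnn) + np.asarray(pk_sgcnn))

# A predicted pK of 7.3 corresponds to a binding constant of 10**-7.3 M;
# Ki, Kd, and IC50 labels are treated as interchangeable in this scheme.
fused = late_fusion([6.8, 7.5], [7.0, 8.1])   # -> [6.9, 7.8]
```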
Advancing Fusion

Faced with the imperative task of virtually screening molecules for inhibition of SARS-CoV-2 activity, we sought to apply and improve the fusion deep learning approach. First, we updated the previously trained models with data from the latest version of PDBbind (2019) [62], which provides approximately 4000 more compounds than the 2016 version used to train the original models and, thereby, offers the potential for improved generalization.

Our second step looked to improve the 3D-CNN and SG-CNN models through recent developments in the optimization of hyper-parameters. Deep learning models are highly sensitive to a human's definition of their hyper-parameters: model architecture, loss function, and optimization algorithm, among others [24]. Automated hyper-parameter optimization was first addressed with parallel searches (grid or random), followed by sequential optimization methods such as Bayesian optimization [4,21,53,55]. Hyper-parameter optimization algorithms have improved over time in their ability to scale [54]. Evolutionary Algorithms (EAs) have been shown to further improve the optimization process [5,20,24,34]. Recently, a leading population-based EA, Population-Based Bandits (PB2) [24], was improved by formulating hyper-parameter optimization as a Gaussian Process (GP) bandit optimization of a time-varying function [45].

Fusion modeling in particular is marked by a significant exposure to hyper-parameters. Both the 3D-CNN and SG-CNN models have their own hyper-parameters which enable them to learn the binding affinity prediction problem optimally in isolation. Additionally, the fusion layers require another set of hyper-parameters necessary to find an optimal non-linear combination of the two models. With this in mind, we saw an opportunity for improving fusion significantly by using a PB2 automated optimization algorithm [45] on Lassen, one of the most powerful high-performance computers in the world [39]. The libraries used to define the model and optimization architectures are PyTorch [46], PyTorch Geometric [14], and Ray/Ray[Tune] [43].
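The following is a minimal sketch of how a PB2 search can be set up with Ray Tune, in the spirit of the distributed optimization described above. The trainable `train_fusion`, the hyper-parameter names, and the bounds are illustrative assumptions, not the study's actual search space or code.

```python
from ray import tune
from ray.tune.schedulers.pb2 import PB2

# hypothetical hyper-parameters and bounds, for illustration only
bounds = {"lr": [1e-5, 1e-2], "dropout": [0.0, 0.5], "weight_decay": [1e-6, 1e-3]}

scheduler = PB2(
    time_attr="training_iteration",     # here, one iteration is treated as one epoch
    metric="val_mse",
    mode="min",
    perturbation_interval=100,          # exploit/explore step every 100 epochs
    quantile_fraction=0.5,              # trials below the median clone a top performer
    hyperparam_bounds=bounds,
)

analysis = tune.run(
    train_fusion,                       # assumed trainable that reports {"val_mse": ...}
    scheduler=scheduler,
    num_samples=90,                     # population size (number of parallel trials)
    config={k: tune.uniform(lo, hi) for k, (lo, hi) in bounds.items()},
    resources_per_trial={"cpu": 10, "gpu": 1},
)
print(analysis.get_best_config(metric="val_mse", mode="min"))
```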
Finally, a new formulation of fusion, the Coherent Fusion model, was developed as a potential improvement to the previous Late and Mid-level Fusion models. In both of the existing fusion approaches, a 3D-CNN and SG-CNN are individually optimized to minimize the mean squared error (MSE) between their predictions of binding free energy and ground-truth experimental values. The existing Mid-level approach then combines the independently optimized models to form a stronger predictor, by learning the latent-space strengths of each model. However, in both Late and Mid-level Fusion, the 3D-CNN and SG-CNN weights are unaltered and remain in the state that was optimal for isolated prediction. Given the Late and Mid-level Fusion models' superior performance compared to their isolated components, we hypothesized that fusion might be further improved by coherently backpropagating gradients through both the fusion layers and the separate models. In doing so, the Coherent Fusion model fine-tunes both the 3D-CNN and SG-CNN heads to cooperatively exploit their strengths in a joint optimization. The drawbacks of Coherent Fusion are both an increased hyper-parameter search space and a larger number of trainable parameters. To address this, we developed a parallel, distributed hyper-parameter optimization training architecture. Compared with the models in [27], the combination of these modifications to the concept of Fusion led to significant differences in the hyper-parameters for the 3D-CNN, SG-CNN, and Fusion layers of the model.

OPTIMIZATION AND EVALUATION

3.1 Data

PDBbind-2019 is a curated subset of the larger Protein Data Bank (PDB) [61], which is widely used to tune biophysics- and machine learning-based methods [13,26,27,37,64]. The PDBbind data set is comprised of crystal structures arranged into two groups (general and refined) based on size (where protein-ligand compounds containing a ligand with molecular weight >1000 Daltons (Da) are excluded from the refined set), data quality (where compounds with a measured IC50 but no Ki or Kd measurements are excluded from the refined set), and resolution of the crystal structure (<2.5 Angstroms). From the refined set, a third, core set is extracted using a clustering protocol based on protein-sequence similarity. The core set is compiled to represent a valid test for scoring methods by creating a high-quality subset of compounds sufficiently different from the general and refined sets. As such, we use the core set as a primary means for evaluating the Fusion methods considered and comparing against published literature.

In this study, we employ the quintile sub-sampling method from [27] to formulate training and validation sets from the PDBbind-2019 general and refined groupings. The sub-sampling is done independently on the general and refined sets, and 10% of the examples from each are withdrawn to form the validation set. Quintile sub-sampling guarantees that both the training and validation sets represent the full range of binding affinity values across PDBbind, where simple random sampling holds the risk of training and validating models on different sub-spaces of affinity values [10]. The outcome is a training set of 15,631 complexes and a validation set of 1,731 complexes, and the 290 PDBbind core set complexes are held out for evaluation. Details of pre-processing and feature extraction for the PDBbind data can be found in [27]; here the same tools [25,41,44,47] and sequence of operations are used.
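A hedged sketch of the quintile sub-sampling idea described above: hold out a fixed fraction from each affinity quintile so the training and validation sets both span the full range of labels. Function and variable names are illustrative, not the FAST code's.

```python
import numpy as np

def quintile_split(affinities, val_fraction=0.10, seed=0):
    """Split indices so that train and validation both span all five affinity quintiles."""
    rng = np.random.default_rng(seed)
    affinities = np.asarray(affinities)
    edges = np.quantile(affinities, [0.2, 0.4, 0.6, 0.8])
    bins = np.digitize(affinities, edges)            # quintile label 0..4 per example
    train_idx, val_idx = [], []
    for q in range(5):
        members = np.flatnonzero(bins == q)
        rng.shuffle(members)
        n_val = int(round(val_fraction * len(members)))
        val_idx.extend(members[:n_val])
        train_idx.extend(members[n_val:])
    return np.array(train_idx), np.array(val_idx)
```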
As a means for additional evaluation, we supplement the set of 290 crystal structures from PDBbind with virtually docked representations of the complexes. In practice, docking pose data is used for large-scale virtual screening, but is noisy and error prone since the correct ligand pose is not known until a co-complex is crystallized experimentally. Therefore, a scoring function's performance in the docking space is critical in gauging its robustness to noise and its pragmatic utility.

We leverage the ConveyorLC toolchain [68-70] to produce all docking complex data, as it is used in our high-throughput virtual screening pipeline. ConveyorLC generates docking poses using the Vina scoring function [58], then re-scores up to the 10 best docking poses using MM/GBSA on a subset of the larger virtual screen; only a subset is re-scored because MM/GBSA is orders of magnitude more computationally expensive than docking. This sequence of down-selecting to limit the search space, accompanied by increasingly complex analyses, is frequently used in drug discovery pipelines, and even molecular dynamics (MD) simulations can be used before finalizing candidates for physical experimentation. The opportunity for machine learning models, like Deep Fusion, is to replace or supplement a more costly stage of a drug discovery pipeline with either improved accuracy or speed.

Training Architecture

Our approach to train the various individual and fusion models was executed iteratively. Lassen uses the IBM Spectrum LSF Job Scheduler [22], which necessitates pausing, rescheduling, and resuming training jobs after a maximum run-time. As the hyper-parameter optimization began to converge, the range of hyper-parameter values was adjusted, when possible, to ensure the lower and upper bounds of the search space were not limiting factors in model performance. The full scope of hyper-parameters and ranges evaluated for each model is provided in Table 1, where the ranges are binary (T/F), a list of options, uniformly sampled continuous variables, or not applicable (N/A).
Table 1 (fragment): hyper-parameter options by model.
Optimizer: 3D-CNN: Adam [29]; SG-CNN: Adam [29]; Fusion: Adam [29], AdamW [40], RMSprop [17], Adadelta [9].
Activation function: 3D-CNN: ReLU; SG-CNN: ReLU; Fusion: ReLU, LReLU [65], SELU [31].
(Remaining rows, beginning with batch size, were not recovered.)

We used the Ray/Ray[Tune] [43] Python library extensively as the foundation for running trials of individual hyper-parameter combinations within the context of a PB2 optimization. Together, Ray and PyTorch provide the ability to accelerate the training process by distributing individual trials across multiple nodes/GPUs. Each of Lassen's 792 GPU nodes is made up of 44 3.45 GHz Power9 CPU cores, 4 NVIDIA Volta V100 GPUs each with 16 GB of memory, and 256 GB of main memory. Depending on the complexity of each model, we distributed individual hyper-parameter configurations between 1 and 12 ranks (1 rank = 1 GPU, 10 CPU cores, 64 GB memory) to expedite the training process. Each rank also utilized 24 data workers running in parallel to pre-load future batches. The combination of distributed model training and parallel data loading was central to the feasibility of an experiment of this size and scope. The PB2 hyper-parameter optimization was initialized with a quantile fraction of 50%, a time scale measured in epochs, a perturbation interval of 100 epochs, and an objective function of minimum validation-set MSE loss [24,45]. The procedure begins with a population of initial, randomly sampled hyper-parameter hypotheses. As every trial reaches the perturbation interval, PB2 looks at the model's performance and determines whether it is above or below the quantile fraction. The best-performing trials (above the quantile fraction) continue, while the under-performing trials clone a top-performing configuration (exploiting) and modify it using a parallel GP-bandit optimization (exploring). The training process produces both an optimal model and important information about how the hyper-parameters considered affect performance.

Model Architectures

We draw heavily from the original FAST network architectures in [27], which hold detailed descriptions of pre-processing, feature extraction, voxel grid sizing and atom propagation, which were unaltered. The following focuses on updates to the models and the final optimized hyper-parameter configurations for each component. For brevity, we list only the final optimized hyper-parameter values, where the advantage of PB2 is in its ability to learn a schedule of hyper-parameters to converge in an end-state [45].

Individual Models. The SG-CNN in this work is structurally unaltered from [27], which uses the PotentialNet [13] architecture based on Gated Graph Sequence Neural Networks [36]. The only notable difference is that the sizes of the dense layers were set according to the Non-covalent Gather Width, such that it was sequentially reduced in size by a factor of 1.5 and then 2. A population of 90 SG-CNN trials produced the final model and hyper-parameter configuration given in Table 2.

The 3D-CNN model is slightly modified from the architecture in [27]. The model has dropout above the first two dense layers and 2 additional convolutional layers, the filter sizes begin at 5x5x5 and reduce to 3x3x3, the residual options shown in Figure 1 were fed to the hyper-parameter optimization, and, similar to the SG-CNN, the second dense layer size was determined by the optimization and then sequentially reduced by a factor of 2.
Again, a population of 90 trials was used; the final hyper-parameter values are given in Table 3, where the optimization converged to using the second residual connection shown in Figure 1 and 32 to 64 filters for the 5x5x5 and 3x3x3 convolutional layers, respectively. With this larger 3D-CNN architecture (deeper than in [27]) we found it beneficial to augment the input matrices for the training set by randomly rotating the input data in X, Y, and Z, each with a 10% probability of occurring. This random rotational augmentation was applied only to the voxelized representation of a compound. While the compound is fundamentally the same, altering its presentation to the model helps to prevent overtraining (e.g., learning rotation-dependent features) and to increase the effective size of the training data set.

3.3.2 Late/Mid-level Fusion. The Late Fusion method was implemented the same as in [27]. On the other hand, the optimization led the Mid-level Fusion model to a modified structure. For Mid-level Fusion, every optional layer (dashed lines) in the yellow Fusion block of Figure 1 was turned on. Table 4 gives the final hyper-parameters for Mid-level Fusion, which are the output of a population of 180 individual trials. The other minor differences are that a SELU [31] activation was selected over the previous Leaky-ReLU activation [65], a final batch size of 1 was used, and light dropout was used instead of none.

In developing the Coherent Fusion model, it was unclear whether the same 3D-CNN and SG-CNN hyper-parameter configurations found to be optimal in isolation would also be ideal for their collaborative prediction. As such, we gave the optimization the option to load the models individually trained for prediction or to re-define their structure and train each head from scratch. Using the pre-trained models led to a significant improvement in validation loss. Therefore, Table 5 gives the final hyper-parameters for the best-performing Coherent Fusion model, which loads the weights from the SG-CNN in Table 2 and the 3D-CNN model described in Table 3. The Coherent Fusion model experiment optimized a population of 270 individual trials to produce a best performer. Interestingly, the Coherent Fusion model converged to exclude the model-specific dense layer options the Mid-level Fusion model uses (Figure 1) and used a simpler (4 fusion layers) architecture overall. Additionally, the Coherent Fusion model used a larger batch size of 48 and significantly stronger dropout. Across the board, the Coherent Fusion model preferred a simpler Fusion architecture with significantly stronger regularization. Our intuition for this phenomenon is that the Coherent Fusion model's adjustment of a larger set of learned parameters allows for a simpler architecture, faster convergence, and heavier regularization compared to the Mid-level Fusion model, which serves as preliminary evidence of a stronger predictor.

Evaluation Results

Over 60,000 Lassen GPU hours were used to optimize the various models. Not all model iterations and intervals were run across the same number of nodes, but our training architecture was run at its peak across 66 Lassen nodes capable of over 7,300 TFLOPS, using 2904 CPU cores and 264 GPUs to train in parallel.
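Before turning to the evaluation, here is a minimal PyTorch sketch of the coherent training idea described above: the pretrained 3D-CNN and SG-CNN heads stay trainable, their latent features feed a small fusion MLP, and a single optimizer backpropagates the loss through everything. The `latent(...)` hooks, layer sizes, and dropout value are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

class CoherentFusion(nn.Module):
    """Fusion head whose gradients also flow back into both pretrained models."""

    def __init__(self, cnn3d, sgcnn, dim3d, dimsg, hidden=64, p_drop=0.5):
        super().__init__()
        self.cnn3d, self.sgcnn = cnn3d, sgcnn        # pretrained heads, left trainable
        self.fusion = nn.Sequential(
            nn.Linear(dim3d + dimsg, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 1),                    # predicted binding affinity (pK)
        )

    def forward(self, voxels, graph):
        z3d = self.cnn3d.latent(voxels)              # assumed hook exposing latent features
        zsg = self.sgcnn.latent(graph)
        return self.fusion(torch.cat([z3d, zsg], dim=1)).squeeze(-1)

# One optimizer over *all* parameters is what makes the fusion "coherent":
# model = CoherentFusion(pretrained_3dcnn, pretrained_sgcnn, dim3d=128, dimsg=64)
# opt = torch.optim.Adam(model.parameters(), lr=1e-4)
# loss = nn.functional.mse_loss(model(voxels, graph), affinity)
# loss.backward(); opt.step()
```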
In Table 6, the Coherent Fusion model is shown to outperform the Late and Mid-level Fusion methods on the PDBbind core set of 290 compounds. While the difference between the Late and Coherent Fusion methods is only 0.03 RMSE, the genetic optimization of Coherent Fusion produced several nearly identical models, which consistently performed better than Late Fusion in all evaluated metrics. Importantly, the Coherent Fusion model converged to a model structure using an automated process that exceeds the performance of the hand-crafted fusion architecture used in the original Mid-level Fusion [27]. Additionally, we provide a comparison to two other deep learning approaches (KDeep [26] and Pafnucy [56]) to view Coherent Fusion's performance in a wider scope. While the PDBbind core set is a standard benchmark for machine learning methods [26,27,56], high-throughput virtual screening overwhelmingly relies on docked poses of compounds for drug discovery. To follow suit, we leveraged ConveyorLC [70] to compare Coherent Fusion against physics-based scoring functions in the docking space. The noisier docking data also provides insight into whether the machine learning model was over-trained and how robust it is to noise when scoring more realistic data.

197 compounds from the PDBbind core set were successfully evaluated by ConveyorLC with the physics-based Autodock Vina algorithm [58] and MM/GBSA methods for comparison with Coherent Fusion. Each compound was then filtered by RMSD, where each of the 197 compounds was checked for a pose with RMSD < 1 Å, such that a correct pose was found that was sufficiently similar to the crystal structure from PDBbind. Using the binding affinity values from PDBbind as ground truth for the docking poses, Vina achieved a Pearson correlation coefficient of .579, MM/GBSA scored .591, and Coherent Fusion reached .745. To further examine the performance difference between the three methods, binding affinity prediction can be cast as a binary classification problem [27]. Figure 2 shows the results on a subset of the 197 core set compounds. Positive and negative classes were created from 57 stronger binders and 71 weaker binders, respectively. Because the set of strong vs. weak docked poses is small (128 total), we elected to compare the different methods using Precision-Recall curves and F1-scores, which give a much more direct picture of how each model is performing than a ROC curve provides.

Nominally small scoring improvements, such as Coherent vs. Late Fusion (0.02 MAE) or Coherent Fusion vs. MM/GBSA (0.06 F1-score), have an amplified value in large-scale screening. For example, consider a hypothetical virtual screen evaluating 1 million compounds to subsequently purchase 100 compounds (0.01%) for experimentation. If the results from classifying the PDBbind core set docking complexes translated to the top 100 of 1 million candidates with a factor of 10 decrease in precision, a virtual screen using MM/GBSA would produce 7 true positives and a Coherent Fusion-based screen would produce 9 true positives in less than 1/100th of the computational time (Table 7). While significant caveats apply, any additional true positive binders are valuable, as in practice inhibitory compounds are hard to come by and must meet additional pharmacokinetic and safety requirements. With this analysis, we considered the Coherent Fusion model valuable and validated for use in screening for SARS-CoV-2 inhibitors.
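A small sketch of the strong-vs-weak binder evaluation used above: a correlation on predicted pK values, then a precision/recall and F1 comparison after dropping the ambiguous middle of the affinity range. The pK cutoffs of 8 and 6 follow the Figure 2 caption; everything else (names, signatures) is illustrative.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import precision_recall_curve, f1_score

def evaluate_scores(pred_pk, true_pk, strong_cut=8.0, weak_cut=6.0):
    pred_pk, true_pk = np.asarray(pred_pk, float), np.asarray(true_pk, float)
    r, _ = pearsonr(pred_pk, true_pk)                       # correlation on all compounds
    keep = (true_pk > strong_cut) | (true_pk < weak_cut)    # drop ambiguous binders
    y_true = (true_pk[keep] > strong_cut).astype(int)       # 1 = strong, 0 = weak
    precision, recall, _ = precision_recall_curve(y_true, pred_pk[keep])
    f1 = f1_score(y_true, (pred_pk[keep] > strong_cut).astype(int))
    return r, precision, recall, f1
```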
HIGH-THROUGHPUT SCREENING

Our SARS-CoV-2 effort screened over 500 million compounds against each of the 4 M pro and spike targets, drawing compounds from four public virtual compound libraries [33]. The ZINC database [57] was used to create a set of "world-approved 2018" drugs from a list of FDA-approved and "world-not-FDA" approved drugs. An additional 1.5 million compounds were selected from ChEMBL [15]; 18 million compounds were drawn from eMolecules [11], and the remaining compounds came from Enamine's list of drug-like compounds estimated to be synthetically feasible [12].

SMILES strings [63] were downloaded from the eMolecules and Enamine databases, and 2D SDF structures were downloaded from the ZINC and ChEMBL libraries. Both forms of input were imported into the MOE program [18] to remove salts and metal-containing ligands; the protonation states of compounds were then set to the dominant form at pH 7. 3D structures of the compounds were then generated and energetically minimized. The selected MOE descriptors were calculated and the final structures were exported from MOE as SDF files. These structures were then processed by the ligand preparation in the AMBER tool-chain utilizing antechamber and the GAFF force field [51]. The 3D SDF files were also converted to PDBQT format for the docking calculations by the Open Babel toolbox [44]. In sum, over 5 billion docking poses were generated and evaluated.

Physics-based Screening Pipeline

As with our comparison between Coherent Fusion and physics-based binding affinity scoring functions, we leveraged the existing ConveyorLC [70] tool chain to search for candidate inhibitors of the two binding sites from spike and the active sites of two SARS-CoV-2 protease crystal structures. ConveyorLC is made up of four parallelized programs, each designed to handle a specific task in the molecular docking and re-scoring processes. ConveyorLC uses CDT1Receptor to perform protein preparation and CDT2Ligand for ligand preparation, CDT3Docking performs the molecular docking, and finally CDT4mmgbsa handles MM/GBSA re-scoring. Further details on the exact execution, pre-processing, and parameters used in the docking simulations can be found in previous studies [33,70].

The Vina scoring used in CDT3Docking operates at approximately one minute per compound per CPU core. On a single Lassen node with 40 CPU cores (each core has 4 hardware threads) using 8 Monte Carlo simulations per compound, Vina is able to dock ≈10 docking poses per second. In contrast, a single-point MM/GBSA score takes 10 minutes per docking pose per CPU core. Because of its computational cost, MM/GBSA is often used as a re-scoring function to refine an already filtered set of compounds [64]. Even on a Lassen node, MM/GBSA is only capable of re-scoring ≈0.067 poses per second.
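For illustration only, a minimal sketch of the final conversion step mentioned above (a 3D structure written out as PDBQT for docking) using Open Babel's Python bindings. The actual pipeline performed salt removal, protonation at pH 7, minimization, and descriptor calculation in MOE and antechamber before this point, so this is not a substitute for that preparation; the example SMILES and file name are hypothetical.

```python
from openbabel import pybel

def smiles_to_pdbqt(smiles, out_path):
    """Rough SMILES -> 3D -> PDBQT conversion for docking input (illustrative)."""
    mol = pybel.readstring("smi", smiles)
    mol.addh()          # protonation in the actual pipeline was handled in MOE at pH 7
    mol.make3D()        # embed and force-field minimize a single 3D conformer
    mol.write("pdbqt", out_path, overwrite=True)

smiles_to_pdbqt("CC(=O)Oc1ccccc1C(=O)O", "example_ligand.pdbqt")
```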
Distributed Fusion Predictions

In order to screen millions of compounds against SARS-CoV-2, we developed a scalable architecture around the Coherent Fusion model for rapid evaluation (Figure 3). The Coherent Fusion model occupies 1.5 GB of GPU memory, which fits on each 16 GB NVIDIA Volta V100 GPU. The remainder of the GPU memory is used to simultaneously load 56 individual docked poses into a batch alongside each model. The 4 model instances on each node were given 12 parallel data loaders to accelerate inference. Each model in every job is assigned a subset of compounds to evaluate, and its data loaders complete all file reading and pre-processing operations to prepare batches of data in an individual node's 256 GB of memory, which are subsequently loaded onto the GPU. After evaluating a batch, the screening code unloads the compound, target, and pose identifiers along with the model's predicted binding affinity. Once a job completes evaluation, the identifiers and predictions are gathered across MPI ranks and distributed across the individual ranks to be written in parallel to HDF5 files.

In the context of Lassen's LSF Job Scheduler [22], we formulated Fusion evaluation jobs as many individual 4-node processes, each assigned to evaluate an independent set of 2 million poses, which is approximately 200,000 compounds. This format was also a response to our encountering a wide range of errors (bad metadata, node failure, broken pipe errors, etc.), which led to our pipeline being tailored for fault tolerance. With this architecture, when a job fails it has minimal impact on overall throughput (another job takes its place), the reason for failure is easier to pinpoint (log files are smaller and easier to parse), and only a small set of compounds are affected or need to be rescheduled. To create each 4-node job, we relied heavily on Horovod [52], which is based on MPI concepts and uses MPI for cross-node communication. With 4 GPUs on each node, each job is a 16-rank distributed process. Each rank runs a Python script and is given a specific GPU, CPU cores, and memory allocation to execute the evaluation. At the beginning of a job, we simply divide the set of compounds assigned to the job by the number of ranks and assign each rank the subset with its index. When evaluation completes, the ranks use allgather to compile the results and subsequently write out HDF5 files. File output was identified as a bottleneck to the evaluation process early on, which was mitigated by assigning each rank compounds to be written in the same files and directories. The output file format was designed to mirror the output format of ConveyorLC's CDT3Docking process [70] for interpretation with existing tools and further evaluation of pharmacokinetic and safety properties [33].

Table 7 includes a breakdown of how time is spent in an individual job, where the initial 20 minutes are consumed by loading HPC modules and an Anaconda [23] environment, initializing the Horovod ranks, loading an instance of the Fusion model onto each GPU, and pre-loading the initial batches of data for evaluation. The bulk of a job is, as expected, spent in an evaluation period, where batches are loaded, evaluated, and predictions are stored in parallel across all ranks. Finally, once the ranks gather together, the file-writing process begins and completes about 6.5 minutes later, yielding an average total run-time of approximately 5.1 hours.
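A hedged sketch of the per-rank evaluation loop described above: Horovod supplies the rank layout, each rank scores a static slice of the job's poses, and results are written to HDF5. The loader function, identifier scheme, and per-rank output files are simplifications of the actual allgather-then-write scheme and are not the project's code.

```python
import h5py
import numpy as np
import torch
import horovod.torch as hvd

def score_shard(model, pose_paths, make_loader, out_dir):
    hvd.init()
    torch.cuda.set_device(hvd.local_rank())          # 4 GPUs per node -> local ranks 0..3
    shard = pose_paths[hvd.rank()::hvd.size()]       # static round-robin partition of poses
    ids, preds = [], []
    model.cuda().eval()
    with torch.no_grad():
        for batch_ids, batch in make_loader(shard, batch_size=56, num_workers=12):
            out = model(batch.cuda()).cpu().numpy()  # predicted binding affinities
            ids.extend(batch_ids)
            preds.extend(out.tolist())
    # simplified: one HDF5 file per rank instead of an MPI allgather plus parallel write
    with h5py.File(f"{out_dir}/scores_rank{hvd.rank():03d}.h5", "w") as f:
        f.create_dataset("pose_index", data=np.asarray(ids, dtype=np.int64))
        f.create_dataset("pred_affinity", data=np.asarray(preds, dtype=np.float32))
```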
With jobs designed for scalability, we regularly ran more than 10 at a time during the SARS-CoV-2 screening effort. However, at set times the majority of Lassen nodes were made available to accelerate evaluation, the peak of which was an allotment of 500 nodes for Fusion screening. The impact of a large number of nodes is clearly seen in Table 7, where throughput was increased more than 100 times. Ultimately, during several hours of evaluation at scale, the Coherent Fusion model used more than 14,010 TFLOPS of Lassen's compute power to screen nearly 5 million compounds.

Single Job Scalability & Bottlenecks

While fault tolerance encouraged the use of many individual jobs, each with a small number of nodes, the optimal scale of an individual job was not immediately obvious. Several factors contributed to selecting 4 nodes as optimal, including: the 12-hour job run-time limit on Lassen, the startup overhead of a job, the benefit of additional nodes, and the stability of the Python libraries used.

The first parameter explored in the development of the Fusion screening capability was the number of poses to evaluate per job. We found it possible to complete up to 5 million scored poses on 4 nodes under the 12-hour Lassen time limit. However, the prevalence of unpredictable errors in the docking data, featurization steps, and inter-node/rank communication led us to instead assign 2 million poses to each job. While 2 million poses is less efficient (i.e., startup overhead is a larger percentage of each job's run-time), in practice this decision led to less wasted computational time, as the Fusion scoring code does not write results until it finishes scoring all poses. In future work, efficiency will be improved by creating a separate, parallel process per rank to write results as they are computed, but due to the urgent need for SARS-CoV-2 predictions we elected to mitigate unpredictable errors by narrowing the size of each Fusion job to balance fault tolerance and computational efficiency.

The next two parameters explored were the batch size per rank and the number of nodes per job. The batch size per rank affects the number of times data is transferred from CPU to GPU and predictions from GPU to CPU. For the M pro and spike target sites, we found up to 56 poses (each with voxelized and connectivity representations) could consistently fit on the NVIDIA V100 GPU. Figure 4 displays the effect three different batch sizes had on a single job. In practice, the performance difference between each batch size was small, with a batch size of 56 yielding a ≈10 minute run-time advantage over batch size 12. The Fusion scoring code under-utilized the Lassen GPUs, which led to consistent and relatively small offsets in run-time by batch size. This is due to the computational cost of pre-processing (e.g., file reading and data featurization), which is the most significant bottleneck in the evaluation process. Despite using 12 parallel data loaders per rank, the GPU is intermittently waiting to evaluate more poses. In future work, further optimization of the parallel data loaders will increase GPU utilization and improve throughput, which increases the value of Fusion for large-scale virtual screening.
To determine the optimal number of nodes per job and the performance benefit of additional nodes, we evaluated the run-times of Fusion jobs using 1, 2, 4, and 8 nodes per job. Figure 4 also displays the performance for each number of nodes, where the same set of 10 jobs was run for every point evaluated. The variance between sets is also plotted surrounding each line, but was found to be small (< 5 minutes) and as a result is not clearly visible.

A significant factor in choosing the number of nodes for each job was the stability of the Python libraries used by the Fusion model. Inter-node and inter-rank communication errors were increasingly prevalent as the number of nodes/ranks in a job increased, and the percentage of failed jobs for each number of nodes was ≈2% for 1 and 2 nodes, ≈3% for 4 nodes, and ≈20% when using 8 nodes. The instability was caused by the specific combination of Horovod [52] and PyTorch [46] used on the POWER9 architecture, which has since been updated. However, a 20% job failure rate eliminated 8 nodes per job as a candidate configuration.

The results of our scalability experiments led to a final selection of 2 million poses per job using 4 nodes. This was the result of several different factors and adjustments, including the observation that when using 500 Lassen nodes at the same time, the LSF scheduler encountered problems simultaneously running 250 2-node jobs, which was solved by using 125 4-node Fusion jobs instead.

SARS-COV-2 RESULTS

The high-throughput virtual drug screening pipeline described in Section 4 produced several computational results for each compound screened. The Fusion model's binding-affinity prediction was one of the three energy calculations (Vina, MM/GBSA, Fusion), which were used as a component of a hand-tailored cost function designed to filter which compounds to purchase for experimental evaluation and which were less likely to be successful. Full details of the ranking and reasoning may be found in [33], and computational predictions are made available at https://url-excluded [38]. Virtual screening output on the computational side fed directly into an experimental process to physically interrogate candidate molecules.

Experimental Validation

Experimental testing of the candidate binders which were screened and purchased to target M pro used a fluorescence resonance energy transfer (FRET) based activity assay or an SDS-PAGE gel protein cleavage assay. After assay optimization, additional screens were run in order to down-select compounds. For example, compounds from the ZINC database [57] were down-selected to an additional testing of 19 compounds, which yielded 4 candidates inhibiting the activity of M pro at 100 micro-Molar (µM) concentrations. The four identified compounds include: candesartan cilexetil, FAD disodium, tigecycline, and tetracycline [33].

Figure 5: Coherent Fusion predicted binding affinity vs. experimental percentage of inhibition at 100µM for 130 compounds against M pro protease1 (blue) and 81 compounds against M pro protease2 (orange). The spike assays were evaluated at 10µM and include 151 compounds against spike1 (green) and 113 compounds against spike2 (red). Compounds which exhibited ≤1% inhibition (no experimental binding activity) are excluded.
On the other hand, compounds predicted to inhibit the SARS-CoV-2 spike protein were screened by both a pseudo-typed virus assay and a biolayer interferometry (BLI) competitive assay. Here the candidate compounds are evaluated for their ability to inhibit ACE2-spike binding and, in parallel, the spike-binding candidates were screened using a cell-based infection assay at 10µM. Further details of the experimental design, assays, results, and discussion can be found in [33].

Connecting Predictions and Experimental Results

Given the ground-truth experimental values for physically tested compounds, we are able to retrospectively evaluate the accuracy of each computational approach which generated a prediction. Some of the obvious questions to ask are: "Which method was most correlated with the experimental results?", "Are the most accurate scoring functions the same for all four M pro and spike targets?", and "Which of the scoring methods is most accurate for the strongest experimental inhibitors?". Each experimentally prosecuted compound can be traced back to its virtually docked poses for either the two M pro or the two spike binding targets. This means each scoring method may have predicted binding affinity values for up to 40 poses per compound (10 poses maximum per binding site). While in the computational domain we have predicted binding affinity values, the output from the M pro and spike assays is a percentage of inhibition normalized between 0 and 100%. These values are produced at a given concentration of the candidate drug (100µM for M pro targets and 10µM for spike targets), which gives context to the overall strength of a binder.

For each scoring method (Vina, MM/GBSA, Coherent Fusion), the results per compound were aggregated and represented by the strongest prediction across all poses for each binding site (maximum for Coherent Fusion, minimum for Vina and MM/GBSA). Because of the computational cost of MM/GBSA, we instead use MM/GBSA values predicted by the ATOM Modeling PipeLine (AMPL), which have been shown to be highly correlated with actual MM/GBSA calculations and were trained to predict MM/GBSA scores on each specific target [42].
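A short sketch of the per-compound aggregation described above, collapsing up to 10 poses per binding site to one value per scoring method; the column names are assumptions about how the docking output might be tabulated, not the study's actual schema.

```python
import pandas as pd

def best_score_per_compound(poses: pd.DataFrame) -> pd.DataFrame:
    """One row per (compound, site): strongest prediction across that compound's poses."""
    g = poses.groupby(["compound", "site"])
    return pd.DataFrame({
        "fusion_pk": g["fusion_pk"].max(),   # higher predicted pK = stronger predicted binder
        "vina":      g["vina"].min(),        # more negative docking score = stronger
        "mmgbsa":    g["mmgbsa"].min(),
    }).reset_index()
```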
Table 8 shows the correlations for all methods where the absolute value of the Vina and MM/GBSA scores are used.AMPL MM/GBSA gives the best correlation for the protease1 target, Coherent Fusion for the protease2 and spike1 targets, and Vina scores the spike2 binding site best.However, across the board, it is clear that even when limiting the analysis to >1% inhibition, the correlations for each method remain low and the interpretation of near-zero correlation coefficients is unavailing.While removing all the non-inhibitors gives a glimpse into which methods are most correlated with the SARS-CoV-2 binders, it also removes the context of those predictions.That is, the overall prediction strength for each method is somewhat obscured as the range of each methodâĂŹs prediction values is limited to its minimum and maximum prediction in the smaller set of SARS-CoV-2 binders.With this in mind, we sought to answer the question of which scoring methods were most accurate for the strongest experimental inhibitors by including non-binding compounds and casting the prediction problem as a binary classification of compounds with >33% inhibition (positive class) and compounds with ≤ 33% inhibition (negative class).A threshold of 33% was chosen to avoid severe class imbalances caused by higher thresholds.This problem formulation is similar to that in Figure 2 where we set a threshold to separate stronger binders from weaker binders.This approach is applicable in practice, as virtual screens eventually down select to a small set of candidates for final analysis, purchase, and experimental testing. Figure 6 displays Precision/Recall Curves and 1 -scores using a threshold of 33% to separate the positive and negative binding examples for each target.The y-axis of each P/R curve is limited to 0.35 to observe different model behaviors.Although the curves for each target appear different from the P/R plots in Figure 2, they provide information about how each model performs in practice on a noisier experimental screen, where the cutoff separating active from inactive molecules is less clear. While the P/R Curves in Figure 6 give low 1 -scores and precisions, two important observations arise.First, for each of the four binding sites, we computed CohenâĂŹs Kappa statistic () to compare each model with a random classifier [7,32].Equation 2gives the equation for computing ; where is the observed accuracy and is the expected accuracy. 
A random classifier achieves κ = 0 by guessing according to the frequency of the positive and negative classes. Every model on each target was measured to achieve κ > 0 (with the exception of Vina on spike1), indicating the models are consistently better than random across the targets. In the context of P/R curves, a random classifier's performance is expressed as a horizontal line (Figure 6) with constant expected precision equal to the proportion of positive examples in the data set. In practice, the three binding affinity prediction models served as input to a hand-crafted cost function [33] to select compounds for experimentation. The outcome produced 9 distinct compounds inhibiting 100% of M pro activity and 108 total compounds with ≥33% inhibition. Using the 33% inhibition threshold, 108 of 1042 (10.4%) experimentally tested compounds inhibit activity, indicating the models have significant predictive power. That is, while 500+ million candidates were evaluated, only a fraction (2.1 × 10^-6 %) of the best candidates were tested experimentally, yet the models successfully yielded a 10.4% hit rate.

For the M pro protease1 binding site, AMPL MM/GBSA is the best predictor of experimental binding. Additionally, between the two M pro binding sites, protease1 and protease2, the AMPL MM/GBSA predictions on protease1 conjointly achieve a higher F1-score and Pearson correlation coefficient than all other methods. In fact, for each of the four targets, not only do the best >33% F1-scores match with the best >1% Pearson/Spearman correlations, but the relative differences across target binding sites are also in agreement.

Coherent Fusion reaches the maximal F1-scores and correlation coefficients for the M pro protease2 and spike1 binding sites. Across all model / binding site combinations, Coherent Fusion's performance on the spike1 protein is the top performer, though followed closely by AMPL MM/GBSA on the same target site. This is unexpected, as the protease targets are much larger and nominally thought to provide a better opportunity for drug-like compounds to bind. However, target-specific strengths are not unexpected [64], especially where the M pro sites are large protein pockets and the spike targets are much smaller. Interestingly, the maximal >1% correlations and >33% F1-scores across all targets favor the spike proteins. The differing concentrations between the M pro and spike experiments are important to note, as M pro binders were evaluated at 100µM, which is a higher drug concentration and, therefore, allows weaker binders to exhibit higher observed percentages of inhibition.

CONCLUSION AND FUTURE WORK

Deep Fusion was improved by coherent backpropagation and distributed, genetic hyper-parameter optimization. The optimization automatically produced a version of the Coherent Fusion model which was shown to exceed the performance of hand-crafted Fusion model variants and alternative machine learning methods on crystal structures from the PDBbind core set benchmark. In evaluating noisier docked poses of the same test set, Coherent Fusion also achieved an improvement in correlation and F1-score relative to the physics-based Vina and MM/GBSA methods.
We utilized Coherent Fusion to screen over 500 million compounds across four SARS-CoV-2 binding sites. This was achieved by designing a high-throughput distributed architecture for fault-tolerant scoring at scale. Using parallel data loaders and Lassen's 4 GPUs per node, the Coherent Fusion model is currently capable of screening ≈30 poses per second per node. Coherent Fusion was used as input to a weighted cost function across binding affinity and other scoring models in order to sub-select compounds for experimentation.

Among the nearly 1000 compounds tested experimentally, several inhibited activity of M pro and spike. In analyzing which binding affinity methods were most correlated with the experimental results, we found the optimal performer to vary by target. However, two of the four targeted binding sites were best predicted by Coherent Fusion, both in terms of >1% inhibiting correlation with the experimental results and >33% inhibiting binary classification. Although the overall predictive power of all of the scoring functions is limited, any opportunity for computational enrichment of strong binders is needed to alleviate the otherwise prohibitive time and cost of the physical experimental screens.

In future work, we aim to use our baseline Coherent Fusion model from this work to fine-tune and predict for specific protein target types and binding sites. We believe introducing target specificity to the models and thereby reducing the scope of the binding affinity prediction problem will increase the value of relative differences in the model's binding affinity predictions. Our high-throughput architecture can also be accelerated, as the GPUs on each Lassen node were observed to be under-utilized. Increased numbers of parallel data loaders were observed to decrease the overall stability of each individual evaluation job, indicating some refactoring is necessary.

Figure 1: Fusion model architecture with voxelized and spatial graph inputs of COVID-19 M pro (PDB: 6LU7, denoted protease1, green) in complex with an N3 inhibitor (red) [51]. Components of the 3D-CNN (orange), SG-CNN (blue), and Fusion layers (yellow) which were given as options to the hyper-parameter optimization are shown with dashed lines/borders.

Figure 2: Binary classification of 128 docked complexes from the PDBbind core set, where the positive, "stronger" binder class represents 57 compounds with experimental pKi or pKd > 8 and the negative, "weaker" class consists of 71 compounds with pKi or pKd < 6.

Figure 3: Structure of a single Fusion scoring job (top). A job begins with 2 million poses to score, divides them per node, then each node assigns poses to its ranks and scores. Individual ranks (bottom) take their assigned poses, begin loading batches into memory and feeding them to the GPU for inference. Finally, identifiers and predictions are collected and written in parallel.

Figure 6: Precision/Recall curves and F1-scores by SARS-CoV-2 protein target at 33% experimental inhibition. M pro protease1 (far left) shows results for 30 positive and 311 negative binders. M pro protease2 (middle left) includes 20 positive and 196 negative binders. The spike1 site (middle right) includes 32 positive and 209 negative binders. Finally, spike2 (far right) includes 26 positive and 218 negative binders. The black horizontal dashed line indicates the performance of a random classifier.
Figure 7: Four compounds from eMolecules [11] in complex with the M pro/protease1 (a, b) and spike/spike1 targets (c, d), where residues His41 and Cys145 in M pro and residue 501 in the spike RBD protein are green. Panels (a) and (b) show Compound IDs 76051337 and 24424612, respectively, which both had 100% inhibition at 100µM in the M pro assay. Panels (c) and (d) show Compound IDs 18594404 and 313102183, respectively, which reached 100% and 98% inhibition at 10µM in the spike assay.

Table 2: Final hyper-parameters for the SG-CNN
Table 3: Final hyper-parameters for the 3D-CNN
Table 4: Final hyper-parameters for Mid-level Fusion
Table 5: Final hyper-parameters for Coherent Fusion
Table 6: Performance of Fusion models on the PDBbind core set crystal structures
Table 7: Throughput for Fusion prediction single job (2 million poses) and peak performance (125 parallel jobs)
Table 8: Correlation of predicted binding and percent inhibition on compounds with >1% inhibition
Vitamin E levels in patients with controlled and uncontrolled type 2 diabetes mellitus

INTRODUCTION

Diabetes is an epidemic disease around the world. The frequency of diabetes is rising in most populations, especially in developing countries. 1 At the moment there are 7.5 million diabetic patients in Iran. 2 The global epidemic of diabetes mellitus has increased severely in the last two decades, escalating from 30 million cases in 1985 to 285 million in 2010. If this trend continues, there will be 438 million diabetic patients in 2030 according to the International Diabetes Federation's projections. The prevalence of both type 1 and type 2 diabetes is increasing all around the world, but type 2 diabetes is spreading much faster than type 1. The probable reasons for this are rising obesity, decreased physical activity (due to industrialization), and the rising age of communities. In 2010, almost 1.6 million people (over 20 years of age) were newly diagnosed with diabetes. The risk of diabetes mellitus rises with age. In 2010, the prevalence of diabetes mellitus in the United States was estimated at 0.2% in people under 20 years and 11.3% in those over 20 years old. In people over 65, the prevalence of diabetes mellitus was 26.9%. The prevalence of this disease is almost the same across age ranges in men and women (11.8% in men over 20 and 10.8% in women over 20). Global estimates predict that in 2030 most diabetics will be in the range of 45 to 64 years. 3 Studies of the control of diabetes and its complications have indicated that the complications of diabetes can be delayed or reduced with strict glycemic control. 4 In the last decade there has been some consideration of oxidative stress and its role in causing complications in diabetic patients. 5 Many studies have indicated high levels of oxidative stress markers. 6 Increases in the levels of insulin, free fatty acids and glucose can increase ROS and oxidative stress and can also activate stress-sensitive pathways. 7 In recent years it has become known that the most important factor producing free radicals in diabetes is hyperglycemia, which increases the production of superoxide radicals in mitochondria. 8 It has been reported that the level of alpha-tocopherol in the plasma of type 2 diabetic patients is lower than in healthy people. 9 Alpha-tocopherol supplementation can delay the chronic complications of diabetes. 10 Alpha-tocopherol also has beneficial effects on metabolic control in diabetes through its antioxidant effects on lipid oxidation and protein glycosylation and its effect on insulin sensitivity. 11 The results of a 4-year cohort study showed that for each 1 micromol/l decrease in plasma alpha-tocopherol levels, the risk of diabetes rises by about 22%. 12
Another study showed that vitamin E consumption in diabetic patients can significantly decrease the levels of microalbuminuria and thromboxane A2 in patients with microalbuminuria. 13 Another study examined the effects of vitamins E and C on the lipid profile in patients with diabetes mellitus; its results showed that consumption of vitamins E and C can significantly decrease hypertension and improve insulin activity and the lipid profile. 14 Studies have shown that vitamin E has an important effect on decreasing blood pressure and blood glucose in type 2 diabetic patients. 15 Of course, other studies report that vitamin E cannot improve patients' glucose levels. 16 In a study by Onyesom et al., vitamin A (14.38 µg/l), vitamin C (0.66 mg/dl) and vitamin E (0.51 mg/dl) levels were lower in recently diagnosed diabetic patients compared to the control (healthy) group (p<0.05); diabetic patients had a 30% shortage of vitamin A, a 36% shortage of vitamin C and a 12% shortage of vitamin E. 17 Considering previous studies and the effect of vitamin E supplements in diabetic patients, this study was conducted to measure vitamin E in patients with controlled and uncontrolled type 2 diabetes mellitus and to examine its link with diabetes control in diabetic patients.

METHODS

The present study is a descriptive analytic cross-sectional study conducted in the diabetes clinic of Imam Khomeini hospital in Ardabil (February 2014 to May 2015). The statistical population of this study was diabetic patients who were referred to the diabetes clinic of Imam Khomeini hospital for chemotherapy (census sampling was performed until the sample size was reached). The sample size was determined to be 186 individuals by the following formula, with a test power of 80%, confidence level of 95%, error of 5% and diabetes prevalence of 6%: 18

N = (Z_alpha/2 + Z_beta)^2 × P × Q / d^2

Z_alpha/2 = 1.96, Z_beta = 0.85, P = 0.06, Q = 0.94, d = 0.05, Power = 80%

Exclusion criteria in this study were unwillingness of patients to cooperate, having underlying diseases, being a smoker and consuming nutritional supplements. In this study, 186 patients with type 2 diabetes who were receiving chemotherapy were selected. After blood sampling of the patients, the levels of HbA1C (measured by the HPLC method), TG, cholesterol, HDL, LDL, Cr and vitamin E (measured by the ELISA method) were determined, and according to the patients' HbA1C they were divided into two groups: less than 7 (controlled group) and more than 7 (uncontrolled group). Food consumption of the patients was recorded 3 times in a week (the 2nd and 3rd times served as recalls) to check food frequency, and the amount of daily consumed food was entered into the N4 food analysis program. Amounts of calories, protein, etc. in the consumed food were calculated. The calculated numbers were divided by 3 and the results showed the amount of food consumed in one day (the average of 3 days in the week); these results were entered in checklists. In the checklists, questions about age, gender, weight, height, duration of diabetes, urban or rural residence and diet were asked (based on a standard questionnaire), and the patients' information together with the measured vitamin E levels was entered in the checklist and eventually the data were analyzed. After the data were collected, they were coded and entered into SPSS V16 statistical software. Afterwards, the data were analyzed with descriptive statistical methods and with analytical statistical methods such as the t-test and Pearson's test.
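As a quick check of the sample-size formula above, the calculation can be reproduced in a few lines; the values are the ones stated in the text.

```python
from math import ceil

z_alpha, z_beta = 1.96, 0.85     # 95% confidence (two-sided) and 80% power
p, d = 0.06, 0.05                # assumed diabetes prevalence and margin of error
q = 1 - p

n = (z_alpha + z_beta) ** 2 * p * q / d ** 2
print(ceil(n))                   # ~179; the study enrolled 186 patients
```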
In all of the tests the confidence level was 0.95 and p-values below 0.05 were considered significant. In accordance with the principles of medical ethics, patient information was kept confidential and results were reported without names. Patients' unwillingness to cooperate in the tests, the high cost of the tests and overdosing of vitamin supplements by patients were the limitations of the study. RESULTS In the present study 186 diabetic patients were examined. Of these, 129 (69.3%) were women and the rest were men, and the mean patient age was 53.33±11.2 years. Of the 186 patients, 158 (84.9%) were urban residents and 28 (15.1%) rural, and 171 patients (91.9%) were from Ardabil. Most of the patients (114 individuals, 61.3%) used only insulin to control their blood sugar. Ninety patients (48.4%) had a history of using lipid-lowering drugs and the rest (51.6%) did not. A positive family history of type 2 diabetes was present in 144 patients (77.4%), 2 patients (1%) had a history of type 1 diabetes, and the mean duration of diabetes was 8.4±6.6 years. The patients' mean BMI was 28.32±4.1 kg/m². The mean blood sugar level was 217.5±100.5 mg/dl (minimum 60 mg/dl, maximum 846 mg/dl). The mean HbA1c was 8.95±2.2 (minimum 5, maximum 18.1). Cholesterol, triglyceride, HDL and LDL levels are given in Table 1 (see also Table 2). The patients' intake of calories, protein and fat is given in Table 3. The mean vitamin E content of the patients' diet was 4.53±4.24 nmol/l, and the mean weight of consumed food was 1277.4±337.01 g. Patients were divided into two groups on the basis of HbA1c: 7 or less (controlled) and more than 7 (uncontrolled); 97 patients (52.2%) were controlled and the rest were uncontrolled. There was no significant difference in vitamin E or cholesterol levels between controlled and uncontrolled diabetic patients. The mean triglyceride level was higher in the uncontrolled group, and this difference was statistically significant (p=0.046). There was no significant difference in LDL levels between controlled and uncontrolled diabetic patients (p=0.538); HDL levels were higher in uncontrolled than in controlled patients, but this difference was not statistically significant. Mean dietary glucose (g/kg) was higher in the uncontrolled group, but again not significantly; mean dietary protein was higher in controlled patients, and mean dietary fat and food weight were lower in uncontrolled patients, but none of these differences was statistically significant. Since the p-value obtained (0.66) was greater than 0.05 at the 0.95 confidence level, it can be said that there is no association between serum vitamin E and dietary vitamin E intake. DISCUSSION Vitamin E is one of the antioxidant vitamins, and it is depleted as it eliminates free radicals.
An increase in plasma glucose and glycemic index can increase oxygen free radicals. It is therefore likely that vitamin E levels in diabetic patients are lower than in healthy individuals, and lower in individuals with poor blood sugar control than in well-controlled diabetic patients. 19 In the present study no direct association was observed between cholesterol (p=0.284), HDL (p=0.362) or LDL (p=0.538) levels and blood sugar control, but triglyceride levels in uncontrolled diabetic patients were significantly higher than in individuals with good blood sugar control (p=0.046). In the study of Taheri et al, serum total cholesterol, triglycerides and LDL in diabetic patients were higher than in the control group, while serum HDL was lower; 20 the difference was most pronounced for triglycerides. In the study of Esteghamati et al, 21 blood glucose, glycosylated hemoglobin, total cholesterol, triglycerides and LDL-C were significantly increased in type 2 diabetic patients while HDL-C was decreased. In the study of Ahmad et al, LDL and VLDL levels in diabetic patients were higher than in healthy individuals (p<0.001), and HDL levels in diabetic patients were lower than in the control group (p<0.001). In the study of Sawant et al, 22 FBG and HbA1c levels in diabetic patients without neuropathy were lower than in diabetic patients with neuropathy (p<0.05). In the study of Gazis et al, 23 HbA1c (6.9 vs. 4.8, p<0.01) and systolic blood pressure (145 vs. 130, p<0.01) were significantly different in diabetic patients compared with healthy individuals, while diastolic blood pressure was not. In the present study the mean vitamin E level was 1488.6±692.2 nmol/l (minimum 114.4 nmol/l, maximum 6235 nmol/l), and the data analysis indicated no significant difference in vitamin E levels between controlled and uncontrolled diabetic patients (p=0.214). In the study of Esteghamati et al, plasma vitamin E and C levels in diabetic patients did not differ from the control group, although the vitamin E to cholesterol ratio was decreased (p<0.05). 21 In the study of Srivatsan et al, the vitamin E level was 1.08 mg/dl in the control group, 1.19 mg/dl in diabetic patients with complications and 1.17 mg/dl in diabetic patients without complications, with no significant difference between the studied groups (p=0.64). 24 In the study of Ahmad et al, vitamin E and C levels in diabetic individuals were lower than in the control group (p<0.001), with no significant difference in vitamin A levels between the two groups. 9 In the study of Onyesom et al, vitamin A (14.38 µg/l), vitamin C (0.66 mg/dl) and vitamin E (0.51 mg/dl) levels in recently diagnosed diabetic patients were lower than in the healthy control group (p<0.05); 17 diabetic patients had about a 30% vitamin A deficit, 36% vitamin C deficit and 12% vitamin E deficit. In the study of Sawant et al, the vitamin E level was 1153 in diabetic patients without neurologic complications and 730 in diabetic patients with neuropathy, a statistically significant difference (p<0.05). 22 In the study of Salonen et al, after follow-up of patients and measurement of their vitamin E, it was observed that vitamin E deficiency can increase the risk of diabetes up to 3.9-fold.
12 A decrease in serum vitamin E concentration of about 1 µmol/l can increase the risk of diabetes by about 22%. In the study of Dogon et al, vitamin E and C levels in diabetic patients were significantly lower than in the control group. 25 In the study of Dosoo et al, antioxidant status in diabetic patients receiving oral glucose-lowering tablets (NIDDM) was significantly lower than in healthy individuals (p<0.001), 26 and antioxidant levels decreased significantly as fasting glucose increased; good control of fasting glucose resulted in a decrease in free radicals and an increase in antioxidant levels. In the study of Odum et al, 27 mean total antioxidant capacity (1180 µmol/l), vitamin C (26.59 µmol/l) and vitamin E (1533 µmol/l) in diabetic patients were significantly lower than in healthy individuals (p=0.0001 in all cases). In the study of Lapolla et al, vitamin E level and total antioxidant status in diabetic patients with peripheral arterial disease (ABI<0.9) were lower than in diabetic patients without peripheral arterial disease. 28 In most studies, including those of Baliarsingh et al, de Oliveira et al, Paolisso et al and Park et al, vitamin E was lower in diabetic patients than in controls, and this deficiency was considered a risk factor (OR>1). [29][30][31][32] In the study of Ble-Castillo et al, by contrast, this vitamin was higher in diabetic patients than in healthy individuals, and these higher levels were considered a protective factor (OR<1). 33 In other studies, including those of Reaven et al and Economides et al, vitamin E levels did not differ significantly between the two groups, and the diabetic individuals were almost the same as non-diabetic individuals. 34,35 A review of the related literature shows that most studies compared vitamin E levels in diabetic patients and healthy individuals, and in most of them vitamin E was significantly lower in diabetic patients. In the present study, however, vitamin E levels were compared between controlled and uncontrolled diabetic patients, and no significant difference was found. Many studies have also examined the effect of vitamin E on diabetic patients. For example, in the study of Khabbaz et al, vitamin E administration decreased FBS (fasting blood sugar) and triglycerides, but the decrease was not significant; 36 total cholesterol and systolic and diastolic blood pressure did not change significantly. In the study of Vieira et al, treatment with alpha-tocopherol decreased systolic hypertension and LDL and increased HDL, but did not significantly change triglycerides or total cholesterol. 37 In the study of Gazis et al, vitamin E administration did not cause vasodilation in diabetic patients. 23 In the study of Paolisso et al, long-term vitamin E administration lowered plasma glucose (p>0.05), triglycerides (p<0.02), free fatty acids (p<0.05), total cholesterol (p<0.05), LDL (p<0.04), HbA1c (p<0.05) and apolipoprotein B (p<0.05), but did not lower insulin resistance.
31 In the study of Khabbaz et al, vitamin E administration at 800 units per day for 3 months did not improve blood sugar, blood lipids, glycosylated hemoglobin, fasting insulin or blood pressure in type 2 diabetic patients. 36 In the study of Lonn et al, daily administration of 400 units of vitamin E had no effect on reducing the probability of cardiovascular disease in diabetic patients. 38 In the study of Boshtam et al, daily administration of 200 units of vitamin E to diabetic patients produced a non-significant decrease in FBS levels and insulin resistance, 39 and also failed to change triglyceride and total cholesterol levels. In the systematic review of Susomboon et al, 16 it was found across 9 studies that vitamin E supplements did not decrease blood sugar levels in diabetic patients; it was also observed that vitamin E decreased blood sugar in patients with uncontrolled blood sugar (HbA1c>8%) and that this vitamin was lower in patients with uncontrolled blood sugar (HbA1c>8%). The results of these studies indicate that vitamin E administration does not have a significant effect on reducing the complications of diabetes. Considering the results of the present study, the non-significant difference between the controlled and uncontrolled diabetic groups, and similar studies carried out in other countries, it is suggested that antioxidant vitamins such as vitamin E should not be prescribed indiscriminately and without indication, because of the cost and patients' poor compliance with taking many drugs; prescription of these drugs should be limited to specific indications such as high triglycerides, fatty liver and so on, although this requires further research. In this study a group of healthy individuals was not studied as a control group, so, unlike most previous studies that had a control group, we could not make a direct comparison of vitamin E levels; vitamin C and other antioxidants were also not measured. CONCLUSION The results of the present study indicate that there is no significant difference in vitamin E levels between controlled and uncontrolled diabetic individuals.
2019-03-17T13:11:52.610Z
2018-02-24T00:00:00.000
{ "year": 2018, "sha1": "708041e7f4edcb6a71d50e764e5783c862a57b95", "oa_license": null, "oa_url": "https://www.ijcmph.com/index.php/ijcmph/article/download/2491/1840", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "366c8f40d6ff3e4c4a04ac5900758d66926e6075", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
2925738
pes2o/s2orc
v3-fos-license
Pulsational pair instability as an explanation for the most luminous supernovae The extremely luminous supernova SN 2006gy challenges the traditional view that the collapse of a stellar core is the only mechanism by which a massive star makes a supernova, because it seems too luminous by more than a factor of ten. Here we report that the brightest supernovae in the modern Universe arise from collisions between shells of matter ejected by massive stars that undergo an interior instability arising from the production of electron-positron pairs. This "pair instability" leads to explosive burning that is insufficient to unbind the star, but ejects many solar masses of the envelope. After the first explosion, the remaining core contracts and searches for a stable burning state. When the next explosion occurs, several solar masses of material are again ejected, which collide with the earlier ejecta. This collision can radiate 10^50 erg of light, about a factor of ten more than an ordinary supernova. Our model is in good agreement with the observed light curve for SN 2006gy and also shows that some massive stars can produce more than one supernova-like outburst. The time required for this contraction is sensitive to the strength of the pulse and how close the star came to becoming unbound. If the temperature after the first pulse is less than about 9 × 10^8 K, neutrino losses are inefficient and it may be decades before the star starts burning again. If the core is much hotter, it may take only days. If the remaining helium core is still over 40 solar masses, with the exact threshold depending upon the entropy lost to neutrinos during the interpulse period, the star encounters the instability again and ejects another several solar masses. Later ejections have lower mass, because the envelope was expelled in the first pulse, but have higher energy. They quickly catch up to the first shell, which by this time is at 10^15-10^16 cm, where the collision dissipates most of their relative kinetic energy as radiation (Fig. 1). Because of the large radius of the collision, adiabatic losses from expansion are roughly two orders of magnitude less than in a common type-II supernova. That is, a collision involving only 10^50 erg of kinetic energy can radiate as much as 10^50 erg of light, more than ten times an ordinary supernova. To illustrate these general ideas, consider the evolution of a star of 110 solar masses and solar composition. Its evolution is calculated using the Kepler code 3,14 with mass loss included at a fraction of the standard value for solar-metallicity stars 15,16: 50% on the main sequence and 10% as a helium-burning red giant. Our 110-solar-mass main-sequence star then ends its life with a total mass of 74.6 solar masses and a helium core of 49.9 solar masses, well within the pulsational domain. The pre-supernova star is a red supergiant with radius 1.1 × 10^14 cm and luminosity 9.2 × 10^39 erg s^-1. Its outer 24 solar masses of low-density envelope are bound by only 9.0 × 10^48 erg. After burning helium and carbon, when the temperature exceeds 10^9 K, this star first encounters the pair instability. The helium core collapses rapidly to a maximum central temperature of 3.04 × 10^9 K and density 1.50 × 10^6 g cm^-3, far hotter than the usual 2.0 × 10^9 K at which oxygen burns stably in a massive star. So the star violently explodes, burning 1.49 solar masses of oxygen and 1.55 solar masses of carbon and releasing 1.4 × 10^51 erg (Supplementary Figs 1 and 2).
Most of this energy goes into expanding the star. About 10%, however, goes into driving off 24.5 solar masses of envelope and core (mostly helium and some hydrogen) with a terminal speed of 100 to 1,000 km s^-1 (Fig. 1). This envelope ejection gives the first supernova-like display, with a luminosity of ~4 × 10^41 erg s^-1 for 200 days (Fig. 2 and Supplementary Fig. 5). What is left behind is a 50.7-solar-mass remnant, slightly larger than the original helium core mass, that once again radiates neutrinos, contracts and grows hotter. Then, 6.8 years later, it encounters the pair instability a second time. This time the pulse is stronger, and 6.0 × 10^50 erg is shared by a smaller ejected mass of 5.1 solar masses. The collision of this high-velocity shell with the larger mass ejected earlier (Fig. 1) produces a brilliant light curve 17, calculated here using the radiation-hydrodynamics code Stella 18 (Fig. 3). Stella uses multiple energy groups to compute the coupling of radiation transfer to the gas dynamics and produces multi-colour and bolometric light curves. Previously, Stella was used successfully to resolve a very thin shell and radiative shock in the case of SN 1994W and gave multi-colour fluxes in good agreement with observations 19. The good agreement with SN 2006gy 1 (Fig. 3 and Supplementary Figs 7-9) is suggestive of a light curve generated by collisions between solar masses of material, as refs 1 and 20 have also proposed. Other models for SN 2006gy based upon traditional pair-instability supernovae can be very bright 12,21, but require a large mass of 56Ni and are difficult to reconcile with the narrow width of the observed light curve 22,23 and the narrow spectral features due to hydrogen. They also require exceptionally massive progenitor stars. The photospheric structure of these collisionally dominated supernovae is novel. Early on, the collision occurs at such high density and small radius that the shock is preceded by an optically thick photosphere (Supplementary Figs 10 and 11). Matter inside this photosphere is ionized and optically thick. The emission is nearly blackbody and no X-ray or radio emission is produced. The mass in the second eruption quickly becomes concentrated in very thin shells. Later, the large velocity shear bounding these shells keeps the opacity in Doppler-broadened lines from becoming too small, and the emission continues to be predominantly in near-optical bands. The large column depth probably keeps any appreciable X-rays that are produced from escaping until after the optical display is over. This is consistent with the low level of X-rays detected from the supernova 20. Nine years later, the 110-solar-mass model finishes a final phase of contraction and gently starts silicon burning at its centre, making an iron core that collapses (Fig. 2 and Supplementary Fig. 4). A 95-solar-mass star evolved similarly, but with mild rotation (equatorial speed 100 km s^-1 on the main sequence) and magnetic torques 24, produces a similar helium-core mass, but has sufficient angular momentum in its iron core to make a neutron star with a period of 2 ms. This is sufficiently rapid rotation to form a magnetar 25 or, within uncertainties in the angular momentum transport model, a collapsar 26. Thus the final death of the star might generate a gamma-ray burst 27,28, but one that is embedded in many solar masses of circumstellar material. The optical light curve from such an event could also be very bright and might be an alternative explanation for SN 2006gy.
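The timing and energetics of the shell collision quoted above can be checked with a back-of-the-envelope calculation. The sketch below (Python) uses only numbers stated in the text, except for the assumed 200 km s^-1 speed of the first shell, which is taken from within the quoted 100-1,000 km s^-1 range; it gives a collision radius of a few times 10^15 cm and a mean radiated luminosity of order 10^42-10^43 erg s^-1 if ~10^50 erg emerges over ~200 days.

```python
import math

MSUN = 1.989e33          # g
YEAR = 3.156e7           # s
DAY = 8.64e4             # s

# Quantities quoted in the text (first-shell speed is an assumed value
# inside the quoted 100-1000 km/s range).
v_shell1 = 200e5         # cm/s
t_interpulse = 6.8 * YEAR
ke_pulse2 = 6.0e50       # erg, kinetic energy of the second ejection
m_pulse2 = 5.1 * MSUN    # g, mass of the second ejection
e_radiated = 1.0e50      # erg of light from the collision
t_display = 200 * DAY    # rough duration of the bright display

# Characteristic speed of the second ejection from its kinetic energy.
v_shell2 = math.sqrt(2.0 * ke_pulse2 / m_pulse2)

# Radius of the first shell when the second pulse occurs, and the time and
# radius at which the faster shell overtakes it (constant speeds assumed).
r1 = v_shell1 * t_interpulse
t_catch = r1 / (v_shell2 - v_shell1)
r_collision = r1 + v_shell1 * t_catch

print(f"second shell speed ~ {v_shell2 / 1e5:.0f} km/s")
print(f"first shell radius ~ {r1:.1e} cm at pulse 2")
print(f"collision at       ~ {r_collision:.1e} cm, {t_catch / YEAR:.1f} yr later")
print(f"mean luminosity    ~ {e_radiated / t_display:.1e} erg/s")
```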
Indeed, as Supplementary Table 1 shows, the pulsational pair-instability mechanism can energize a variety of explosive phenomena with characteristic timescales ranging from days to centuries. We have focused here on the brightest of these events, but if the energy of the ejected shells is low and if the core of the star eventually collapses to a slowly rotating black hole, we might observe "supernova impostors" 29 and nothing else. If the star lost its envelope, but retained a supercritical helium core mass, it might form a repeating type Ib supernova 30.
23. Umeda, H. & Nomoto, K. How much 56Ni can be produced in core-collapse supernovae? Evolution and explosion of 30-100 solar mass stars. Astrophys. J. (submitted); preprint at http://arxiv.org/abs/0707.2598 (2007).
Table 1 note (the tabulated rows end with helium cores above 137 solar masses). Column 1 gives the total mass of the (non-rotating) star when it is born. If the outer layers of hydrogen and helium are not entirely lost along the way, the second column gives the mass of the core of helium and heavier elements inside the star when it dies. Columns 3 and 4 then describe how the star dies and what sort of remnant it leaves behind. Without rotation, helium cores over 137 solar masses simply disappear into a black hole. With rotation, their evolution is uncertain. The pair instability occurs after carbon burning, when the centre of the star encounters thermodynamic conditions where a large fraction of the internal energy is stored in the rest masses of electron-positron pairs. The loss of pressure renders the star briefly unstable against collapse, nuclear burning and explosion.
Figure 1 Velocity structure following the second eruption of a 110-solar-mass pulsational pair-instability supernova. The velocity and enclosed mass are plotted against the log of the radius. The velocity discontinuity at 10^15 cm shows where fast-moving ejecta from the second outburst are starting to impact the slower-moving material ejected in the first pulse. Hydrogen-rich and helium-rich material immediately above this shock is moving at less than 200 km s^-1 and will give rise to narrow lines in the spectrum of the emission, as was seen in SN 2006gy 1. Most of the kinetic energy of the second ejection will be dissipated within 10^16 cm. This particular configuration resulted from a star initially of 110 solar masses that had 74.6 solar masses left when it began exploding (see text).
Figure 2 (see also Supplementary Fig. 5). Shock breakout produces the brief bright ultraviolet transient at the onset of this first light curve, while the plateau is due to hydrogen recombination. Then, 6.9 years later, a second eruption produces a brilliant event as the fast-moving ejecta collide with the debris of the first supernova (Figs 1 and 3). And 9 years after that, the star forms a 2.2-solar-mass iron core that collapses to a rapidly rotating neutron star or black hole. A third bright event, possibly a gamma-ray burst, might then occur.
Figure 3 Absolute R-band magnitudes resulting from the strong second explosion of the 110-solar-mass model. The time axis has been adjusted so as to give the best agreement with observations of SN 2006gy 1, plotted as the red data points, and the model results have been smoothed using a numerical averaging over 30-day intervals. Multidimensional calculations of similar models 31 suggest that instabilities in the thin dense shell, where the radiation originates, will result in the formation of a mixed layer with relative thickness ΔR/R ≈ 0.1-0.15.
The predictions of our one-dimensional model (where the radius is proportional to the time) should thus be blurred by Δt ≈ 30 days, for a total light-curve width of about 200 days. An R-band extinction of 1.68 magnitudes is assumed for the supernova 1. Two curves are shown, one for the nominal model discussed in the text, and a second where the velocity of all the ejecta (pulses 1 and 2) has been multiplied by two (hence an artificial increase in the explosion energy from 7.2 × 10^50 erg to 2.9 × 10^51 erg). The large variations apparent in the fainter model are absent in the brighter one because the photosphere at peak light in the more energetic model has not receded to near the shock. The actual explosion energy and mass ejected are sensitive to the initial mass of the star, its uncertain mass loss (that is, the mass of the remaining hydrogen envelope when the star dies), and the details of the cooling between pulses. Other light curves, without smoothing and with variable explosion energy and density, are given in Supplementary Figs 7-9.
Supplementary Figure S1. Initial composition of the 110-solar-mass model as it encounters the electron-positron pair instability for the first time. Carbon has already been burned away in the inner few solar masses of the star, and the central temperature and density are 1.2 × 10^9 K and 9.1 × 10^4 g cm^-3, respectively. Most of the "helium core" is in fact composed of oxygen. The net binding energy of the star at this point (essentially that of the helium-oxygen core) is 4.03 × 10^51 erg. The total mass of the star is 74.56 solar masses and the helium core is 49.86 solar masses. Tight hydrostatic equilibrium prevails throughout the core, though it is starting to contract rapidly.
Supplementary Figure S5. Light curve resulting from the first mass ejection. The initial spike is from shock-wave breakout and is not accurately calculated owing to coarse surface zoning and the use of a single temperature to describe the radiation and the matter. The roughly 200-day plateau occurs as the hydrogen and helium in the ejected envelope recombine, releasing the energy deposited there by the shock. No radioactivity is ejected in either of the pulses, and the light curve has no tail. Zero time here corresponds to shock breakout at the surface.
Supplementary figure caption (fragment): ...is brightest in the red and visual bands. Unlike Fig. 3 in the main text, the light curve has not been smoothed, but shows the rapidly varying temporal structure resulting from an artificial 1D simulation of thin shells. In two or three dimensions, the thin shell is expected to break up into multiple structures at different radii. An oblateness in either mass ejection would also smooth out much of this time structure. In this and subsequent plots, an extinction in the R band of A_R = 1.68 magnitudes has been assumed in plotting the data for SN 2006gy. Zero time is the time of the second pulse plus 40 days.
Supplementary Figure S9. Colour magnitudes of the 110-solar-mass model in which the density of both pulses was multiplied by two. This doubles both the amount of mass ejected in each pulse and the total kinetic energy. Some combination of increased energy (Fig. S8) and density (this figure) could probably be found that would fit the SN 2006gy observations.
Supplementary figure caption: log density (g cm^-3), velocity (1000 km s^-1), log temperature (K), log luminosity (10^40 erg s^-1) and optical depth (τ_R) in the emitting region for the 110-solar-mass model.
Because of the higher density and smaller radius at this early time, the temperature in the vicinity of the shock is higher, keeping the matter ionized for some distance ahead. Consequently, the photosphere is well outside the shock and the emission is black body. No X-rays are being created, because the temperature is too low, and none would escape if they were, because of the large column depth.
Supplementary Table 1. The table on the following page summarizes the outburst history for non-rotating helium stars (initially 98.5% helium and 1.5% nitrogen) of various masses evolved to the point of final central collapse. The main-sequence masses corresponding to these cores are approximately 2.2 times the helium core mass, i.e., 105 to 130 solar masses. Within this range, stars of larger mass encounter an instability that is increasingly violent, and a smaller number of pulses occurs before the star dies. In each case, the first pulse is the weakest but, except in the lightest case considered, more than adequate to eject any residual hydrogen envelope on the first try. The kinetic energy of the remaining pulses is larger, though never much over 10^51 erg. This energy is no longer shared with any envelope and thus corresponds to a higher velocity. Each pulse will thus produce a supernova-like display. The supernovae may be exceptionally brilliant, as in the case considered here, or quite faint for stars on the lower end of the unstable mass range. The table gives, for each pulse, the kinetic energy (KE) and mass ejected (ΔM). Following each pulse and a brief period of oscillation, the core has central temperature T_c and density ρ_c, and radiates neutrinos for the interval given before encountering the next instability. Each interval is given in the form of a number and, in parentheses, the power of ten by which that number is to be multiplied. At the end of the last pulse listed, the central part of the core evolves to an iron core that collapses to a neutron star or black hole. Helium cores of increasing mass encounter an instability that is increasingly violent on the first encounter. For helium cores above 65 solar masses the entire star is disrupted in a single flash. For stars with hydrogenic envelopes the evolution is altered, since the helium core can grow by hydrogen shell burning even as it burns helium in the centre. The 110-solar-mass model discussed in the text is similar to the 51-solar-mass helium-core model above, but experienced only two strong outbursts before dying.
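Purely as a compact restatement of the mass thresholds quoted in the text and in Table 1, the following sketch (Python) maps a non-rotating helium-core mass to its expected fate; as emphasized above, the boundaries are soft and depend on details such as the entropy lost to neutrinos between pulses.

```python
def helium_core_fate(m_he_msun: float) -> str:
    """Approximate outcome versus helium-core mass (solar masses) for a
    non-rotating star, using the thresholds quoted in the text and Table 1."""
    if m_he_msun < 40:
        return "no pair instability; ordinary evolution to iron-core collapse"
    if m_he_msun < 65:
        return "pulsational pair instability: repeated shell ejections, then iron-core collapse"
    if m_he_msun <= 137:
        return "single-pulse pair-instability supernova: the entire star is disrupted"
    return "collapse to a black hole (evolution uncertain if rotating)"

for m in (35, 50, 90, 150):
    print(f"{m:>4} Msun helium core -> {helium_core_fate(m)}")
```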
2016-04-15T09:12:14.267Z
2007-10-17T00:00:00.000
{ "year": 2007, "sha1": "6ae1c4f06d1cc0f5366cc8623dfe02a93248f731", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0710.3314", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "919fb5e8830dc36b19b6bb1fe453fc6117c83fb2", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
15411572
pes2o/s2orc
v3-fos-license
Neutrino analysis of the September 2010 Crab Nebula flare and time-integrated constraints on neutrino emission from the Crab using IceCube

Abstract We present the results of a search for high-energy muon neutrinos with the IceCube detector in coincidence with the Crab Nebula flare reported in September 2010 by various experiments. Because of the unusual flaring state of this otherwise steady source, we performed a prompt analysis of the 79-string configuration data to search for neutrinos that might be emitted along with the observed γ-rays. We performed two different and complementary selections of neutrino events in the time window of 10 days around the flare. One event selection is optimized for discovery of an E^-2 neutrino spectrum, typical of first-order Fermi acceleration. A similar event selection has also been applied to the 40-string data to derive time-integrated limits on the neutrino emission from the Crab [35]. The other event selection was optimized for discovery of neutrino spectra with softer spectral indices and TeV energy cut-offs, as observed for various galactic sources in γ-rays. The 90% CL best upper limits on the Crab flux during the 10-day flare are 4.73 × 10^-11 cm^-2 s^-1 TeV^-1 for an E^-2 neutrino spectrum and 2.50 × 10^-10 cm^-2 s^-1 TeV^-1 for the softer E^-2.7 neutrino spectrum indicated by Fermi measurements during the flare. IceCube has also set a time-integrated limit on the neutrino emission of the Crab using 375.5 days of livetime of the 40-string configuration data. This limit is compared to existing models of neutrino production from the Crab, and its impact on astrophysical parameters is discussed. The most optimistic predictions of some models are already rejected by the IceCube neutrino telescope with more than 90% CL.
Introduction The Crab supernova remnant, originating from a stellar explosion at a distance of 2 kpc recorded in 1054 AD, consists of a central pulsar, a synchrotron nebula, and a surrounding cloud of expanding thermal ejecta [1]. Its bright and steady emission has made it a standard candle for telescope calibration. However, the stability of the photon emission in the X-ray and γ-ray regions has recently been questioned by a number of satellite experiments. As a matter of fact, a 7% decline of the Crab flux in the 3-100 keV region, larger at higher energies, has been observed in the period between 2008 and 2010 by the Fermi Gamma-ray Burst Monitor and confirmed by Swift/BAT, RXTE/PCA, and INTEGRAL (IBIS) [2]. The pulsed emission from RXTE/PCA observations is consistent with the observed pulsar spin-down, suggesting that the decline is due to changes in the nebula and not in the pulsar. The source of energy that powers the Crab is the spin-down luminosity of the pulsar. The measured spin-down luminosity of the pulsar is ~5 × 10^38 erg s^-1 and its rotational period is 33 ms. While a small fraction of this energy goes into the pulsed emission, most of it is carried by a highly magnetized wind of relativistic plasma, the composition of which is not known. Both pure e± plasma models and a mixture of e± and protons or ions have been proposed [1,3,4,6,10]. The wind terminates in a standing shock and transfers some of the energy to accelerating particles. A part of this energy is converted into synchrotron emission from radio to MeV γ-rays by a population of high-energy electrons radiating in the nebular magnetic field. The observations of synchrotron emission from the Crab up to MeV energies make the Crab an undisputed galactic accelerator able to inject electrons up to energies of ~10^15 eV. These high-energy electrons inevitably interact with the ambient photon fields through inverse Compton scattering, resulting in the production of high-energy γ-rays observable in the TeV regime [13,14,15]. The synchrotron emission from the Crab has an integrated luminosity of ~1.3 × 10^38 erg s^-1; that is, at least ~26% of the spin-down luminosity of the pulsar is involved in the acceleration of electrons in the energy range 10^11-10^15 eV [1]. On the other hand, the presence of hadrons in the pulsar wind and the amount of energy transported by them remain among the unresolved and interesting questions about the Crab Nebula and plerions in general. Protons and ions do not lose their energy as efficiently as electrons, and hence it is more difficult to observe the products of their interactions. The dominant processes, discussed below, are proton-proton and proton-γ interactions, and both processes generate γ-rays and neutrinos through meson decays. Hence, neutrinos constitute a unique signature of hadron acceleration, while hadronic γ-ray production has to be disentangled from inverse Compton emission. Hadronic models of the Crab emission assume that the pulsar wind is composed of a mixture of electrons and ions. These models predict that a significant part of the rotational energy lost by the pulsar is transferred through the shock radius to relativistic nuclei in the pulsar wind. Relativistic nuclei injected into the nebula can interact with the nebular matter and produce cosmic rays and neutrinos via pion decay. Neutrino production by protons and nuclei interacting in the pulsar wind of the Crab has been discussed in Refs. [3,4].
According to these models, the nuclei can generate Alfvén waves just above the pulsar wind shock. These Alfvén waves resonantly scatter and accelerate the positrons and electrons that create the synchrotron emission. In the model described in Ref. [6], neutrinos are produced by heavy nuclei, accelerated by the rotating neutron star, that photo-disintegrate in collisions with soft photons. These models predict between 1 and 5 events per year in a cubic-kilometre detector such as IceCube when accounting for neutrino oscillations. Inelastic nuclear collisions are considered in Ref. [3]. In that paper the predicted rates depend on the Lorentz factor, Γ, of the nuclei injected by the pulsar and on the effective target density. The thermal matter distribution in the Crab is far from uniform and forms filaments. For relativistic protons the effective target density is also affected by the structure of the magnetic field in and around these filaments. The authors of Ref. [3] provide several expected neutrino fluxes from the Crab Nebula as a function of energy, for different assumptions on these two parameters. For the highest values of the effective target density, IceCube begins to have the sensitivity to probe the highest possible values, around Γ ≈ 10^7, while the favoured values of the upstream Lorentz factor of the wind are Γ ~ 10^6 [5]. Acceleration of positive ions near the surface of a young rotating neutron star (younger than about 10^5 yr) has also been investigated in Ref. [7]. This model describes how positive ions can be accelerated to ~1 PeV in rapidly rotating pulsars with typical magnetic fields (B ~ 10^12 G) by a potential drop across the magnetic field lines of the pulsar. Assuming that the star's magnetic moment µ and the angular velocity Ω satisfy the relation µ · Ω < 0, protons are accelerated away from the stellar surface. Beamed neutrinos (in coincidence with the radio beam) are produced by such high-energy protons interacting with the star's radiation field when the ∆ production threshold is surpassed. Observation of these neutrinos could validate the existence of a hadronic component and of a strong magnetic field near the stellar surface that accelerates the charged particles. The predictions in Ref. [8] based on this model amount to ~45 neutrino events per year from the Crab in a cubic-kilometre detector in the most optimistic scenario, where the fraction of charge depletion is assumed to be f_d ~ 1/2. In this paper we will show that IceCube data severely constrain these optimistic predictions of the model. In Ref. [12] a mean prediction of 1.2 neutrino events per year for E_ν > 1 TeV was calculated for an underwater cubic-kilometre detector. This prediction is based on the γ-ray spectrum measured by H.E.S.S. [13], assuming that all the γ-rays observed by H.E.S.S. up to 40 TeV are produced by pion decay and that the absorption of γ-rays is negligible. A similar calculation connecting photon and neutrino fluxes was done in Ref. [9], predicting about 5 events from the Crab when accounting for neutrino oscillations. For a summary of some of the models of neutrino spectra the reader is referred to [10]. From Sep. 19 to 22, 2010 the AGILE satellite [16,17] reported an enhanced γ-ray emission above 100 MeV from the Crab Nebula. The flare, however, was not detected in X-rays by INTEGRAL [20] observations between Sep. 12 and 19, partially overlapping with the AGILE observations. It was also not confirmed by Swift/BAT [21] in the 15-150 keV range, nor by RXTE [22] in a dedicated observation of the Crab on Sep.
24. The observation was later confirmed by the Large Area Telescope on board the Fermi Gamma-Ray Space Telescope, which detected a flare of γ-rays (E_γ > 100 MeV) with a duration of ~4 days between Sep. 19 and 22 in the direction of the Crab [24]. The observed energy spectrum during the flare interval was consistent with a power law with a spectral index of −2.7 ± 0.2. The flux increase was a factor of 5.5 ± 0.8 above the average flux from the Crab. Fermi also detected another flare of 16 days in Feb. 2009, corresponding to a flux increase of a factor of 3.8 ± 0.5 but with a much softer spectral index (−4.3 ± 0.3). The ARGO-YBJ collaboration also issued an ATel in Sep. 2010 on the observation of an enhancement of the TeV emission for the same period of time but over a wider interval of 10 days. The enhanced TeV emission corresponded to a flux about 3-4 times higher than the usual Crab flux at TeV energies [23]. However, this observation was confirmed neither by MAGIC [25] nor by VERITAS [26], imaging Cherenkov telescopes operating in a similar energy range to ARGO-YBJ. The spectral and timing properties of the flares indicate that the γ-rays are emitted via synchrotron radiation from PeV electrons in a region smaller than 1.4 × 10^-2 pc. This dimension is comparable to the jet knots observed close to the termination shock of the Crab Nebula [19]. Even though the Crab has always been considered a source of synchrotron emission, the flare represents a challenge to diffusive shock acceleration theory [24]. Nonetheless, explanations of the high variability due to electromagnetic phenomena have been proposed in Ref. [11], where the emission comes from a part of the pulsar wind shock. The unusual flaring state of this otherwise steady source, the intensity of the flare, and the experimental observations in γ-rays motivated this search for neutrinos in IceCube in coincidence with the Crab flare of Sep. 2010. The IceCube collaboration started a prompt analysis of the then-running 79-string configuration. The time window selected for this analysis was the 10-day interval from September 17 to September 27 reported by ARGO-YBJ, which contains the Fermi flare window. An unbinned maximum likelihood (LLH) method described in Ref. [27] has been applied to search for an excess of neutrinos in coincidence with the enhanced γ-ray emission from the Crab. The non-observation of neutrinos would reinforce purely electromagnetic emission scenarios and determine the level at which hadronic phenomena superimposed on an electromagnetic scenario can be probed. The IceCube Neutrino Observatory is a neutrino telescope installed in the deep ice at the geographic South Pole. The final configuration comprises 5,160 photomultipliers (PMTs) [29] along 86 strings instrumented between 1.5 and 2.5 km depth in the ice. Its design is optimized for the detection of high-energy astrophysical neutrinos with energies above ~100 GeV. The observation of cosmic neutrinos would be a direct proof of hadronic particle acceleration and would reveal the origins of cosmic rays (CR) and the possible connection to shock acceleration in supernova remnants (SNR), active galactic nuclei (AGN) or gamma-ray bursts (GRBs). The IceCube detector uses the Antarctic ice as the detection volume, where muon neutrino interactions produce muons that induce Cherenkov light. The light propagates through the transparent medium and can be collected by PMTs housed inside Digital Optical Modules (DOMs).
The DOMs are spherical, pressure-resistant glass vessels, each containing a 25 cm diameter Hamamatsu photomultiplier and its associated electronics. Eight densely instrumented strings equipped with higher-quantum-efficiency DOMs form, together with 12 adjacent IceCube strings, the DeepCore array, which increases the sensitivity to low-energy neutrinos down to about 10 GeV. Detector construction finished during the austral summer of 2010-11. This paper describes in Sec. 2 the data selection, the comparison to simulation, and the detector effective area and angular resolution for this search; in Sec. 3 we summarize the analysis method used; in Sec. 4 the results of the flare search are presented. Given the null result, upper limits are provided. In Sec. 5 the time-integrated upper limits based on 1 year of data of the 40-string configuration are presented, to summarize the impact of IceCube's most sensitive limit on existing neutrino production models for the Crab. Conclusions are given in Sec. 6. Data Selection and Comparison to Monte Carlo The detection principle of IceCube is based on the charge and time measurement of the Cherenkov photons induced by relativistic charged particles passing through the ice sheet. The PMT signal is digitized with dedicated electronics included in the DOMs [31]. A DOM is triggered when the PMT voltage crosses a discriminator threshold set at a voltage corresponding to about 1/4 of a photoelectron. Various triggers are used in IceCube. The results shown here are based on a simple multiplicity trigger requiring that the number of triggered DOMs in a rolling time window of 5 µs is at least 8 (SMT8). The duration of the trigger is the amount of time that this counter stays at or above 8 as the time window keeps moving. Once the trigger condition is met, all local-coincidence hits are recorded in a readout window of ±10 µs for the 40-string run and of +6/−4 µs (to reduce the noise rate) in the 79-string run. IceCube triggers primarily on down-going muons, at a rate of about 1.8 kHz in the 79-string configuration. The variation in the trigger rate determined by atmospheric muons is about ±10% due to seasonal changes [32]. Seasonal variations in atmospheric neutrino rates are expected to be at most ±4% for neutrinos originating near the polar regions. Near the equator, atmospheric variations are much smaller and the variation in the number of events is expected to be less than ±0.5% [33]. For searches for neutrino point sources in the northern sky, IceCube can use the Earth as a shield to reduce the background of atmospheric muons and detect up-going muons induced by neutrinos. In the northern sky these searches are sensitive to neutrinos in the TeV-PeV region. In order to reconstruct muon tracks, a LLH-based reconstruction is performed at the South Pole (L1 filter), providing a first-order background rejection of poorly reconstructed events and a selection of high-energy muons for the southern sky. The data sent through the satellite to the North undergo further processing that includes a broader range of more CPU-consuming reconstructions. This offline processing also provides useful variables for background rejection and measurements of the energy and of the angular uncertainty, and selects about 35 Hz of the SMT8 data. However, the offline processing requires a fair amount of time to be finalized and is not suitable for an expedited analysis. For the analysis of the Crab flare we therefore used a dedicated selection for target-of-opportunity programs [36].
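As an illustration of the SMT8 condition described above, the toy sketch below (Python; hypothetical hit times, and without the local-coincidence logic of the real trigger) counts DOM hits in a sliding 5 µs window and reports when at least eight are found.

```python
def smt8_trigger(hit_times_ns, multiplicity=8, window_ns=5000.0):
    """Return the window start times at which a simple multiplicity trigger
    fires: at least `multiplicity` hits within a sliding `window_ns` window."""
    hits = sorted(hit_times_ns)
    triggers = []
    j = 0
    for i in range(len(hits)):
        # advance the left edge of the window so it spans at most window_ns
        while hits[i] - hits[j] > window_ns:
            j += 1
        if i - j + 1 >= multiplicity:
            triggers.append(hits[j])
    return triggers

# hypothetical hit times in nanoseconds
example = [0, 300, 900, 1500, 2100, 2600, 3200, 4100, 20000]
print(smt8_trigger(example))  # -> [0]; the first eight hits lie within one 5 µs window
```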
The online event selection and reconstruction used for these target-of-opportunity programs is called the online Level 2 filter and selects about 4 Hz of data. It provides a reduced data rate (compared to the standard online data) because of stricter cuts than in the offline filter. The loss of sensitivity of this data stream is marginal for E^-2 neutrino spectra. The online L2 filter performs an 8-fold iterative single-photoelectron (SPE) LLH fit for events with fewer than 300 triggered DOMs and a 4-fold iterative SPE fit otherwise. These SPE fits are seeded by a track obtained using a single-iteration LLH fit [34]. While the online Level 2 selects good-quality tracks and high-energy muons from the northern sky, it is dominated by the background of down-going atmospheric muons, and therefore further cuts have to be applied before performing neutrino source searches. Experimental and simulated data are processed and filtered in the same way. The data used for this search cover the period from 2010/08/10 to 2010/10/12. In this period the detector was running in a stable configuration. The total livetime for that period (accounting for deadtimes) is 60.9 days. Figure 1 shows the data rate of each run included during the selected time window as well as the South Pole atmospheric temperature. As can be seen, at this level the rate is dominated by down-going atmospheric muons, which display larger weather-dependent variations than the final up-going neutrino events. (Figure 1 caption, fragment: ... [23]. The blue dotted line indicates the temperature in the middle stratosphere at the South Pole according to [32].) We have performed two dedicated selections starting from the online L2 filter, which we describe below. Straight Cuts Data Selection This dataset is obtained by requiring a good level of reconstruction and ensuring degree-level accuracy of the tracking errors, to reject misreconstructed down-going atmospheric muons from the real up-going atmospheric neutrino sample. The variables used are determined in the offline data processing and have been used for the 40-string point-source analyses in [35] and [28]. The final cut level is achieved by applying a series of cuts on a number of variables, chosen to obtain good agreement between data and the simulation of atmospheric neutrinos, with a contamination of the order of 5% of atmospheric muons, mainly muons from two cosmic-ray showers in coincidence in the same readout window. Having these muons with different directions gives hit patterns that confuse the reconstruction, so that at times the result is a misreconstructed up-going track. The cuts (Eq. (1)) are applied to the following variables: • N_dir: the number of photons detected within −15 and +75 ns of the expected arrival time of unscattered photons from the reconstructed muon track. Scattering of photons in the ice causes a loss of directional information and delays them with respect to the unscattered expectation; • L_dir: the maximum distance in metres between direct photons projected along the best muon track solution; • σ_cr: the uncertainty on the reconstructed track direction given by the LLH-based track reconstruction, estimated by a method based on the Cramér-Rao inequality [37]; and • L_red and L'_red: the standard reduced and modified LLH values, respectively. The reduced LLH is defined as the −log10 of the LLH value of the track reconstruction divided by the number of degrees of freedom.
The number of degrees of freedom is the number of hit DOMs minus five fit parameters: two angles and three coordinates of a reference point along the track. By comparing background-rejection efficiency to signal-selection efficiency, it was found that a good variable for rejecting background for low-energy events is obtained using the number of hit DOMs minus an effective number of degrees of freedom of 2.5. An additional cut to select events in the direction of the Crab (Θ_Crab = 122° at the South Pole) has also been applied: Θ_Crab − 10° < θ_rec < Θ_Crab + 10°, where θ_rec is the reconstructed zenith angle of the muon track. No further selection in right ascension has been applied. In Tab. 1 the selected number of events and the expected numbers of atmospheric neutrinos and muons are given. The final number of events selected for the 10-day window of the flare is 354. BDT Data Selection The second dataset is obtained using a multivariate learning machine. In particular, this data selection builds on the knowledge and experience from previous analyses looking for solar Weakly Interacting Massive Particles (WIMPs) with the IceCube detector [38]. During the austral winter the Sun is below the horizon at the South Pole and its maximum declination is equal to the obliquity of the ecliptic, 23.4°. Since the Crab Nebula lies fairly close to the ecliptic plane, the strategies and cuts that are optimized for this specific direction can be applied to the Crab direction. Starting with the online L2 filtered data selection, as described above, a number of additional cuts were applied. The selected events fulfil criteria for horizontal tracks passing through the detector, to further reduce the vertical tracks associated with background events. Additionally, the cuts were chosen to reduce the tails of the background distributions in the signal region: z_travel > −10 m; σ_COGz < 170 m; σ_cr < 10°; ρ_ave < 150 m; t_accum < 3000 ns (2), where: • z_travel: the difference between the z position of the centre of gravity (COG) of the hits at the beginning of an event (first 1/4 of the hits in time) and the COG at the end of the event (last 3/4 of the hits in time); • σ_COGz: the uncertainty in metres of the z-coordinate of the COG; • ρ_ave: the mean minimal distance between the LLH track and the hit DOMs; and • t_accum: the accumulation time, defined as the time in ns until 75% of the total charge has accumulated. Boosted Decision Trees (BDTs) [39], multivariate learning machines, were used in the final analysis step to classify events as signal-like or background-like. Eleven event observables, split into two sets of 5 and 6, were obtained by choosing parameters with low correlation in background (correlation coefficient |c| < 0.5) but high discriminating power between signal and background. The selected observables include N_dir, L_dir, σ_cr and L'_red, as described for the straight-cuts data selection in Sec. 2.1, and z_travel from above. Additional observables specify the geometry, the time evolution of the hit pattern, the quality and consistency of the various track reconstructions (defined through the opening angle between the line-fit and the LLH tracks), and the number of hit strings. Training was done with simulated signal events for a soft neutrino spectrum of E^-3, which also represents well the case of an E^-2 spectrum with a TeV cut-off. A set of off-time real data, not used in the flare analysis, was used as background for training.
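The actual BDTs were trained with IceCube's own analysis software; purely to illustrate the scheme just described (two BDTs on two low-correlation sets of 5 and 6 observables, trained on simulated signal versus off-time data, with their outputs then combined), the sketch below uses scikit-learn with made-up feature arrays, and simple averaging of the two scores is assumed.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical observable matrices: the 11 columns stand in for the two
# low-correlation observable sets (e.g. N_dir, L_dir, sigma_cr, L'_red,
# z_travel, ...). Labels: 1 = simulated signal (soft E^-3 spectrum),
# 0 = off-time data used as background.
X_sig = rng.normal(loc=1.0, size=(5000, 11))
X_bkg = rng.normal(loc=0.0, size=(50000, 11))
X = np.vstack([X_sig, X_bkg])
y = np.concatenate([np.ones(len(X_sig)), np.zeros(len(X_bkg))])

set_a, set_b = slice(0, 5), slice(5, 11)
bdt_a = GradientBoostingClassifier().fit(X[:, set_a], y)
bdt_b = GradientBoostingClassifier().fit(X[:, set_b], y)

# Combine the two scores (averaged here for illustration) and cut on the result.
score = 0.5 * (bdt_a.predict_proba(X[:, set_a])[:, 1]
               + bdt_b.predict_proba(X[:, set_b])[:, 1])
selected = score > 0.9   # hypothetical working point
print(f"kept {selected.sum()} of {len(score)} events")
```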
The final sample is defined by a cut on the combined output (score) of the two BDTs. As in the case of the straight-cuts sample, an additional requirement of reconstructed zenith angles within ±10° of the Crab has been applied. The resulting event numbers are again given in Tab. 1.
Table 1: Data, atmospheric-muon and expected atmospheric-neutrino rates for the different cut progressions. The signal efficiency for an E^-2 neutrino spectrum, assuming an emission within ±10° of the Crab, relative to the online Level 2, is also shown.
Comparison Data-Monte Carlo and Detector Performance The simulation of atmospheric and signal neutrinos that is used for determining the selection efficiency and the performance of the detector, and to calculate upper limits, is based on the neutrino generator ANIS [40] and the deep-inelastic neutrino-nucleon cross sections with CTEQ5 parton distribution functions [41]. Neutrino simulation can be weighted for different fluxes, accounting for the probability of each event to occur. In this way, the same simulation sample can be used to represent atmospheric neutrino models such as the Bartol [42] and Honda [43] neutrino fluxes from pion and kaon decays (conventional flux) and a variety of models for the charm component (prompt flux) [44,45]. Muons from cosmic-ray air showers were simulated with CORSIKA [46] with the SIBYLL hadronic interaction model [47]. An October polar atmosphere, an average case over the year, is used for the CORSIKA simulation; seasonal variations of less than ±10% in event rates are therefore expected [32]. Muon propagation through the Earth and the ice is done using MMC [48]. This simulation is used to verify the level of agreement between data and MC from trigger level to Level 1 and to understand the level of contamination at the final cut level. For the optical properties of the ice we used a model obtained from calibrations using the LEDs in the DOMs, called flashers [49]. This model produces a better agreement between data and MC than the model previously used [50]. The simulation propagates the photon signal to each DOM using the light-tracking software described in [51]. The simulation of the DOMs includes their angular acceptance and electronics. The systematic errors on the simulation of the signal used to produce the upper limits have been evaluated and are presented in Sec. 6 of Ref. [35], describing the 40-string time-integrated point-source search. The main uncertainties on the limits for an E^-2 signal of muon neutrinos come from photon propagation, the absolute DOM efficiency, and uncertainties in the Earth density profile and muon energy loss, accounting for a total of 16%. Figure 2 shows the data-simulation comparison for some variables at the final cut level for the two data samples. As can be seen, the BDT sample increases the overall rate by allowing more events of low reconstruction quality (high L_red) than the straight-cuts sample. This translates into a higher neutrino effective area at low energies but also a worse angular resolution, as can be seen in Figure 3. Likelihood analysis The method used for this analysis is an unbinned likelihood method [27]. This method looks for a localized, statistically significant excess of neutrinos above the background in the direction of the Crab in coincidence with the flare. The same analysis technique has already been applied to AGN flare searches in IceCube [28].
The method uses both the reconstructed direction of the events and an energy proxy, the reconstructed visible muon energy, to discriminate a possible signal from the background during the time interval of the flare. We consider the largest time window reported by ARGO-YBJ, 10 days. The method describes the data as a two-component mixture of signal and background. For a data set with N total events, the probability density of the i-th event is given by

P_i = (n_s / N) S_i + (1 − n_s / N) B_i ,

where S_i is the density distribution for the signal hypothesis and B_i for the background. The parameter n_s is the number of signal events and is one of the free parameters of the likelihood maximization, together with the spectral index γ of the signal spectrum. The likelihood of the data is the product of all event probability densities:

L(n_s, γ) = ∏_i [ (n_s / N) S_i + (1 − n_s / N) B_i ] .

The likelihood is then maximized with respect to n_s and γ, giving the best-fit values n̂_s and γ̂. The null hypothesis is given by n_s = 0 (γ has no meaning when no signal is present). The likelihood-ratio test statistic is defined as

TS = −2 log [ L(n_s = 0) / L(n̂_s, γ̂) ] .

The background probability distribution function (pdf), B_i, is given by

B_i = B_i^space(θ_i, φ_i) · B_i^energy(E_i, θ_i) · B_i^time(t_i, θ_i)

and is computed from the distribution of the data itself. The spatial term B_i^space(θ_i, φ_i) is the event density per unit solid angle as a function of the local coordinates. The energy probability, B_i^energy(E_i, θ_i), is determined from the energy-proxy distribution of the data as a function of the cosine of the zenith angle θ_i. This energy proxy, described in detail in [35], uses the density of photons along the muon track due to the stochastic energy losses from pair production, bremsstrahlung and photonuclear interactions, which dominate over ionization losses for muons above 1 TeV. The time probability of the background, B_i^time(t_i, θ_i), can be taken to be flat for this 10 day time interval, ignoring seasonal modulations. The signal pdf, S_i, is given by

S_i = S_i^space(|x_i − x_s|, σ_i) · S_i^energy(E_i, γ_s) · S_i^time(t_i) ,

where S_i^space depends on the angular uncertainty of the event, σ_i, and the angular difference between the event position x_i and the source position x_s. The energy term S_i^energy is a function of the reconstructed energy proxy E_i and the spectral index γ_s, and is calculated from the energy distribution of simulated signal in a zenith band that contains the source. The signal time probability, S_i^time, depends on the particular signal hypothesis. In this analysis we adopt a simple cut in time between t_min and t_max, which can be expressed as

S_i^time = H(t_max − t_i) H(t_i − t_min) / (t_max − t_min) ,

where t_i is the arrival time of the event, t_max and t_min are the upper and lower bounds of the time window defining the flare, and H is the Heaviside step function. The significance of the result is evaluated by comparing the test statistic with the distribution obtained by performing the same analysis on a set of background-only scrambled data sets. The fraction of trials above the test-statistic value obtained from the data is referred to as the p-value; smaller p-values indicate that the background-only (i.e. null) hypothesis is increasingly disfavored compared to the signal-plus-background hypothesis as a description of the data. This leads to the definition of the discovery potential: the average number of signal events required to achieve a p-value less than 2.87 × 10^-7 (one-sided 5σ) in 50% of trials. Similarly, the sensitivity is defined as the average signal required to obtain, in 90% of trials, a test statistic greater than the median test statistic of background-only scrambled samples.
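For concreteness, the per-event mixture likelihood and its maximization can be sketched in a few lines of Python. This is an illustrative toy, not the IceCube code: the signal and background pdf values are passed in as precomputed arrays, and the maximization is done over n_s only (the fit over the spectral index γ is dropped for brevity).

```python
# Toy sketch of the unbinned likelihood above: the data are modeled as a
# mixture of n_s signal and N - n_s background events. S and B hold the
# signal and background pdf values of each event (assumed precomputed from
# the spatial, energy and time terms).
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(n_s, S, B):
    N = len(S)
    p = (n_s / N) * S + (1.0 - n_s / N) * B
    return -np.sum(np.log(p))

def fit_and_test_statistic(S, B):
    """Maximize the likelihood over n_s in [0, N]; return (n_s_hat, TS)."""
    N = len(S)
    res = minimize_scalar(neg_log_likelihood, bounds=(0.0, float(N)),
                          args=(S, B), method="bounded")
    n_s_hat = res.x
    ts = 2.0 * (neg_log_likelihood(0.0, S, B) - res.fun)  # = -2 log(L0 / L_hat)
    return n_s_hat, max(ts, 0.0)

# The p-value would then be estimated by recomputing TS on many data sets
# scrambled in right ascension and counting the fraction above the observed TS.
```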
Results The method described in section 3 has been applied to both data samples, the one obtained with straight cuts and the one obtained using the BDTs. In both cases the best fit resulted in n_s = 0 (i.e. an under-fluctuation). Figure 4 shows the event distribution for those events with S_i/B_i > 1, that is, only the events inside the flare window that contribute to the likelihood. As can be seen, due to its higher neutrino efficiency at energies below 10 TeV, the BDT sample contains more atmospheric neutrino events. Since the background estimation depends on the sample, the signal-to-background ratios are different for the same events in the two samples. The highest event weight comes from the straight cuts sample. Table 2 shows the upper limits set by both data samples for different neutrino spectra. Each upper limit is given both in terms of the number of signal events that can be rejected at 90% CL, n^90%_s, and as the flux limit on muon neutrinos, Φ^90%_νμ, for the 9.28 day interval. Table 2 (columns: Spectrum, Straight Cuts sample, BDT sample): n^90%_s is the limit in terms of the number of signal events at 90% confidence level and Φ^90%_νμ is the flux upper limit in units of 10^-11 cm^-2 s^-1 TeV^-1 for the 9.28 day flaring interval. The resulting neutrino luminosity limit, L^90%_νμ, is given in units of 10^35 erg s^-1 and was calculated by integrating E × dN^90%/dE over the energy range from E_min to E_max chosen to contain 90% of the signal spectrum, and multiplying by 4πd^2, where d is the distance to the Crab Nebula (d = 1850 pc). The analysis described and the results given in Tab. 2 rely on the fact that the background estimate can be obtained by scrambling the right ascension in real data (even if a signal is present in the data sample, the scrambling will dilute it over the background). This method of estimating the background gives p-values that are robust with respect to systematic uncertainties. Systematic uncertainties only affect the estimate of the signal flux from the source and the upper limits. The systematic uncertainties on the expected flux come from photon propagation in ice, the absolute DOM sensitivity (±8%), and uncertainties in the Earth density profile as well as the muon energy loss. The main uncertainty, however, is the modeling of the Antarctic ice and its effect on the photon propagation. In IceCube different ice models have been devised; the variation of the upper limits with the photon propagation model used is below 10%. Overall, the uncertainty on the upper limits is 16%. Impact of IceCube time-integrated limits on models from the Crab The main goal of the IceCube telescope is the search for cosmic neutrino signals that might explain the astrophysical phenomena that give rise to the cosmic ray emission. In the absence of a detection, constraining models can also provide insights into the nature of these phenomena. The best available neutrino flux limits for the Crab are based on the time-integrated analysis performed during the 375.5 d period corresponding to the 40-string configuration of IceCube. We discuss here the impact of these limits on different models of neutrino emission from the Crab. Figure 5 summarizes a number of the predicted fluxes described in the introduction of this paper and shows where the 40-string configuration limits stand [35]. Upper limits are defined at the 90% confidence level (CL) using the method of Feldman & Cousins [52]. The green line (solid) corresponds to the flux predicted in [12] based on the γ-ray spectrum measured by H.E.S.S.
and the corresponding upper limit (dashed). The black line represents the estimated flux based on the resonant cyclotron absorption model proposed in [3] for the case of a wind Lorentz factor of Γ = 10^7 and the most optimistic case of the effective target density. The red and blue lines represent the two fluxes predicted in [7] for the cases of linear and quadratic proton acceleration, respectively. The most optimistic version of this model (for both linear and quadratic proton acceleration) can be rejected at more than 90% CL using the time-integrated data from the 40-string configuration, thereby constraining the value of the charge depletion fraction. Figure 5: Predicted fluxes and upper limits based on the IceCube 40-string configuration for several models of the Crab. Solid lines indicate the predicted flux and dotted lines the corresponding upper limit at 90% CL. The green lines are the predicted flux and corresponding upper limit based on the model proposed in [12]. The red and blue lines correspond to the model in [7] (Amato et al.) for the cases of linear (1) and quadratic (2) proton acceleration. The black line represents the estimated flux for the most optimistic model proposed in [3] (Link & Burgio), based on the resonant cyclotron absorption model, and its corresponding upper limit. Conclusions Searches for neutrinos in coincidence with the Sep. 2010 Crab flare have been presented in this paper. The data used were taken with the 79-string configuration of IceCube. This is the first analysis of data taken with this configuration and represents the first rapid-response analysis of IceCube to an astronomical event such as the flaring of an otherwise steady standard-candle source. Two different approaches to event selection have been followed: one using direct cuts on reconstruction quality variables, optimized for discovery of E^-2 neutrino spectra, and the other based on a multivariate analysis, optimized for discovery at lower energies, which is important for galactic sources that have soft spectra with cut-offs at TeV energies. Both data sets, however, showed a background under-fluctuation during the time interval considered. The corresponding upper limits for generic neutrino spectra have been given for the flaring state of the Crab. Assuming isotropic emission from the shock (even if this may not be the case for a highly relativistic pulsar wind), our limit for an E^-2 spectrum corresponds to a neutrino luminosity constraint for the flare state of about 2 × 10^35 erg s^-1, and about 1.5 × 10^36 erg s^-1 if a neutrino cut-off at 1 TeV is assumed. In both cases the resulting neutrino luminosity constraint is about 2 to 3 orders of magnitude lower than the spin-down luminosity of the pulsar and comparable to the peak isotropic γ-ray luminosity of ∼5 × 10^35 erg s^-1 measured by AGILE [17] in the energy range from 0.1 to 10 GeV. In addition to the flare analysis, we have presented the current best limits set by IceCube on different models of neutrino emission from the Crab Nebula. These limits are based on the time-integrated analysis of IceCube with the 40-string configuration of the detector. The upper regions of the most optimistic models can be rejected at more than 90% CL, providing useful constraints on adjustable parameters of these models.
Taking the neutrino spectrum derived from the γ-ray observations of the Crab, the constraint on the neutrino luminosity for the steady emission of the Crab is 1 × 10^35 erg s^-1, a factor of ∼1.7 larger than the γ-ray luminosity obtained from the spectrum measured in Ref. [13] integrated over the energy range from 400 GeV to 40 TeV. In the future the IceCube detector will combine datasets from different detector configurations. When the livetimes of the 40-string configuration data and of the full detector are summed, the sensitivity will improve by about a factor of five, making this search considerably more constraining.
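As a rough illustration of how a differential flux limit translates into luminosity figures of the kind quoted above, the short sketch below integrates an assumed E^-2 flux upper limit, dN/dE = Φ^90% (E/TeV)^-2, between E_min and E_max and multiplies by 4πd^2 with d = 1850 pc, following the prescription given in the Table 2 caption. The normalization value in the usage comment is a placeholder, not a result from the paper.

```python
# Illustrative only: convert a differential E^-2 flux upper limit into an
# isotropic neutrino luminosity, L = 4 pi d^2 * Integral of E * dN/dE dE, as
# described in the text. The example flux normalization is a placeholder.
import numpy as np

PC_TO_CM = 3.0857e18   # one parsec in cm
TEV_TO_ERG = 1.602     # one TeV in erg

def luminosity_limit(phi90_tev_cm2_s, e_min_tev, e_max_tev, distance_pc=1850.0):
    """Luminosity (erg/s) for dN/dE = phi90 * (E/TeV)^-2 between e_min and e_max."""
    # Integral of E * phi90 * (E/TeV)^-2 dE = phi90 * TeV^2 * ln(e_max / e_min)
    energy_flux_tev = phi90_tev_cm2_s * np.log(e_max_tev / e_min_tev)  # TeV cm^-2 s^-1
    area = 4.0 * np.pi * (distance_pc * PC_TO_CM) ** 2                 # cm^2
    return energy_flux_tev * TEV_TO_ERG * area                          # erg s^-1

# Example with a placeholder normalization of 1e-11 TeV^-1 cm^-2 s^-1:
# print(luminosity_limit(1e-11, e_min_tev=0.4, e_max_tev=40.0))
```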
2011-06-20T15:21:33.000Z
2011-06-17T00:00:00.000
{ "year": 2011, "sha1": "68891a2e8e0a77c45fa3e0133ab40234129baf3e", "oa_license": null, "oa_url": "https://bib-pubdb1.desy.de/record/139358/files/1106.3484v2.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "eeb235ce8c7b09d8c04fd86b51f5c4ce05165cea", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
10228412
pes2o/s2orc
v3-fos-license
Statistical Machine Reordering Reordering is currently one of the most important problems in statistical machine translation systems. This paper presents a novel strategy for dealing with it: statistical machine reordering (SMR). It consists of using the powerful techniques developed for statistical machine translation (SMT) to translate the source language (S) into a reordered source language (S'), which allows for an improved translation into the target language (T). The SMT task changes from S2T to S'2T, which leads to a monotonized word alignment and shorter translation units. In addition, the use of classes in SMR helps to infer new word reorderings. Experiments are reported on the EsEn WMT06 task and the ZhEn IWSLT05 task and show significant improvements in translation quality. Introduction During the last few years, SMT systems have evolved from the original word-based approach (Brown et al., 1993) to phrase-based translation systems (Koehn et al., 2003). In parallel to the phrase-based approach, the use of bilingual n-grams gives comparable results, as shown by Crego et al. (2005a). Two basic issues differentiate the n-gram-based system from the phrase-based one: training data are monotonously segmented into bilingual units; and the model considers n-gram probabilities rather than relative frequencies. This translation approach is described in detail by Mariño et al. (2005). The n-gram-based system follows a maximum entropy approach, in which a log-linear combination of multiple models is implemented (Och and Ney, 2002), as an alternative to the source-channel approach. In both systems, introducing reordering capabilities is of crucial importance for certain language pairs. Recently, new reordering strategies have been proposed in the SMT literature, such as the reordering of each source sentence to match the word order in the corresponding target sentence, see Kanthak et al. (2005) and Crego et al. (2005b). Similarly, Matusov et al. (2006) describe a method for simultaneously aligning and monotonizing the training corpus. The main problems of these approaches are: (1) the fact that the proposed monotonization is based on the alignment and cannot be applied to the test sets, and (2) the lack of reordering generalization. This paper presents a reordering approach called statistical machine reordering (SMR) which improves the reordering capabilities of SMT systems without incurring any of the problems mentioned above. SMR is a first-pass translation performed on the source corpus, which converts it into an intermediate representation in which source-language words are presented in an order that more closely matches that of the target language. SMR and SMT are performed using the same modeling tools as n-gram-based systems but using different statistical log-linear models. In order to be able to infer new reorderings we use word classes instead of the words themselves as the input to the SMR system. In fact, the use of classes to help in the reordering is a key difference between our approach and standard SMT systems. This paper is organized as follows: Section 2 outlines the baseline system. Section 3 describes the reordering strategy in detail. Section 4 presents and discusses the results, and Section 5 presents our conclusions and suggestions for further work. N-gram-based SMT System This section briefly describes the n-gram-based SMT system, which uses a translation model based on bilingual n-grams.
It is actually a language model of bilingual units, referred to as tuples, which approximates the joint probability between source and target languages by using bilingual n-grams (de Gispert and Mariño, 2002). Bilingual units (tuples) are extracted from any word alignment according to the following constraints: 1. a monotonous segmentation of each bilingual sentence pair is produced, 2. no word inside the tuple is aligned to words outside the tuple, and 3. no smaller tuples can be extracted without violating the previous constraints. As a result of these constraints, only one segmentation is possible for a given sentence pair (a toy sketch of this extraction procedure is given below). Figure 1 presents a simple example which illustrates the tuple extraction process. Two important issues regarding this translation model must be considered. First, it often occurs that a large number of single-word translation probabilities are left out of the model. This happens for all words that are always embedded in tuples containing two or more words. Consider for example the word "ice-cream" in Figure 1. As seen from the Figure, "ice-cream" is embedded in tuple t_6. If a similar situation is encountered for all occurrences of "ice-cream" in the training corpus, then no translation probability for an independent occurrence of this word will exist. To overcome this problem, the tuple 4-gram model is enhanced by incorporating 1-gram translation probabilities for all the embedded words detected during the tuple extraction step. These 1-gram translation probabilities are computed from the intersection of the source-to-target and target-to-source alignments. The second issue has to do with the fact that some words linked to NULL end up producing tuples with NULL source sides. Consider for example the tuple t_3 in Figure 1. Since no NULL is actually expected to occur in translation inputs, this type of tuple is not allowed. Any target word that is linked to NULL is attached either to the word that precedes it or to the word that follows it. To determine which, we use the IBM Model 1 probabilities, see Crego et al. (2005a). In addition to the bilingual n-gram translation model, the baseline system implements a log-linear combination of four feature functions, which are described as follows: • A target language model. This feature consists of a 4-gram model of words, which is trained from the target side of the bilingual corpus. • A word bonus function. This feature introduces a bonus based on the number of target words contained in the partial-translation hypothesis. It is used to compensate for the system's preference for short output sentences. • A source-to-target lexicon model. This feature, which is based on the lexical parameters of the IBM Model 1 (Brown et al., 1993), provides a complementary probability for each tuple in the translation table. These lexicon parameters are obtained from the source-to-target alignments. • A target-to-source lexicon model. Similarly to the previous feature, this feature is based on the lexical parameters of the IBM Model 1 but, in this case, these parameters are obtained from the target-to-source alignments. All these models are combined in the decoder. Additionally, the decoder allows for a non-monotonous search with the following distortion model: • A word distance-based distortion model, where d_k is the distance between the first word of the k-th tuple (unit) and the last word + 1 of the (k−1)-th tuple. Distances are measured in words, referring to the source side of the units.
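The toy sketch below illustrates the tuple extraction constraints. It is not the authors' implementation: it simply walks a word-aligned sentence pair from left to right and emits the smallest monotone bilingual segments such that no word inside a segment is aligned to a word outside it.

```python
# Toy sketch (not the authors' code) of monotone tuple extraction from a word
# alignment. `alignment` is a set of (source_index, target_index) links for one
# sentence pair; the function returns minimal monotone bilingual segments that
# are closed under the alignment.
def extract_tuples(src_len, tgt_len, alignment):
    tuples = []
    s_start, t_start = 0, 0
    while s_start < src_len and t_start < tgt_len:
        s_end, t_end = s_start + 1, t_start + 1
        changed = True
        while changed:  # grow the segment until it is closed under the alignment
            changed = False
            for s, t in alignment:
                if s_start <= s < s_end and t >= t_end:
                    t_end, changed = t + 1, True
                if t_start <= t < t_end and s >= s_end:
                    s_end, changed = s + 1, True
        tuples.append(((s_start, s_end), (t_start, t_end)))
        s_start, t_start = s_end, t_end
    if tuples and (s_end < src_len or t_end < tgt_len):
        (ss, _), (ts, _) = tuples[-1]
        tuples[-1] = ((ss, src_len), (ts, tgt_len))  # absorb trailing unaligned words
    return tuples

# A fully crossing 3x3 alignment (e.g. "compromiso solo podria" aligned to
# "only possible compromise") yields a single three-word tuple:
# extract_tuples(3, 3, {(0, 2), (1, 0), (2, 1)}) -> [((0, 3), (0, 3))]
```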
To reduce the computational cost, we place limits on the search using two parameters: the distortion limit (the maximum distance, measured in words, that a tuple is allowed to be reordered, m) and the reordering limit (the maximum number of reordering jumps in a sentence, j). This distortion feature is independent of the reordering approach presented in this paper, so the two can be used simultaneously. In order to combine the models in the decoder suitably, an optimization tool is needed to compute the log-linear weights for each model. Statistical Machine Reordering As mentioned in the introduction, SMR and SMT are based on the same principles. Here, we give a detailed description of the proposed SMR reordering approach. Concept The aim of SMR is to use an SMT system to deal with reordering problems. The SMR system can therefore be seen as an SMT system which translates from an original source language (S) to a reordered source language (S'), given a target language (T). The translation task then changes from S2T to S'2T. The main difference between the two tasks is that the latter allows for: (1) a monotonized word alignment, and (2) a higher quality monotonized translation. Figure 2 shows the SMR block diagram. The input is the initial source sentence (S) and the output is the reordered source sentence (S'). There are three blocks inside SMR: (1) class replacement; (2) the decoder, which requires the translation model; and (3) the block which reorders the original sentence using the indexes given by the decoder. Figure 2 also traces an example through these blocks; its reordered output reads "El sólo podría compromiso mejorar". Training For the reordering translation, we used an n-gram-based SMT system (and considered only the translation model). Figure 3 shows the block diagram of the training process of the SMR translation model, which is a bilingual n-gram-based model. The training process uses the training source and target corpora and consists of the following steps: 1. Determine source and target word classes. 2. Align the parallel training sentences at the word level in both translation directions. Compute the union of the two alignments to obtain a symmetrized many-to-many word alignment. 3. Extract the bilingual translation units: (a) From the union word alignment, extract bilingual S2T tuples (i.e. source and target fragments) while maintaining the alignment inside the tuple. As an example of a bilingual S2T tuple consider: only possible compromise # compromiso sólo podría # 0-1 1-1 1-2 2-0, as shown in Figure 4, where the different fields are separated by # and correspond to: (1) the target fragment; (2) the source fragment; and (3) the word alignment (in this case, the fields that respectively correspond to a target and a source word are separated by −). (b) Modify the many-to-many word alignment of each tuple to many-to-one. If one source word is aligned to two or more target words, the most probable link given IBM Model 1 is chosen, while the others are omitted (i.e. the number of source words is the same before and after the reordering translation). In the above example, the tuple would be changed to: only possible compromise # compromiso sólo podría # 0-1 1-2 2-0, as P_ibm1(only, sólo) is higher than P_ibm1(possible, sólo). (c) From the bilingual S2T tuples (with many-to-one alignment inside), extract bilingual S2S' tuples (i.e. the source fragment and its reordering). As in the example: compromiso sólo podría # 1 2 0, where the first field is the source fragment, and the second is the reordering of these source words.
(d) Eliminate tuples whose source fragment consists of the NULL word. (e) Replace the words of each tuple source fragment with the classes determined in Step 1. 4. Compute the bilingual language model of the bilingual S2S' tuple sequence, composed of the source fragment (in classes) and its reordering. Once the translation model is built, the original source corpus S is translated into the reordered source corpus S' with the SMR system, see Figure 2. The reordered training source corpus and the original training target corpus are used to train the SMT system (as explained in Section 2). Finally, with this system, the reordered test source corpus is translated. Evaluation Framework In this section, we present experiments carried out using the EsEn WMT06 and the ZhEn IWSLT05 parallel corpora. We detail the tools which have been used and the corpus statistics. Tools • The word alignments were computed using the GIZA++ tool (Och, 2003). • The word classes were determined using 'mkcls', a tool freely available with GIZA++. • The optimization tool used for computing the log-linear weights (see Section 2) is based on the simplex method (Nelder and Mead, 1965). Corpus Statistics Experiments were carried out on the Spanish-English task of the WMT06 evaluation (EuroParl corpus) and on the Chinese-to-English task of the IWSLT05 evaluation (BTEC corpus). The former is a large-corpus task, whereas the latter is a small-corpus translation task. Tables 1 and 2 show the main statistics of the data used, namely the number of sentences, words, vocabulary, and mean sentence lengths for each language. Units In this section, statistics of the translation units of both approaches (S2T and S'2T) are shown (using the ZhEn task). All the experiments in this section were carried out using 100 classes in the SMR step. Table 3 shows the vocabulary of bilingual n-grams and embedded words in the translation model. Once the reordering translation has been computed, the alignment becomes more monotonic. It is commonly known that non-monotonicity poses difficulties for word alignment. Therefore, when the alignment becomes more monotonic, we expect an improvement in the alignment and, therefore, in the translation. Here, we observe a significant enlargement of the number of translation units, which leads to a growth of the translation vocabulary. We also observe a decrease in the number of embedded words (around 20%). From Section 2, we know that the probability of embedded words is estimated independently of the translation model. Reducing the number of embedded words allows for a better estimation of the translation model. Figure 5 shows the histogram of the tuple sizes for the two approaches. We observe that the number of tuples is similar for lengths above 5. However, there is a greater number of shorter units in the case of SMR+NB (shorter units lead to a reduction in data sparseness). Table 4 shows the tuples used to translate the test set (total number and vocabulary). Note that the number of tuples and the vocabulary used to translate the test set are significantly greater after the reordering translation. Results Here, we introduce the experiments that were carried out in order to evaluate the influence of the SMR approach on both tasks, EsEn and ZhEn. The log-linear translation model was optimized with the simplex algorithm by maximizing the BLEU score. The evaluation was carried out using references and translations in lowercase and, in the ZhEn task, without punctuation marks.
We studied the influence of the proposed SMR approach on the n-gram-based SMT system described above, using a monotonous search (NBm, the monotonous baseline configuration) in the two tasks and a non-monotonous search (NBnm, the non-monotonous baseline configuration) in the ZhEn task. When allowing for reordering in the SMT decoder, the distortion limit (m) and the reordering limit (j) (see Section 2) were empirically set to 5 and 3, as they showed a good trade-off between quality and efficiency. Both systems include the four features explained in Section 2: the language model, the word bonus, and the source-to-target and target-to-source lexicon models. Tables 5 and 6 show the results on the test set. The former corresponds to the influence of the SMR system on the EsEn task (NBm), whereas the latter corresponds to the influence of the SMR system on the ZhEn task (NBm and NBnm) (Table 6: Results on the test set of the ZhEn task using a monotonous and a non-monotonous search). Discussion Both BLEU and NIST coherently increase after the inclusion of the SMR step when 100 classes are used. The improvement in translation quality can be explained as follows: • SMR takes advantage of the use of classes and correctly captures word reorderings that are missed by the standard SMT system. In addition, the use of classes allows new reorderings to be inferred. • The new task S'2T becomes more monotonous. Therefore, the translation units tend to be shorter and the SMT system performs better. The gain obtained in the SMR+NBnm case indicates that the reordering provided by the SMR system and the non-monotonous search are complementary. It means that the output of the SMR could still be further monotonized. Note that the ZhEn task has complex word reorderings. These preliminary results also show that SMR itself provides improvements beyond those provided by the non-monotonous search. Conclusions and Further Research In this paper we have mainly dealt with the reordering problem for an n-gram-based SMT system. However, our approach could be used similarly for a phrase-based system. We have addressed the reordering problem as a translation from the source sentence to a monotonized source sentence. The proposed SMR system is applied before a standard SMT system. The SMR and SMT systems are based on the same principles and share the same type of decoder. In extracting bilingual units, the change of order performed on the source sentence has allowed the modeling of the translation units to be improved (shorter units mean a reduction in data sparseness). Also, note that the SMR approach maintains the coherence between the changes of order applied to the training and test source corpora. Performing reordering as a preprocessing step, independently of the SMT system, allows for a more efficient final system implementation and a quicker translation. Additionally, using word classes helps to infer unseen reorderings. These preliminary results show consistent and significant improvements in translation quality. As further research, we would like to add extra features to the SMR system and study new types of classes for the reordering task.
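As an illustration of the SMR application step described in Section 3, the hypothetical sketch below replaces source words by their classes, looks up a reordering (a permutation of local positions) learned for that class sequence, and permutes the original words accordingly. The greedy left-to-right segmentation, the table lookup standing in for the n-gram decoder, and the toy class assignments are all assumptions, not the authors' decoder.

```python
# Illustrative sketch of the SMR application step: map words to classes, look
# up a learned permutation for the class n-gram, and reorder the words.
def smr_reorder(sentence, word2class, reorder_table, max_len=4):
    words = sentence.split()
    classes = [word2class.get(w, "UNK") for w in words]
    out, i = [], 0
    while i < len(words):
        # longest class n-gram with a known reordering, falling back to length 1
        for n in range(min(max_len, len(words) - i), 0, -1):
            key = tuple(classes[i:i + n])
            perm = reorder_table.get(key, tuple(range(n)) if n == 1 else None)
            if perm is not None:
                out.extend(words[i + j] for j in perm)
                i += n
                break
    return " ".join(out)

# Toy example with made-up classes and one learned reordering rule that mirrors
# the paper's S2S' tuple "compromiso solo podria # 1 2 0":
word2class = {"El": "D", "compromiso": "N", "sólo": "A", "podría": "V", "mejorar": "V"}
reorder_table = {("N", "A", "V"): (1, 2, 0)}
print(smr_reorder("El compromiso sólo podría mejorar", word2class, reorder_table))
# -> "El sólo podría compromiso mejorar"
```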
2014-07-01T00:00:00.000Z
2006-07-22T00:00:00.000
{ "year": 2006, "sha1": "9026d878a9753bc90f089a8642cddb9f691e95a3", "oa_license": null, "oa_url": "http://dl.acm.org/ft_gateway.cfm?id=1610086&type=pdf", "oa_status": "BRONZE", "pdf_src": "ACL", "pdf_hash": "d879aef65885cda80226977aac4af6a28a3c9795", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }