On the internal energy of the classical two-dimensional one-component-plasma

We describe a new semi-phenomenological approach to estimating the internal energy of the classical one-component plasma in two dimensions. This approach reproduces the Debye-Hückel asymptote in the limit of weak coupling and the ion-disc asymptote in the limit of strong coupling, and provides a reasonable interpolation between these two limits. The present analytic results are compared with those from other approximations as well as with existing data from numerical simulations.

I. INTRODUCTION

Although the thermodynamic properties of the OCP have been studied extensively over decades, simple physically motivated approaches remain of considerable interest [4,6,7]. The purpose of this paper is to discuss a simple approach to estimating the internal energy of the two-dimensional (2D) classical OCP in a wide parameter regime.

In two dimensions, two different systems are actually referred to as the OCP. The first is characterized by the conventional 3D Coulomb interaction potential (∝ 1/r), but with the particle motion restricted to a 2D surface. This system has been used as a first approximation in the description of electron layers bound to the surface of liquid dielectrics and of inversion layers in semiconductor physics [2]. It is also relevant to colloidal and complex (dusty) plasma monolayers in the regime of weak screening [3,5,8]. In the second system, the interaction potential is defined via the 2D Poisson equation and scales logarithmically with distance (∝ −ln r). Experimental realizations of such a system are less obvious, but it has nevertheless received significant attention because of various field-theoretical models [2] and the existence of exact analytic solutions for some special cases. Our present paper is restricted to this latter case of logarithmic interaction in 2D. Note that both OCP systems (with Coulomb and logarithmic interactions) represent a very important limiting case of particle systems with extremely soft repulsive interactions, and they share some common thermodynamic properties (for a recent example see Ref. 9).

The system studied is characterized by the particle density n and the temperature T (in the following, temperature is measured in energy units, i.e., k_B = 1). The interaction potential between two particles at a distance r from one another follows from the solution of the 2D Poisson equation around a central test particle and is logarithmic, V(r) = −e² ln(r/L), where L is an arbitrary scaling length. It is common [10] to set L = a, where a = (πn)^(−1/2) is the 2D Wigner-Seitz radius. The strength of the interparticle interactions is measured by the coupling parameter Γ = e²/T, which does not depend on the particle density (separation) in the case considered (as already mentioned, we do not consider here 2D systems of particles interacting via the conventional 3D Coulomb potential, which have also been studied extensively in the literature [11,12]). As Γ increases, the OCP shows a transition from a weakly coupled gaseous regime (Γ ≪ 1) to a strongly coupled fluid regime (Γ ≫ 1), and it crystallizes into the triangular lattice near Γ ≃ 135-140 [10,13]. We show below that a unified hybrid approach can be constructed that allows one to estimate the internal energy of the 2D OCP across these coupling regimes.
II. LINEAR DEBYE-HÜCKEL (DH) APPROXIMATION

The solution of the linearized Poisson-Boltzmann equation around a test particle is

    φ(r) = e K₀(k_D r),

where K₀(x) is the zero-order modified Bessel function of the second kind and k_D = (2πne²/T)^(1/2) is the inverse screening length. Note the relation a k_D = √(2Γ). The reduced excess energy (that in excess of a system of non-charged particles) can be evaluated from

    u_ex = U_ex/NT = (e/2T) lim_{r→0} [φ(r) + e ln(r/a)],    (3)

where N is the number of particles (N → ∞ in the thermodynamic limit). This corresponds to the Debye-Hückel (DH) approximation for the weakly coupled (Γ ≪ 1) limit. With the help of the expansion K₀(x) ≃ −ln(x/2) − γ for x ≪ 1, we obtain

    u_DH = −(Γ/4)[ln(Γ/2) + 2γ],    (4)

where γ ≃ 0.57721 is Euler's constant. This is the well-known DH result [10,14], which provides an accurate description only in the limit of extremely weak coupling.

III. DEBYE-HÜCKEL PLUS HOLE (DHH) APPROXIMATION

To extend the applicability of the DH approach to the moderately coupled OCP in 3D, the simple phenomenological "Debye-Hückel plus hole" (DHH) approximation was proposed [6,15]. The main idea behind the DHH approximation is that the exponential particle density must be truncated close to a test particle in order to keep the density from becoming negative upon linearization. The DHH approach has recently been applied to Yukawa systems in 3D [16]. Here we demonstrate how it can be implemented for the 2D OCP.

The potential inside the hole (a disc in the 2D case) of radius h satisfies the Poisson equation with only the neutralizing background as a source. The solution can be written as

    φ_in(r) = A₀ − e ln(r/a) + A₂ r²,

where A₂ = e/2a². Outside the hole, the potential satisfies the linearized Poisson-Boltzmann equation, so that

    φ_out(r) = A₁ K₀(k_D r).    (7)

The two solutions should be matched at r = h, requiring φ_in(h) = φ_out(h) = T/e (the last condition ensures that the particle density vanishes at the hole boundary in the linear approximation) and φ′_in(h) = φ′_out(h). Unlike the 3D case, where an analytical solution exists, in the 2D case a numerical solution is required for the reduced hole radius z = h/a; a minimal numerical sketch of this matching procedure is given below. The numerical solution for z(Γ) is shown in FIG. 1. The reduced excess energy can be evaluated using equation (3), which yields u_DHH = eA₀/2T. Thus, the DHH approximation in 2D yields

    u_DHH = 1/2 + (Γ/2) ln z − (Γ/4) z².    (9)

In the limit Γ ≪ 1, Eq. (9) reduces to the DH result of Eq. (4), but it remains adequate at much higher Γ than the DH approach does. For example, in the special case Γ = 2, exact results can be obtained analytically [17,18]. The exact excess energy at this point is u_exact(2) = −γ/2 ≃ −0.28861 [17]. The DHH value is very close to that, u_DHH(2) ≃ −0.29324, while the DH value is considerably below the exact one, u_DH(2) ≃ −0.57721. In the strongly coupled regime Γ ≫ 1, the DHH approximation yields the correct scaling u_ex ∝ Γ, but the coefficient of proportionality is incorrect (−1/4 instead of ≃ −3/8). In Figure 2 we compare the energies obtained using the DHH approach with those obtained using Monte Carlo (MC) [10] and molecular dynamics (MD) [13] computer simulations.
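The two matching conditions reduce to a single transcendental equation for the reduced hole radius, 1/z − z = (√(2Γ)/Γ) K₁(√(2Γ) z)/K₀(√(2Γ) z). The following is a minimal numerical sketch of this procedure in Python, assuming the forms of φ_in and φ_out reconstructed above; it reproduces the value u_DHH(2) ≃ −0.293 quoted in the text.

    import numpy as np
    from scipy.special import k0, k1
    from scipy.optimize import brentq

    def z_of_gamma(gamma):
        # derivative matching at r = h, with k_D expressed in units of 1/a
        kd = np.sqrt(2.0 * gamma)
        f = lambda z: (1.0 / z - z) - (kd / gamma) * k1(kd * z) / k0(kd * z)
        # this bracket is adequate for moderate coupling; it may need
        # widening for gamma well below ~0.1
        return brentq(f, 1e-9, 1.0 - 1e-12)

    def u_dhh(gamma):
        # u_DHH = e*A0/(2T), with A0 fixed by the condition phi_in(h) = T/e
        z = z_of_gamma(gamma)
        return 0.5 + 0.5 * gamma * np.log(z) - 0.25 * gamma * z * z

    print(u_dhh(2.0))   # ~ -0.2932; the exact value at Gamma = 2 is -0.28861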
IV. ION DISC MODEL (IDM)

The ion disc model (IDM) for the 2D OCP is an analog of the ion sphere model (ISM) for the 3D OCP [2,19] (which can also be generalized to Yukawa systems [20]). The main idea of this approximation is that in the regime of strong coupling the particles repel each other and form a regular structure with an interparticle spacing of order a. Each particle can be considered as restricted to a cell (a disc in 2D) of radius a, filled with the neutralizing background. The cells are charge neutral and do not overlap, and hence the potential energy of the system is just the sum of the potential energies of the cells. The latter is readily calculated from purely electrostatic considerations [7]. The result is

    u_IDM = −(3/8)Γ.    (10)

This is very close to the static component of the actual excess energy of the 2D OCP in both the strongly coupled fluid and the solid phase. For instance, the Madelung constant of the 2D OCP forming the triangular lattice is M = −0.37438Γ. The thermal component of the excess energy, which is close to 1.0 (in reduced units) at strong coupling, can also be added [7,21]. It has been proven mathematically that Eq. (10) provides a lower bound on the excess internal energy in the thermodynamic limit [22]. The IDM asymptote is also shown in FIG. 2.

V. HYBRID DHH + IDM APPROXIMATION

This construction is analogous to that of the DHH + ISM approach for the 3D OCP that we proposed recently [23]. We consider a test particle, along with the piece of the neutralizing background it occupies (a disc of radius h), as a new compound particle. The internal energy of such a compound particle consists of two parts: the energy of a uniformly charged disc of radius h and charge q = −e(h/a)², and the energy of a charge e placed at the center of such a disc. Solving the Poisson equation inside and outside the disc and matching the solutions, we get for the energy of the uniformly charged disc

    U_disc = q²[1/8 − (1/2) ln(h/a)].

The energy of a charge e placed at the center of such a disc is

    U_c = eq[1/2 − ln(h/a)].

The energy of the compound particle is then u_cp = (U_disc + U_c)/T. In the limit of strong coupling, the effective charge of the compound particle tends to zero and, therefore, its internal energy should be an adequate measure of the excess energy of the whole system (per particle). In this limit z → 1 and u_cp ≃ −(3/8)Γ, which coincides with the static IDM result.

The energy associated with the remaining interaction between the compound particles (they are not charge neutral, since z ≤ 1) can be estimated from the 2D energy equation, where V_eff(r) = −e²_eff ln(r/a) is the effective interaction potential with e_eff = e + q = e(1 − z²), and g(r) is the radial distribution function. Since the effective charge e_eff is considerably reduced compared to the actual charge e, especially in the strong-coupling regime, it is not unreasonable to use an expression originating from the linearized Boltzmann relation, g(r) ≃ 1 − e_eff φ_out(r)/T, where φ_out is given by Eq. (7) of the DHH approximation. This yields the particle-particle contribution u_pp, Eq. (15). Numerical integration of Eq. (15) is generally required, but it can be shown to reduce to the DH result in the weakly coupled limit (Γ ≪ 1).

Our estimate for the OCP excess energy within the hybrid DHH + IDM approximation is then simply

    u_hyb(Γ) = u_cp(Γ) + u_pp(Γ).    (16)

This is the main result of the present paper. Eq. (16) reduces to the DH and IDM asymptotes in the respective limits of weak and strong coupling. The quality of the interpolation between these two limits is illustrated in FIG. 2 (red solid curve); a numerical sketch of Eq. (16) follows the figure captions below. The approach clearly somewhat underestimates the excess energy, especially in the transitional regime between weak and strong coupling. Nevertheless, the agreement with the accurate numerical data from MC and MD simulations is reasonable, especially taking into account the simplicity of the approach.

FIG. 1. Reduced radius of the hole, z = h/a, around the test particle as a function of the coupling parameter Γ in the 2D OCP.

FIG. 2. Reduced excess energy u_ex/Γ versus the coupling parameter Γ for the 2D OCP. Symbols are the results from MC [10] and MD [13] simulations. The dashed curves correspond to the DH, DHH, and ID approximations, as indicated in the figure. The (red) solid curve shows the result of the hybrid DHH + ID approximation of Eq. (16).
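A numerical sketch of the hybrid estimate, Eq. (16), is given below under the forms reconstructed in this section: the compound-particle energy is assembled from U_disc and U_c with z = h/a taken from the DHH matching (the function z_of_gamma() of the earlier sketch), and u_pp is evaluated by quadrature with g(r) − 1 = −e_eff φ_out(r)/T applied outside the hole. The treatment of the hole interior in the published Eq. (15) may differ in detail, so the numbers here are indicative only.

    import numpy as np
    from scipy.special import k0
    from scipy.integrate import quad

    def u_cp(gamma):
        # compound particle: uniformly charged disc plus the central charge,
        # in units of T (hence the overall factor of gamma)
        z = z_of_gamma(gamma)                      # from the DHH sketch above
        disc = z**4 * (0.125 - 0.5 * np.log(z))    # U_disc/e^2, with q = -e z^2
        centre = -z**2 * (0.5 - np.log(z))         # U_c/e^2
        return gamma * (disc + centre)

    def u_pp(gamma):
        # residual compound-particle interaction with e_eff = e (1 - z^2)
        z = z_of_gamma(gamma)
        kd = np.sqrt(2.0 * gamma)                  # k_D in units of 1/a
        integral, _ = quad(lambda x: x * np.log(x) * k0(kd * x), z, np.inf)
        return gamma * (1.0 - z**2)**3 * integral / k0(kd * z)

    def u_hyb(gamma):
        return u_cp(gamma) + u_pp(gamma)

    for g in (0.5, 2.0, 5.0, 20.0):
        print(g, u_hyb(g) / g)   # tends toward the IDM value -3/8 as coupling grows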
Malaria Journal

Achieving High Coverage of Larval-stage Mosquito Surveillance: Challenges for a Community-based Mosquito Control Programme in Urban Dar es Salaam, Tanzania

Background: Preventing malaria by controlling mosquitoes in their larval stages requires regular, sensitive monitoring of vector populations and of intervention coverage. The study assessed the effectiveness of operational, community-based larval habitat surveillance systems within the Urban Malaria Control Programme (UMCP) in urban Dar es Salaam, Tanzania.

Background

Historically, most vector control efforts for malaria prevention in Africa have focused almost exclusively on adult stages, specifically indoor residual spraying (IRS) [1,2] and insecticide-treated nets (ITN) [3-5]. However, with increasing insecticide resistance [6] and behavioural avoidance by mosquito vectors [7], the development and evaluation of complementary vector control strategies remains a priority. Reviews of the early 20th-century programmes in Brazil, Zambia and Egypt [8-10] have highlighted dramatic reductions of malaria burden achieved by integrated vector management generally and mosquito larval control specifically [11-14]. Application of microbial larvicides, such as Bacillus thuringiensis var. israelensis (Bti), to larval habitats offers a control option that cannot be avoided by mosquitoes [15,16] and that has a low probability of developing resistance, owing to the complex mode of action of the larvicide [17,18]. Furthermore, recent successes in urban Tanzania [19], in the highlands of western Kenya [20] and in Eritrea [21] suggest that larval control may be a valid option for malaria vector control in selected eco-epidemiological settings.

Rapid growth of cities, characterized by a distinctive mix of different social, economic and cultural conditions, is an important feature of contemporary African countries [22-25]. High population density associated with relatively few breeding sites suggests that area-wide application of vector control strategies is more practical and affordable in urban areas [26,27]. Moreover, stronger institutional support, governance and infrastructure offer significant advantages for establishing and sustaining vector control programmes in urban areas. However, the heterogeneity and mobility of the human population render most urban communities less cohesive and therefore difficult to mobilize en masse to achieve impact with public health interventions. Malaria vector proliferation, transmission intensity and burden in urban areas are highly heterogeneous and focal [23,26,28-30]. Despite its growing importance, it is only recently that urban malaria has begun to receive the attention it deserves [23,25,26].

Cities and large towns are regarded as some of the most favourable environments for sustainable mosquito larval control, because mosquito breeding sites are defined and easily located. However, larval control requires quite specific ecological understanding of the major vector species and their distinctive interaction with the local environment on very fine spatial scales [11,31,32]. Additionally, technical understanding of the principles and practice of larvicide application or environmental management, as well as intensive labour under challenging field conditions, is essential [11,31-33]. Sustainable systems for monitoring the abundance and distribution of aquatic mosquito stages are required to enable effective decisions and actions by the managers responsible for such programmes.
This represents a particular challenge in Africa, where the primary vector, Anopheles gambiae, can develop from egg to adult in less than a week, in habitats that can be ephemeral and difficult to detect [34-36]. Larval control for malaria prevention, delivered primarily through human resources mobilized from within local communities, has been recommended to minimize cost and maximize sustainable scalability [31-33,37]. However, given the technical, logistic and coverage requirements of larval control, which are probably greater than those of current priority measures such as insecticide-treated nets or indoor residual spraying, community-led, rather than merely community-based, vector control may be difficult to achieve [31,35,37]. A more sustainable approach might be a blend of vertical and horizontal strategies for the implementation of community-based systems for delivering area-wide control measures. Such an approach might rely on extensive mobilization of community-based labour integrated into vertical management systems implemented by centralized institutions [31,35,37]. It is important to identify and understand the social and environmental factors that influence human behaviour and, consequently, the effectiveness of such programmes.

The Urban Malaria Control Programme (UMCP) in Dar es Salaam was initiated by the Dar es Salaam City Council as a pilot programme to develop sustainable and affordable systems for larval control as part of routine municipal services [19,32,35,37-39]. An in-depth look at the environmental and programmatic determinants of surveillance coverage in this urban environment was conducted to identify strengths, weaknesses and opportunities for improvement.

Study area

Dar es Salaam is Tanzania's biggest and most economically important city, with a current population exceeding 2.5 million inhabitants and a total area of 1,400 km², corresponding to a mean human population density of 2,900 per km² [40]. It is situated between latitudes 6.0°-7.5° S and longitudes 39.0°-39.6° E. The city is divided into three municipalities, Kinondoni, Temeke and Ilala, and each of these municipalities is further divided into wards. The study site comprised the 15 wards, with 614,000 residents [40], included in the Dar es Salaam UMCP [7,19,32,37], covering an area of 55 km², with wards ranging in size from 0.96 to 15 km². All UMCP activities are coordinated by the City Medical Office of Health and are fully integrated into the decentralized administrative system in Dar es Salaam, operating on all six administrative levels of the city: the city council; the three municipal councils it oversees; the 15 wards chosen from those municipalities, containing 67 neighbourhoods referred to as mitaa in Kiswahili (singular mtaa, literally meaning street); and more than 3,000 housing clusters known as ten-cell-units (TCU), each subdivided into a set of plots corresponding largely to housing compounds. The main tasks at the three upper levels are programme management and supervision, whereas mosquito larval surveillance and control are organized at ward level and implemented at the level of TCUs and their constituent plots. In principle, a TCU clusters ten houses with an elected representative known as an mjumbe, but in practice it typically comprises between 20 and 100 houses [41]. The UMCP implements regular surveillance of mosquito breeding habitats as a means to monitor effective coverage of aquatic habitats with microbial larvicides.
Surveillance is applied through a community-based [35] but vertically managed delivery system [37]. The cross-sectional surveys described here to evaluate routine surveillance activities were conducted between the end of June 2007 and January 2008. This period spanned a full dry season and was preceded by a typical rainfall pattern, with a main rainy season from March to June and a much shorter rainy season from October to December.

Routine programmatic larval surveillance by community-based personnel

Community-owned resource persons (CORPs) were recruited through local administrative leaders, including Street Health Committees, and were remunerated at a rate of 3,000 Tanzanian shillings (US$ 2.45) per day through a casual labour system formulated by the municipal councils of Dar es Salaam for a variety of small-scale maintenance tasks such as road cleaning and garbage collection [32,35]. All essential standard operating procedures adopted by the recruited larval surveillance CORPs are described in detail elsewhere [37] and summarized as follows. Over 90 larval surveillance CORPs were actively employed by the UMCP at the time of the survey, each assigned to a defined area of responsibility comprising a specific subset of TCUs within one neighbourhood. These lists of TCUs were initially allocated on the basis of local knowledge of habitat abundance, difficulty of terrain and geographic scale, and were subsequently refined through detailed participatory mapping of the study area [41]. On average, one CORP was responsible for an area of approximately 0.6 km². All CORPs worked under the oversight of a single ward-level supervisor. Each CORP followed a predefined schedule of TCUs, which they were expected to survey on each day of the week. In wards where larviciding was taking place, the schedule of TCUs visited by the surveillance CORPs followed one day after the application of microbial larvicides by a separate set of larval control CORPs [37], so that indicators of operational shortcomings, such as the presence of late-stage (3rd or 4th instar) mosquito larvae, could be reacted to in sufficient time to prevent the emergence of adult mosquitoes. This system of routine mosquito habitat surveillance and larviciding was designed to allow timely interpretation of, and reaction to, entomologic monitoring data.

Qualitative preliminary assessment of community-based larval surveillance

The investigator (PPC) initially conducted three weeks of unscheduled guided walks with 23 of the surveillance CORPs, nominated by the ward supervisor after the investigator reported to their office in the morning. The investigator did not pre-inform the CORPs, nor did he reveal his role and independent status at any time before or during the visits. The investigator and the chosen CORP would leave the ward office and survey the TCUs that the CORP was expected to survey according to the normal predefined schedule for that particular day [37], returning later to report to the ward supervisor. At this stage, the survey was led by the CORPs while the investigator followed passively, covertly observing and recording how the CORPs conducted their routine larval habitat surveillance and prepared their daily reports for submission to the ward supervisor. Specifically, the following information was collected: whether CORPs followed their TCU schedule correctly; whether all TCUs and plots were visited; whether fenced compounds were entered and, if not, why not; how habitats were recorded; how habitats were searched for larvae; and how CORPs interacted with residents.
Where shortcomings in the operational practices of the CORPs, or additional opportunities for improved implementation of their duties, were observed, the CORPs were informally advised by the investigator. This approach was intended to maintain an open, non-authoritative relationship between the investigator and the CORPs, allowing him to observe and understand the operational challenges facing the CORPs and the programme as a whole. A detailed formal analysis of these qualitative observations will be published elsewhere, but an informal appraisal of these observations was used to design the quantitative survey described as follows.

Quantitative cross-sectional evaluation of community-based larval surveillance

A total of 173 TCUs from neighbourhoods distributed across all 15 wards were randomly selected from the list of TCUs in the UMCP study area. A total of 64 CORPs were responsible for these selected TCUs. The investigator accompanied the relevant CORP on guided walks through each TCU one day after their scheduled routine surveillance of that TCU and implemented his own larval habitat surveys following the standard operating procedures [37]. The investigator's results were compared with the CORP's datasheet from the previous day. Every potential habitat found by the CORP in each plot, and any additional habitats identified by the investigator that had not been detected by the surveillance CORPs, were distinguished and recorded using standardized forms (Additional file 1). Habitats were further classified into three habitat categories and 11 constituent habitat types [35] as follows: (1) natural habitats, comprising (i) marshy or swampy areas, (ii) river-beds and (iii) springs or seepages; (2) agricultural artificial habitats, comprising (i) rice paddies, (ii) ridge-and-furrow agriculture (matuta) and (iii) other habitats associated with agriculture; and (3) non-agricultural artificial habitats, comprising (i) drains and ditches, (ii) construction pits, foundations and other excavations, (iii) water storage containers, (iv) tyre tracks and puddles and (v) ponds or pools. Additional information was collected regarding the presence or absence of a fence around each plot and whether or not a particular TCU was targeted with larvicide application at the time it was surveyed. Lastly, records were taken of evidence of a CORP's lack of familiarity with the specific TCU and its plots. Unfamiliarity was assumed if the CORP was not readily able to find his or her way around the TCU or plot, if plot boundaries could not be clearly defined, and/or if residents of the plot were unable to recognise him/her as a regular visitor to the area.

Statistical analyses

All data were entered in coded numeric form and analysed using SPSS 15.0. Associations between the occupancy of different mosquito habitat categories and types by Anopheles and culicine larvae were analysed using multivariate binary logistic regression [42]. Specifically, generalized estimating equations (GEE) were fitted to determine the influence of the CORP's lack of familiarity with the area, the presence of a fence around the plot, and whether larviciding was operational at that time and place upon the proportion of wet habitats reported by CORPs (detection coverage) and the proportion of larvae-containing habitats that CORPs reported as occupied (detection sensitivity), for different habitat categories or types. While all observed habitats were included in the model fits assessing detection coverage, only those found by the investigator to contain larvae were considered in the denominators of the models assessing detection sensitivity. Detection of the wet habitat, or of larval occupancy, by the CORP was treated as the binary outcome variable and was fitted with a binomial distribution and logit link function. CORP identity was treated as the subject variable, and an exchangeable correlation matrix was chosen for the repeated measurements, distinguished by plot identity as the within-subject variable. Differences between frequency distributions were assessed using likelihood-ratio χ² analysis. An illustrative sketch of such a GEE fit is given below.
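As an illustration only, a model of the kind just described could be fitted as follows in Python with statsmodels (in place of the SPSS 15.0 actually used in the study). The data frame, file name and column names (detected, unfamiliar, larviciding, habitat_cat, corp_id) are hypothetical placeholders, not the study's actual variables; the presence of a fence was fitted in a separate model because of its covariance with unfamiliarity (see the results below).

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # hypothetical data: one row per wet habitat found in the cross-sectional
    # survey; detected (0/1) records whether the CORP reported the habitat
    df = pd.read_csv("habitat_survey.csv")   # placeholder file name

    model = smf.gee(
        "detected ~ unfamiliar + larviciding + C(habitat_cat)",
        groups="corp_id",                        # CORP identity as the subject variable
        data=df,
        family=sm.families.Binomial(),           # binary outcome; logit link is the default
        cov_struct=sm.cov_struct.Exchangeable(), # exchangeable working correlation
    )
    result = model.fit()
    print(result.summary())
    print(np.exp(result.params))                 # coefficients expressed as odds ratios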
Habitat characteristics found during cross-sectional evaluation

A total of 8,395 plots were visited during the cross-sectional surveys, 60.0% (5,039) of which were in larviciding areas (Figure 1). Approximately one quarter of these plots (26.8%; 2,253) were behind fences. There was an unequal distribution of fenced plots between the visited larviciding and non-larviciding areas, with the majority of the fenced plots (69.7%; 1,571) recorded in areas where larviciding was taking place. Overall, 3,997 potential mosquito breeding habitats were recorded. Of these, 2,965 (74.2%) contained water at the time of survey. The vast majority of these wet habitats were non-agricultural artificial habitats (90.0%), such as drains, ditches, construction sites, foundations, man-made holes and tyre tracks. The remainder was composed of a small number of natural habitats (7.4%), such as swampy areas with a high groundwater level, riverbeds, seepages and springs, and a few agricultural artificial habitats (2.6%), mainly associated with rice and sweet potato cultivation, crops grown in the ridge-and-furrow systems known as matuta (Table 1).

Almost half (45.6%; 1,351/2,965) of all aquatic habitats were located within fenced plots. One fifth (20.5%; 608/2,965) of all aquatic habitats were recorded in plots with which CORPs clearly appeared to be unfamiliar, and 91.9% (539/608) of these were located behind fences. A large number of wet habitats were surveyed in both larviciding areas (1,895) and non-larviciding areas (1,070), and the proportion of habitats within fenced plots was higher in areas with larviciding than in those without (50.8% (962) versus 36.4% (389), respectively; χ² = 57.3, df = 1, P < 0.001; this comparison is reproduced in the sketch below).

The probability of a habitat containing anopheline larvae depended on habitat category and type (Table 1). Agricultural sites were twice as likely to contain anopheline larvae as natural habitats, whilst the chance of finding larvae in artificial non-agricultural habitats was much lower. Nevertheless, non-agricultural artificial habitats were the most abundant (90%) and therefore constituted 58% (135/231) of all Anopheles-occupied habitats (Table 1). Over one quarter of wet habitats contained culicine larvae (Table 1), with 25.9% (767) and 22.1% (656) inhabited by early and late stages, respectively. Natural and agricultural habitats were equally likely to harbour culicine larvae, whilst the probability of their presence was significantly higher in artificial, non-agricultural habitats (Table 1).
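The fenced-plot comparison quoted above can be verified directly from the published counts. A minimal sketch using the likelihood-ratio (G-test) form of the χ² statistic, matching the likelihood-ratio analysis named in the methods:

    import numpy as np
    from scipy.stats import chi2_contingency

    # wet habitats inside vs outside fenced plots, by intervention status
    table = np.array([[962, 1895 - 962],    # larviciding areas
                      [389, 1070 - 389]])   # non-larviciding areas
    g, p, dof, _ = chi2_contingency(table, correction=False,
                                    lambda_="log-likelihood")
    print(round(g, 1), dof, p)              # 57.3, 1, P < 0.001, as reported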
CORPs' detection of aquatic habitats

CORPs recorded 1,963 wet habitats during their routine surveillance. Seven of these habitats were confirmed by the investigator to be non-existent, suggesting that the CORPs concerned had filled in their surveillance forms without visiting the relevant plots, so these were excluded from the analyses. CORPs therefore correctly recorded two thirds of wet habitats (Table 2). Detection coverage varied significantly between individual CORPs and between different habitat types (P < 0.001 for both, as determined by logistic regression). CORPs were unfamiliar with 20.5% (608) of wet habitats, and 92% (539) of these were located behind fences. Furthermore, the majority of the wet habitats that the CORPs failed to record (61.1%; 619/1,009) were located within fenced plots. Detection coverage differed significantly between habitat types (χ² = 432.8, df = 10, P < 0.001) and categories (Table 3), with artificial non-agricultural habitats 1.6 times more likely to be recorded than others (Table 3). Consistent with the baseline evaluation [35] conducted before the introduction of the current procedures for mapping [41], surveillance and larvicide application [37], the most conspicuous habitat types, such as ponds, rivers, seepages, springs and drains, were more readily recorded, whereas water receptacles were poorly detected (Table 2). Furthermore, the types of habitat that CORPs did not find differed significantly between fenced and unfenced plots: the majority of water storage containers, tyre tracks and artificial pits were located behind fences (Figure 2). The probability of a CORP detecting and recording a wet habitat was similar in larviciding and non-larviciding areas, but was 84% lower if he or she was unfamiliar with the area (Table 3). As mentioned earlier, the vast majority of the sites with which the CORPs were unfamiliar were within fenced plots. The covariance between these two variables (Pearson correlation, r² = 0.40, P < 0.001) implies that the presence of fences around plots contributed to the CORPs' lack of familiarity with plots.

Table 1. The proportion of wet habitats found by the investigator to contain anopheline and culicine larvae; odds ratios (OR) and P values for the likelihood of occupancy were determined with a binary logistic regression treating habitat category or type as potential determinants. N is the total number of wet habitats found during the cross-sectional surveys, while n is the number of anopheline- or culicine-positive habitats found; superscript b marks the reference group for comparing habitat categories and superscript c the reference group for comparing habitat types. CI = confidence interval; NA = not applicable.

Figure 1. Aerial photos of planned (A) and unplanned (B) settlements of urban Dar es Salaam, with ground-based photos of common features of each (C and E versus D and F, respectively). Planned settlements are characterized by relatively wealthy inhabitants, fences, tight security and restricted access, but often contain suitable habitat within spacious plots (E was photographed within the compound seen from the ground in C and from the air in A). Unplanned areas are characterized by dense settlement, scarce space for habitats, almost no fences, and few but often prominent habitats which are readily accessible (F is located at the bottom of the valley pictured from the ground in D and from the air in B).
Although excluded from the model presented in Table 3 because of this covariance, the presence of a fence, when familiarity was excluded from the model, reduced detection coverage by half (OR [95% CI] = 0.49 [0.37-0.65], P < 0.001).

Table 3. The probability that a wet habitat was detected by the CORPs, modelled with a binary distribution and logit link function using generalized estimating equations (GEE), treating intervention status, the CORPs' unfamiliarity with the plots, and habitat category as potential predictors. Superscript a marks the reference group for each variable. CI = confidence interval; CORPs = community-owned resource persons; N = the number of wet habitats found during the cross-sectional surveys; n = the number of wet habitats found by the CORPs during their routine habitat surveys; NA = not applicable; OR = odds ratio.

Anopheline larvae were identified by CORPs in only 29 of the 97 occupied habitats which they had recorded as wet, so overall detection sensitivity was 29.9%. More importantly, they appeared unfamiliar with very few of the anopheline-positive habitats, both those which they reported as wet (5.2%; 5/97) and those which they did not (7.4%; 5/68). It therefore appears likely that failure to report larvae was due to insufficient dipping, examination or training in mosquito identification, rather than to not visiting the site. Notably, the detection sensitivity for culicine larvae in habitats that were reported as wet was much higher, with almost three quarters of the habitats containing these more obvious larvae being successfully identified (Table 4).

Figure 2. Proportions of wet habitats (A) and late-stage Anopheles-positive habitats (B) found by CORPs within fenced (black bars) and unfenced (white bars) plots.

Larval detection sensitivity differed between habitat types for anophelines (χ² = 28.9, df = 10, P = 0.001) and culicines (χ² = 21.6, df = 10, P = 0.016). CORPs more readily detected anopheline larvae in larger, more obvious habitats like drains, riverbeds, ponds and matuta (Table 4). To enable fitting of a logistic model, these types had to be pooled into categories, which had no significant effect. However, the probability of CORPs reporting anopheline larvae occupying a habitat was drastically reduced if the habitat was located in an area where larviciding was ongoing (Table 5). Late-stage Anopheles occupancy was reduced by over 70% in habitats in the intervention areas where the surveillance CORPs actually found and reported the wet habitats (Table 6). Note that no significant reduction of late-stage Anopheles occupancy was observed for habitats in areas not covered by the intervention, regardless of whether the surveillance CORPs found them or not (Table 6).
Discussion

The observation that CORP surveys at this stage of the UMCP's development had detected 66% of all aquatic habitats represents an improvement upon the 41% reported in the baseline surveys [35], but it nevertheless leaves significant room for improvement. The majority of the habitats that were not reported by CORPs, including most of those containing larvae, could be attributed to the CORPs' unfamiliarity with plots and, most importantly, to the presence of a fence. The latter is one of the most prominent features of urban settings, presumably resulting from growing security challenges. Limited access to fenced plots reduces the chances of habitats being found, reported or treated, and undermines the coverage of surveillance and vector control activities.

The fact that 75% of the Anopheles-positive habitats that the CORPs did not find came primarily from three habitat types (puddles, marshes and construction sites), many of which (30.3%) were behind fences, suggests considerable opportunity for improvement through targeted training and increased emphasis upon these habitat types and plot characteristics (Figure 2). Notably, the CORPs more readily reported permanent sites such as ponds and riverbeds than temporary puddles and rice fields, where dipping might be more difficult and detecting larvae requires more effort.

Detection and consequent reporting of late-stage Anopheles larvae is considered an important indicator of successful larval control in programmatic settings, because it is the most practical scalable indicator of the imminent emergence of adult malaria vectors. It is important to note that the CORPs' detection sensitivity for this key indicator was low and clearly not adequate for the monitoring and management of larviciding activities. The observation that CORPs in the larviciding areas detected proportionately fewer habitats with Anopheles larvae than those in non-larviciding areas, despite the higher number of habitats in the former, and even when they had reported the habitats, is particularly interesting. This may be attributed to lower larval density in treated habitats and/or reduced thoroughness among individual CORPs when searching habitats they assume to have been treated. Moreover, biases in the perspectives and CORP supervision practices of the ward supervisors, who had the competing interest of being responsible for both larvicide application and surveillance, may account for these trends. The fact that larval occupancy in areas with larviciding was reduced only where habitats had been found by surveillance CORPs suggests that if surveillance CORPs did not enter a plot or detect a habitat, larviciding CORPs were also less likely to enter the plot and treat it. Although a large number of CORPs were employed and a substantive internal quality control system formed an integral part of the routine protocols of the UMCP [37], it is striking that these did not detect these substantive problems in the front-line surveillance systems.

Table 5. The probability of mosquito larvae being detected by the CORPs, modelled with a binary distribution and logit link function using generalized estimating equations (GEE), treating intervention status and habitat category as potential predictors. Superscript a marks the reference group for each variable. CI = confidence interval; CORPs = community-owned resource persons; N = the number of habitats that were reported to be wet by CORPs during routine habitat surveys and contained larvae during the cross-sectional surveys; n = the number of habitats where CORPs found larvae during their routine habitat surveys; NA = not applicable.

Table 6. The odds of change in late-stage Anopheles habitat occupancy subject to the CORPs' detection of wet habitats and subsequent larvicide application as interacting terms, modelled with a binary distribution and logit link function using generalized estimating equations (GEE). Superscript a marks the reference group for each variable. CI = confidence interval; NA = not applicable; n/N = the proportion of all habitats found to contain late-stage Anopheles larvae by the observations of the CORPs and the investigator; OR = odds ratio.
These findings call for special emphasis upon directed strategies that ensure a more compliant operational team and engage the community in holding these teams accountable, as well as securing area-wide access to plots and compounds.

Conclusion

The full programmatic value of larviciding can only be established through evaluations of sustainable systems that achieve much improved coverage relative to that reported here. The study has shown that unless improved access to fenced plots, and consequently improved detection of aquatic habitats and of the larvae in them, is achieved, the effectiveness of larviciding will remain limited. To implement larval control effectively, we recommend that a less extensive surveillance system, focusing more on internal quality assurance based on accurate and timely reporting, be adopted. The labour-intensive, and therefore expensive, surveillance system implemented during the pilot phase of the UMCP [37] should be abandoned. Instead, rigorous external quality control of the internal process indicators used by implementers will be essential to make such monitoring systems meaningful and effective.
Photo-induced valley currents in strained graphene

Theoretical results are presented showing that the strain-induced anisotropy of the graphene spectrum gives rise to valley currents under illumination by normally incident light. The currents of the two graphene valleys mutually compensate, providing zero net electric current. The magnitude and direction of the valley currents are determined by the parameters of the strain and the light polarization. For not too high photon energies the strain-induced valley currents exceed those due to the intrinsic warping of the graphene spectrum, which suggests the feasibility of strain-mediated valleytronics.

INTRODUCTION

Although the importance of the valley structure of the carrier spectrum in crystals has been recognized since the early days of solid-state physics, the idea of employing the valley degree of freedom as an internal characteristic independent of electric charge and spin was formulated only recently [1]. The related theoretical concepts and first successful experiments suggest the emergence of a new direction called valleytronics. It assumes that in multi-valley crystals non-zero currents in individual valleys can be generated while keeping the total electric current zero. In experiments, valley control has been realized using carrier photo-excitation in the two-valley MoS₂ monolayer [2-4] and in a six-valley Si-based structure [5]. Moreover, the valley Hall effect was recently observed in MoS₂ under valley-selective optical excitation [6].

As the graphene band structure has two inequivalent valleys, this material is a potential candidate for the development of valleytronics. Although there is so far no experimental indication of valley currents in graphene, various approaches to their generation, as well as to valley filtering, have been proposed. The activity was started by the papers [7,8], in which specific valley-dependent edge states of graphene nanoribbons were proposed for valley filtering. Another approach explores the valley Hall effect in graphene with lifted sublattice equivalence [9-11]. After that, a number of further approaches were suggested [12-26]. One of them [23] employs the warping of the graphene spectrum, which is essential at high carrier energies, above 1 eV. Such warping gives rise to valley currents under optical excitation with light propagating normally to the graphene layer.

Importantly, anisotropy of the graphene energy spectrum can arise not only from its intrinsic properties (warping) but also from the application of external strain [27-29]. In this paper we analyze the valley currents of illuminated strained graphene. It is known that the application of strain to graphene conserves the Dirac form of its spectrum but leads to an essential anisotropy of the Fermi velocity. Theoretical and experimental analyses suggest that this behavior is sustained for strain magnitudes as high as 10% [30-33]. According to our results, the application of strain gives rise to greater photo-induced valley currents for mid-infrared or softer illumination than those due to natural warping. In addition, it allows external control of the valley currents in graphene structures with tunable strain parameters. It is worth mentioning that other materials of the graphene family can also possess strain-induced spectrum asymmetry (see, for example, [34], where the spectrum of strained bi-graphene was addressed), which suggests their prospects for strain-controlled valleytronics.
SPECTRUM OF GRAPHENE UNDER UNIFORM STRAIN

The honeycomb crystal lattice of unstrained graphene and the corresponding first Brillouin zone are shown in figures 1(a) and (b), respectively. The Brillouin zone extrema are at the two inequivalent corners of the hexagon, K and K'. The effect of uniform strain on the energy spectrum of graphene was initially explored within the tight-binding approach and first-principles calculations [27,35-42]. The main result was that opening a gap in the energy spectrum requires very high values of strain, of the order of 20%. This means that the energy spectrum remains gapless and cone-like for moderate uniform strains. However, the Dirac points in strained graphene no longer coincide with the edges of the Brillouin zone, K and K'. Moreover, a strong strain-induced anisotropy of the Fermi velocity appears [27-29,35,37].

On the other hand, the properties of intrinsic graphene [43,44] and of graphene subject to various fields [28,45-51] can be addressed on the basis of symmetry considerations. In particular, the k·p Hamiltonian of strained graphene can be constructed [28,47,48]. It results in Dirac-like electron and hole spectra E_{c,v}(k) with anisotropic electron and hole Fermi velocities, Eq. (1). Here k and φ are the absolute value and polar angle of the momentum k, the + and − signs correspond to the conduction and valence bands, and we drop the strain-related momentum and energy shift of the Dirac point, which is inessential for our problem. In terms of the uniaxial and shear (ε_xy) components of the strain, the anisotropy of the Fermi velocity is expressed through the Fermi velocity of unstrained graphene and coupling constants that were determined in [28] by comparison with first-principles calculations. As we see, the energy spectrum of strained graphene is essentially anisotropic: strain breaks not only the equivalence of the k and −k directions but also the symmetry of the electron and hole spectra, as shown in Fig. 2. The corresponding solution for the wave function, Eq. (3), is expressed in terms of the same quantities as Eq. (1).
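To make the origin of the effect concrete, here is a toy numeric sketch (Python/NumPy). It is not the paper's Eq. (1): the parameters v0, a and b are illustrative placeholders rather than the strain constants determined in Ref. [28], and the generation rate is taken to be isotropic. It shows that a purely even (cos 2φ) velocity anisotropy yields no net valley flux, whereas an odd harmonic that changes sign between the valleys produces equal and opposite valley currents that cancel in total:

    import numpy as np

    v0, a, b = 1.0, 0.05, 0.02   # placeholder velocity scale and two
                                 # hypothetical strain-induced harmonics

    def vx_group(phi, valley):
        # x-component of the conduction-band group velocity on the
        # excitation contour of a cone E = hbar*k*v(phi); the odd harmonic
        # b flips sign between the K (valley=+1) and K' (valley=-1) valleys
        v = v0 * (1 + a * np.cos(2 * phi) + valley * b * np.cos(phi))
        dv = v0 * (-2 * a * np.sin(2 * phi) - valley * b * np.sin(phi))
        return v * np.cos(phi) - dv * np.sin(phi)

    phi = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)
    dphi = phi[1] - phi[0]
    # with an isotropic generation rate, the net flux in each valley is the
    # angular average of the group velocity over the excitation contour
    jK = np.sum(vx_group(phi, +1)) * dphi    # proportional to K-valley current
    jKp = np.sum(vx_group(phi, -1)) * dphi   # proportional to K'-valley current
    print(jK, jKp, jK + jKp)   # equal magnitude, opposite sign; total ~ 0
    # setting b = 0 makes both vanish: an even anisotropy alone does not
    # generate a first angular harmonic of the carrier flux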
Photo-generation of valley currents. Before proceeding to a rigorous analysis of the valley currents, let us discuss their physical origin qualitatively. In general, a valley current can appear due to the anisotropy of the carrier group velocity and of the photon-induced transition probabilities. In Fig. 2 we plot the spectra of unstrained (red line) and strained (black line) graphene along the k_y direction for the case of pure shear strain. The carrier kinetics in each valley is described by a kinetic equation, Eq. (4), containing the carrier distribution function f, the recombination and interband photo-generation rates, and the valley index i. We concentrate on the case of moderate temperatures and excitation photon energies below the doubled inter-valley phonon energy (about 157 meV, the zone-edge transverse phonon mode) [52]. In this case we can neglect inter-valley scattering, and the kinetic equations for the two valleys decouple. In the following we analyze the kinetic equation for the K valley, dropping the valley index everywhere, and discuss the results for the K' valley at the end of the section. We also assume that the actual carrier energies are less than the optical-phonon energy (about 200 meV). Thus, we can also neglect optical-phonon scattering and consider only scattering by longitudinal acoustic phonons (LA), impurities (im), and other carriers (ee). Below we consider both intrinsic and doped graphene. However, we always assume that the optical excitation generates carriers away from the Fermi level, so that we deal with a fully populated initial state and an empty final state. Formally, this means that the generation rate G does not depend on the distribution function.

In the presence of strain, both the wave functions and the light-electron interaction Hamiltonian [23] are modified. The generation rate involves the dimensionless fine-structure constant, together with I₀, u and ω, the intensity, polarization and frequency of the incident light, respectively. We also introduce the electric-field amplitude transmission coefficient, assuming the graphene sheet is placed on a substrate with refractive index n. The matrix elements are taken between the wave functions of the conduction and valence bands determined by Eq. (3). The strain-induced contribution to the light-electron interaction, δH_uε, is analogous to the H_kε terms in the Hamiltonian of strained graphene [28] and is determined by the same constants. In the linear-in-strain approximation δH_uε makes no contribution to the valley currents; therefore, to avoid cumbersome expressions, we omit the corresponding terms below.

To solve Eq. (4) we use the standard approach [53], introducing the energy E and the angle φ as independent variables of the distribution function and expanding f into a Fourier series over the angular harmonics. In these variables the generation term also separates into angular harmonics. In the following we assume that elastic scattering on various kinds of defects is the most efficient relaxation channel. The elastic scattering integral is calculated assuming no strain effect on the carrier scattering probabilities, with the relaxation times τ_n of the angular harmonics determined by the elastic scattering probabilities [53]; the corresponding valley current can therefore be treated as a lower estimate. In the (E, φ) variables, the valley current is expressed through the carrier group velocity and the first angular harmonic of the distribution function. Restricting ourselves to the first-order contribution with respect to the magnitude of strain, we conclude that there are two inputs to the valley current: the anisotropy of the group velocity and the anisotropy of the photo-generation rate. For the hole part we have analogous expressions, but with a minus sign.

Recombination-induced valley currents. In analogy to the photo-generated valley currents considered above, the strain-induced anisotropy of the energy spectrum also leads to the appearance of valley currents due to the inverse, recombination processes. In general, a number of recombination processes are possible in graphene, including radiative, phonon-assisted, and Auger processes [55]. For the considered excitation energies, optical-phonon-assisted recombination is suppressed, while Auger recombination is inefficient (see [56]). Thus, we concentrate on the radiative mechanism, in which spontaneous and thermal-radiation-induced interband transitions take place. To estimate this effect we use the collision integral for thermal-radiation interband transitions given in Ref. [57]. For positive energies, corresponding to the conduction band, it has the explicit form of Eq. (14); for negative energies, corresponding to the valence band, J_R can be written in an analogous way. It contains the Planck distribution function, the temperature T in energy units, and a characteristic radiative velocity. As mentioned above, to analyze the recombination current we need to determine the isotropic component of the distribution function. Restricting ourselves to contributions to the valley current linear in the strain magnitude, we may address this problem assuming no strain is present.
Even in this case the problem is complicated and, in general, requires extensive numerical simulations. Below we consider two limiting cases that allow an approximate solution: intrinsic and heavily doped graphene.

Intrinsic graphene. At low temperatures the carrier concentration of intrinsic graphene is small, and as a result one can neglect the carrier-carrier interaction. This case was thoroughly analyzed in [57]. At low pumping the distribution function splits into the equilibrium distribution and a small non-equilibrium part Δf, determined by the interplay between the thermal-radiation generation-recombination processes and the quasielastic energy relaxation due to acoustic-phonon scattering. After some algebra we obtain the first-order-in-strain contribution to the recombination scattering integral, which provides the expression, Eq. (17), for the recombination valley current. Naturally, if carrier relaxation due to acoustic-phonon scattering is weak, Δf remains concentrated near the excitation energy and the absolute value of the recombination current is of the same order as the generation one, leading to their partial compensation. However, this is typically not the case [57]: Δf is localized in the region E ~ T. To analyze the importance of the recombination current we take into account that particle conservation under the generation-recombination process imposes an integral constraint on Δf [57], with the elastic scattering taken to be due to unscreened Coulomb impurities.

The light-induced heating ΔT should be determined from the energy balance, which equates the energy input rate due to optical excitation and the energy relaxation rate, in our case due to acoustic-phonon scattering. Using the explicit form of the collision integral J_LA [57], we arrive at an expression, Eq. (20), for the light-induced heating in terms of a characteristic acoustic-phonon scattering velocity. The contribution of recombination to the energy balance is negligibly small, and the corresponding term is omitted in Eq. (20). Note that in some actual setups the lattice and photon temperatures can differ, which changes the energy balance conditions [60]. Finally, using Eqs. (17) and (20) we obtain the ratio of the recombination- and generation-induced valley currents. One concludes that the recombination-induced valley currents in doped graphene are negligibly small compared to the generation-induced ones.

CONCLUSIONS

To conclude, we have analyzed the appearance of valley currents in strained graphene under monochromatic optical excitation. The valley current is possible due to the strain-induced electron-hole spectrum anisotropy. Under mid-infrared and softer irradiation, for realistic strain magnitudes, the considered mechanism of valley current generation is considerably more efficient than the previously proposed mechanism related to the natural warping of the graphene spectrum. It is shown that the reverse process of carrier recombination is inessential for valley current formation in both intrinsic and doped graphene, owing to efficient carrier energy relaxation. The feasible valley current magnitude is of the order of 10³ pA/µm, and potentially it can be controlled in strain-tunable structures.

ACKNOWLEDGMENTS

The author thanks V. A. Kochelap for stimulating discussions and acknowledges the hospitality of the Abdus Salam International Centre for Theoretical Physics (Trieste), where this work was completed. This work was supported by STCU Project №5716.
Substantia nigra 6-OHDA lesions mimic striatopallidal disruption of syntactic grooming chains: A neural systems analysis of sequence control

It has been suggested that the coordination of complex sequences of behavior requires normal nigrostriatal function. A previous study showed that kainic acid lesions of the corpus striatum can disrupt the performance of natural, untrained stereotyped sequences (syntactic chains), which occur regularly during normal rodent grooming. In this study, intranigral injections of 6-OHDA were used to destroy nigrostriatal dopamine projections in order to deplete the striatum of dopaminergic inputs. Striatal dopamine depletion was found to disrupt the effective completion of syntactic grooming chains. Syntactic chain disruption was correlated with the aphagia that follows dopamine depletion. These data were compared with those from previous studies of chain completion after striatal damage, after sensory deafferentation, and after decerebration at various levels of the brainstem. These comparisons indicated that the loss of either dopamine projections or intrinsic striatal neurons produced equivalent disruption of syntactic grooming chains. They also showed that loss of either component of the nigrostriatal system disrupted chaining to a degree equal to that produced by loss of the entire forebrain. These results suggest that the integrity of nigrostriatal systems is crucial to forebrain implementation of this stereotyped sequence.

The system formed by the corpus striatum (caudate-putamen and globus pallidus) and its connections with the substantia nigra, cortex, and thalamus has long been recognized as crucial to motor function. Clinical diseases of the nigrostriatal system or of its connections (e.g., Huntington's chorea, Parkinson's and Wilson's diseases, and certain tardive dyskinesias) produce profound behavioral disturbances. Yet these disorders are difficult to characterize in terms of simple motor deficits and, in comparison to other "motor structures" such as precentral cortex, the motor functions of the nigrostriatal system remain poorly understood.
It seems clear, however, that the role of this system goes beyond the coding of elementary actions. Although single units within the striatum have been found to fire in conjunction with the performance of simple movements (e.g., Aldridge, Anderson, & Murphy, 1980; Anderson & Horak, 1985; Crutcher & DeLong, 1984; DeLong, 1971), such simple movement-linked cells are relatively rare in the striatum in comparison to their frequency in the primary motor cortex (Lidsky, Manetto, & Schneider, 1985; Rolls & Williams, 1987). A number of investigators have proposed that nigrostriatal systems are more closely concerned with the organization of movement into coordinated sequential patterns than with the production of motor elements. This hypothesis has arisen largely from studies of animals with striatal damage but has also been supported by clinical observations. For example, nigrostriatal Parkinson's patients show particular deficits in certain sequential movement tasks (Benecke, Rothwell, Dick, Day, & Marsden, 1987; Stelmach, Worringham, & Strand, 1987), and cortical lesions of Broca's area that produce sequentially recurring utterances may generally be accompanied by additional striatal damage (Brunner, Kornhuber, Seemüller, Suger, & Wallesch, 1982).

(This project was supported by National Institutes of Health Grant NS 23959. I am very grateful to I. L. Venier for 6-OHDA group maintenance, and to T. E. Robinson for the dopamine histological analysis. Requests for reprints should be sent to Kent C. Berridge, Department of Psychology, University of Michigan, Neuroscience Laboratory Building, Ann Arbor, MI 48109.)

The present study focused upon the role of nigrostriatal circuits in the control of natural, species-specific sequences of action. Species-specific action patterns provide an opportunity for examining the neural control of complex coordinated action within the natural behavioral context that brain systems have evolved to control. A rich variety of coordinated sequential patterns occur in rodent grooming (Berridge, Fentress, & Parr, 1987; Fentress, 1972; Richmond & Sachs, 1978). One of the most highly structured of these sequential patterns is a "syntactic" (Lashley, 1951) chain that organizes many individual forelimb stroke and lick actions into four stereotyped phases (Figure 1; Berridge & Fentress, 1986). This stereotyped chain is specified by neural systems residing primarily in the brainstem (Berridge, 1989) and is largely independent of any need for peripheral sensory feedback (Berridge & Fentress, 1987a).

[Figure 1 caption (beginning truncated): "... (top) and syntactic chain (bottom) modes of facial grooming. Time proceeds from left to right. Diagonal deviations above and below the horizontal axis denote movements of the two forepaws along the face and away from the nose (axis). Open squares denote paw licks; filled square denotes body lick. Chain phases are (1) tight ellipses around the mouth; (2) asymmetrical or unilateral strokes; (3) bilateral, large-amplitude strokes; and (4) body licking."]

Yet a study of striatopallidal involvement in the coordination of this highly structured sequence showed that completion depended significantly upon the integrity of striatal circuits. Extensive loss of intrinsic striatopallidal neurons from either the anterior or the posterior corpus striatum reduced the percentage of chains that were completed syntactically to approximately half the normal level (Berridge & Fentress, 1987b).
This observation of disruption supports the proposition that the syntactic structure even of natural and untrained behavioral sequences depends to a large degree upon striatopallidal function (Bury & Schmidt, 1987; Cools, 1985; Schmidt, 1984). There are a number of ways by which striatal and related systems might exert control over this instinctive sequential pattern. Different investigators have suggested nigrostriatal involvement in sequencing to be either direct, via the generation and initiation of centrally programmed sequences of action (Cools, 1985; Evarts, Kimura, Wurtz, & Hikosaka, 1984; Jaspers, Schwartz, Sontag, & Cools, 1984; Marsden, 1982), or indirect, via the hierarchical modulation of sensorimotor loops and central pattern generators (Abercrombie & Jacobs, 1985; Berridge & Fentress, 1987b; Cools, 1985; Iversen, 1979; Lidsky et al., 1985; Schneider, 1984). Hypotheses of direct striatal generation of patterning would suggest that these neurons participate in the actual specification of the serial order that constitutes the pattern. Such hypotheses would account for chain disruption after striatal lesions on the grounds that the syntactic rule itself cannot be properly generated in the absence of these neurons. Hypotheses of indirect hierarchical modulation would suggest that the striatum normally functions to modulate sensorimotor loops and to switch control between competing centrally generated signals and external sensory cues. According to this view, chain disruption would occur after striatal damage because distracting sensory events would fail to be suppressed during chain execution, and so could successfully compete with chaining and cause grooming to revert to a sequentially flexible or stimulus-guided mode (Berridge & Fentress, 1987b). Whether striatopallidal circuits participate directly in the actual generation of chain sequential structure or instead act hierarchically to implement patterns generated entirely elsewhere, it is clear that no single brain structure alone can be assigned responsibility for the coordination of high-level behavioral patterns. Rather, a systems view that incorporates afferent and efferent connections is required to understand how separate structures interact to form coordinating complexes. Ascending projections from dopamine neurons within the substantia nigra and ventral tegmentum form a crucial component of striatal systems. These dopamine projections are likely to participate in any striatopallidal control of sequential coordination, whether it be through pattern generation or sensorimotor modulation. Regarding pattern generation, Divac (1975, cited in Divac, Öberg, & Rosenkilde, 1987) has argued for a role of substantia nigra dopamine projections by distinguishing between pattern-coding neuronal functions and permissive or rate-setting ones. Divac and his colleagues suggest that coordination of the patterned activity of striatal neurons arises through intrinsic circuits and cortical connections, but that this requires the presence of dopaminergic afferents "setting the stage, as it were, for the patterned activity of the neostriatum" (Divac et al., 1987, p. 62). Arriving at a similar conclusion from a different direction, Marsden (1980) has suggested that "the lesion that disrupts normal basal ganglia function most effectively is inactivation of the dopaminergic input from the midbrain" (p. 285) and that motor programming is impaired in the absence of this input.
In this study, the necessity of substantia nigra dopamine projections to the striatum for the performance of syntactic grooming chains was examined. Rather than damaging the striatum directly, injections of 6-hydroxydopamine (6-OHDA) were made bilaterally into the substantia nigra in order to remove dopamine projections, and the resulting effects on chaining were assessed. To better evaluate the results within the context of a larger systems analysis, these effects were also compared explicitly to those that follow other neural manipulations relevant to the execution of natural grooming chains: striatal lesions, sensory deafferentation, and decerebration at various brainstem levels.

Bilateral skull holes were drilled 5.0 mm posterior to bregma and 2.0 mm lateral to midline, with bregma and lambda in the same horizontal plane. A 30-ga cannula was placed 7.3 mm below dura to reach the rostral zona compacta of the substantia nigra. The 6-OHDA HBr (2.0 μg/μl) was dissolved in cold 0.9% saline-ascorbate solution (0.1 mg/ml). After 1 min, 4 μl of the 6-OHDA solution was infused gradually over 10 min, and the cannula was left in place an additional 2 min after the infusion ended. Eight additional rats were used as controls. Control rats received intranigral infusions of the saline-ascorbate vehicle solution, without 6-OHDA, using the same procedure. Half of both groups of rats also received injections of desipramine (20 mg/kg, i.p.) 30 min before surgery, and each rat was also implanted with chronic oral cannulae for reasons described elsewhere (Berridge, Venier, & Robinson, 1989).

Maintenance. Rats were housed on a 14:10 h light:dark cycle. Postsurgical aphagia was carefully monitored. To prevent dehydration, each rat was intubated with 10 ml water twice on the day after surgery. The rats had free access throughout to chow pellets and water and to a palatable cereal mash. Rats that ate neither food pellets nor cereal mash were classified as aphagic. Rats that ate either any pellets or mash were classified as nonaphagic for the purposes of this study. Any rat that lost body weight was intubated each day with 10 ml of a liquid diet (sweetened condensed milk, water, vitamins) for every 5 g of weight lost, up to a maximum of three intubations per day.

Behavioral Testing. Grooming sequences were elicited from rats by spraying them with a light water mist, beginning after 7 days of recovery from surgery. Each rat was placed in a transparent test chamber, under which a mirror was positioned to reflect a view of the head and face of the rat into the zoom lens of a video camera. The rats were videotaped daily in 30-min sessions until at least a total of 10 min of continuous grooming had been taped for each rat.

Chain-Completion Analysis. Videotaped grooming sequences were analyzed in slow motion by observers blind to the experimental condition of the rats. Syntactic chains were identified first at 1/30 normal speed and then transcribed frame by frame using the graphic grooming notation system as in Berridge and Fentress (1987a, 1987b). Syntactic chains were defined by the serial occurrence of four phases: (1) a bout of five to nine elliptical strokes of small amplitude centered over the nose (6-7 Hz); (2) a single unilateral (or asymmetric bilateral) stroke or short series of strokes ascending to the dorsal border of the mystacial vibrissae; (3) a repeated series of large-amplitude strokes, typically bilateral and symmetrical with respect to forepaw trajectories; and (4) tucking of the head and shifting of the body to begin a bout of ventrolateral body licking (Figure 1). The criterion for chain completion in this study was the serial occurrence of all four phases in correct order within 5 sec of the initiating Phase 1 bout of rapid ellipses.
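This completion criterion is effectively a small sequence-matching rule, and it can be made concrete in code. A minimal sketch in Python, assuming a hypothetical event encoding of (timestamp, phase) pairs; the study itself scored phases from frame-by-frame video transcription, not from such a data structure:

```python
from typing import List, Tuple

def chain_completed(events: List[Tuple[float, int]], window_s: float = 5.0) -> bool:
    """Return True if Phases 1-4 occur serially, in order, within
    `window_s` seconds of the initiating Phase 1 bout.

    `events` is a chronological list of (timestamp_seconds, phase) pairs,
    e.g. [(0.0, 1), (1.2, 2), (2.0, 3), (3.5, 4)]. This encoding is
    hypothetical, introduced only for illustration.
    """
    starts = [t for t, p in events if p == 1]
    if not starts:
        return False
    t0 = starts[0]                      # onset of the initiating Phase 1 bout
    expected = 2                        # next phase we need to see
    for t, phase in events:
        if t < t0 or t - t0 > window_s:
            continue                    # outside the 5-s completion window
        if phase == expected:
            expected += 1
            if expected > 4:
                return True             # all four phases seen in order
    return False

# Example: a chain disrupted during Phase 3 (Phase 4 arrives too late) -> False
print(chain_completed([(0.0, 1), (1.1, 2), (2.3, 3), (7.0, 4)]))
```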
Histology. Eight aphagic rats with 6-OHDA lesions and 4 rats from the vehicle-control group were used for striatal dopamine-depletion analysis. The rats were decapitated, and their brains were removed and cooled in iced saline within 40 sec.

Fur wetting elicited prolonged bouts of grooming in all rats. Depletion of brain dopamine levels by 6-OHDA did not diminish the amount of time that rats spent in grooming. In fact, rats that had been treated with 6-OHDA spent significantly more time grooming than did control rats [control mean ± SEM = 24.5 ± 4.2 sec of grooming per minute of observation vs. 6-OHDA = 38.4 ± 3.5 sec; F(1,34) = 4.55, p < .05]. Similarly, the rate at which the rats initiated syntactic chains (defined by the occurrence of Phase 1 ellipses followed by Phase 2 strokes) was not diminished by dopamine depletion. Control and 6-OHDA rats did not differ in the absolute rates at which they began syntactic chains [control = 0.45 ± 0.08 chains initiated per minute of observation vs. 6-OHDA = 0.51 ± 0.06; F(1,34) = 0.37]. Even in terms of relative rates of chain initiation (relative to time spent in actual grooming), the control and 6-OHDA groups differed only marginally [control = 1.25 ± 0.16 chains per minute of actual grooming vs. 6-OHDA = 0.86 ± 0.21; F(1,34) = 3.00, p = .09]. The percentage of initiated chains that were successfully completed was reduced, however, in 6-OHDA rats relative to controls [control = 87.5 ± 3.2; 6-OHDA = 69.8 ± 5.1; F(1,34) = 4.79, p < .05]. Disrupted chains followed the same pattern as that seen in disruptions caused by striatopallidal lesions (Berridge & Fentress, 1987b): the rats initiated and completed Phase 1 and often continued to Phases 2 and 3. Disruption of the pattern typically occurred during the series of large-amplitude, bilateral strokes that constitute Phase 3 (Figure 2). It is noteworthy that these interruptions did not usually halt the grooming bout but generally led immediately to a continuation of sequentially flexible facial grooming, just as in striatal-lesion disruptions (Berridge & Fentress, 1987b). Yet the rats did not lose the capacity to engage in Phase 4: the 6-OHDA rats did not show an overall reduced incidence of paw or body licking. These observations support the conclusion that both lesions can disrupt the sequential organization of grooming without removing the capacity to produce grooming actions. It appeared that the degree of chain disruption was not distributed randomly among the 6-OHDA rats, but rather seemed associated with the severity of aphagia that was induced by dopamine depletion. A correlation analysis between the number of days of aphagia and the chain-completion rate for each rat showed that this was true.
The length of aphagia induced by 6-OHDA was correlated positively with the percentage of disrupted syntactic chains [Spearman's rho = 0.40, t(25) = 2.2, p < .05]. Inspection of the data suggested further that a "critical threshold" for aphagia existed at roughly 6 days, which could be used to predict whether a rat's completion of syntactic chains would be impaired. To test this, the 6-OHDA rats were divided into aphagic (days of aphagia ≥ 6; n = 19) or temporary (days of aphagia ≤ 5; n = 8) categories, and chain completion was reanalyzed. Chain-completion rates differed by category [F(2,32) = 6.35, p < .01] in a way that supported the notion that chaining was linked to a critical severity of aphagia: 6-OHDA rats that were aphagic for 6 or more days completed significantly fewer chains (61% ± 5%) than did either control (88% ± 3%; Newman-Keuls, p < .01) or temporary 6-OHDA rats (90% ± 4%; p < .01), whereas the control and temporary 6-OHDA categories did not differ from one another (Figure 3). Finally, separate Spearman's correlations were calculated between chain completion and aphagia, and between chain completion and striatal dopamine levels, for only those rats in the aphagic group. These analyses did not show a significant relationship between these variables once the 6-day criterion had been exceeded (rho < 0.25 for each).
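For readers who want to reproduce this style of analysis, a minimal sketch of the rank correlation and the 6-day threshold split using SciPy; the arrays below are placeholders, not the study's data:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-rat values (days of aphagia, % of chains completed)
days_aphagic = np.array([0, 2, 4, 5, 6, 7, 9, 12, 15])
pct_completed = np.array([90, 88, 85, 89, 70, 62, 58, 55, 50])

# The paper correlates days of aphagia with % DISRUPTED chains,
# i.e., 100 - % completed.
rho, p = spearmanr(days_aphagic, 100 - pct_completed)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

# Split at the 6-day criterion used in the study.
aphagic = pct_completed[days_aphagic >= 6]
temporary = pct_completed[days_aphagic <= 5]
print(f"aphagic mean = {aphagic.mean():.1f}%, temporary mean = {temporary.mean():.1f}%")
```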
DISCUSSION

A minimum degree of integrity among nigrostriatal projections is required for the syntactic sequencing of grooming chain actions. The disruption of syntactic chaining by 6-OHDA is not accompanied by a general reduction in the overall amount of grooming. This preservation of overall grooming levels is consistent with previous reports that grooming need not be reduced (and may even be enhanced) by brain dopamine manipulation (e.g., Dunn, Alpert, & Iversen, 1984; Whishaw & Dunnett, 1985) or by pharmacological manipulation of related neurotransmitter systems (e.g., Dunn, Berridge, Lai, & Yachabach, 1987). Rats also retain full capacity to initiate syntactic chains at rates comparable to normal after brain dopamine depletion. However, there is a marked impairment in the rate of syntactic chain completion after 6-OHDA administration. This impairment of syntactic efficacy is linked in a stepwise fashion to the well-known impairment of feeding that follows dopamine depletion (e.g., Ungerstedt, 1970, 1971). Only 6-OHDA injections that produce a substantial degree of aphagia, in this case aphagia of at least 6 days, appear sufficient to disrupt sequential chain completion. This threshold of aphagia in relation to sequence disruption appears similar to the threshold of dopamine depletion that has been described for aphagia itself after 6-OHDA administration (e.g., Stricker & Zigmond, 1976). These results support the idea that nigrostriatal circuits contribute to behavioral sequencing, and reinforce the point that their contribution is made as a coordinated system of neural elements working together. For this natural action chain, damage either to the intrinsic striatal neurons that constitute the targets of ascending dopamine fibers (as in Berridge & Fentress, 1987b) or to the substantia nigra source of these fibers (as in the present study) reduces the reliability of syntactic chain execution.

The substantia nigra focus of the 6-OHDA injections in this study would have limited the population of affected dopamine neurons to those of midbrain origin, but would not have been specific to a neuronal subpopulation targeted to any particular striatal region. Different striatal regions may show considerable functional (e.g., Fairley & Marshall, 1986; Iversen, 1984; Pisa, 1988; Pisa & Schranz, 1988) and anatomical heterogeneity (e.g., Malach & Graybiel, 1987; Nauta & Domesick, 1984). It is conceivable that a study using focused lesions, which would be more specific than the massive striatal dopamine depletion used here, would reveal the existence of a crucial nigrostriatal subcircuit. In addition, it is not yet possible to tell whether the close association between the motivational deficit of aphagia and the motor or sensorimotor deficit of sequencing occurred because the two deficits are mediated by separate subcircuits, each of which is similarly depleted by a 6-OHDA injection, or because a single neural dopamine system is actually responsible for both behavioral functions. It can be noted from previous results that chain disruption was not linked closely to aphagia following intrinsic striatal damage by kainic acid, although certain other sensorimotor changes were (chain initiation, choreic paw treading) (see Berridge & Fentress, 1987b; Berridge et al., 1988).

Comparison of Neural Manipulations of Syntactic Chaining

A growing body of data exists regarding the effects of neural manipulations on the execution of natural syntactic chain sequences in grooming. These manipulations range from the removal of facial somatosensory feedback by peripheral deafferentation, to central neurotoxin lesions of the forebrain and midbrain, to the complete isolation of the brainstem by transection at levels ranging from the caudal hypothalamus down to the rostral medulla oblongata. Each of these studies has been conducted using similar testing and analysis procedures, and legitimate comparisons can be made of the various outcomes. Such comparisons may allow us to build a more complete "neural systems" understanding of how natural sequential patterns are generated and controlled by the brain. For this purpose, and to place the present observations on 6-OHDA effects in better perspective, data from this and earlier studies were explicitly compared in a statistical analysis. The consequences of eliminating somatosensory feedback from the face during chaining (n = 8) were taken from the deafferentation study of Berridge and Fentress (1987a), which examined the effects of bilateral transection of the sensory maxillary (face and nose) and mandibular (jaw and tongue) branches of the trigeminal nerve. The effects on chaining from loss of intrinsic striatopallidal neurons, induced by intrastriatal injections of 1 μg kainic acid in either the anterior or the posterior corpus striatum, were taken from Berridge and Fentress (1987b). Anterior and posterior striatopallidal lesions resulted in equivalent disruption of chain completion in that study, so data from those lesions were combined into a single striatal-lesion group (n = 19) for this analysis. The effects of decerebration at various brainstem levels on chaining were taken from Berridge (1989).
Complete isolation of the brainstem in that study by spatula transection at a level that removed the midbrain but left the pons, cerebellum, and medulla intact (i.e., a metencephalic decerebrate: suprapontine transection) produced a degree of chain disruption that was no greater than that sustained by a mesencephalic decerebrate, which did possess a midbrain and had lost merely a forebrain (supracollicular transection). Data from rats with those transections were therefore combined into a single high decerebrate category (n = 18) for this analysis. Transections at the pontine-medullary junction, which isolated the medulla oblongata (myelencephalon) from all rostral structures, produced a very different effect on syntactic chain completion in that study, however, so rats with myelencephalic transections were considered separately here as a low decerebrate category (n = 6). The effects of substantia nigra 6-OHDA lesions were contributed by the aphagic (≥ 6 days) 6-OHDA group from the present study (n = 19). Control data from these studies were combined to form a single control group (n = 29). These six groups were compared for syntactic completion of grooming chains by single-factor ANOVA and Newman-Keuls tests (Figure 4). As expected, chain-completion rates differed markedly among these six groups [F(5,93) = 26.42, p < .0001]. Post hoc tests showed that the six different conditions assembled into the three distinct clusters shown in Figure 4. Control rats completed chains successfully at a rate of approximately 85%, and this rate was not impaired by peripheral trigeminal deafferentation (Berridge & Fentress, 1987a). Both control and trigeminal levels were significantly higher than striatal kainic acid-lesion (Newman-Keuls, p < .01 each), substantia nigra 6-OHDA-lesion (p < .05 each), or high decerebrate (p < .01 each) groups. These latter three groups did not differ significantly from one another and thus formed a second cluster. This is confirmed by the fact that the five original constituent groups that made up the 6-OHDA, striatal (posterior and anterior), and high decerebrate (mesencephalic and metencephalic) categories did not vary when tested by a separate ANOVA [F(4,51) = 1.31]. Finally, low decerebrates, which rarely or never completed chains containing all four phases, differed from every other group (p < .01 each), and so formed a third cluster by themselves.
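A minimal sketch of this kind of six-group comparison with SciPy follows; the per-rat values are hypothetical placeholders, and Tukey's HSD (available in recent SciPy) stands in for the Newman-Keuls procedure used in the paper:

```python
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

rng = np.random.default_rng(0)
# Hypothetical per-rat chain-completion percentages for the six groups,
# with the group sizes quoted in the text.
groups = {
    "control":          rng.normal(85, 8, 29),
    "trigeminal":       rng.normal(84, 8, 8),
    "striatal":         rng.normal(55, 10, 19),
    "6-OHDA":           rng.normal(61, 10, 19),
    "high decerebrate": rng.normal(52, 12, 18),
    "low decerebrate":  rng.normal(5, 5, 6),
}

F, p = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.2g}")

# Pairwise post hoc comparisons; non-significant pairs delineate clusters.
print(tukey_hsd(*groups.values()))
```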
GENERAL DISCUSSION

It is not surprising that loss of either intrinsic striatal neurons or afferent dopaminergic projections can disrupt the completion of syntactic grooming chains. This chain is a complex action sequence, although stereotyped, and a considerable body of evidence exists to indicate a dependence of complex behavioral sequencing upon the integrity of nigrostriatal systems. It is somewhat surprising, however, that a rat that possesses a complete forebrain, excepting only the corpus striatum or the ascending substantia nigra projection to that target, should be no better able to complete syntactic chains than a rat that lacks an entire forebrain (mesencephalic decerebrate) and even midbrain (metencephalic decerebrate) in addition to nigrostriatal structures. A pontine (metencephalic) decerebrate is capable of completing chains with a success rate of approximately 40%. Addition of a forebrain containing an intact nigrostriatal system (i.e., the control condition) can roughly double the effectiveness of syntactic chain completion. Addition of a forebrain minus this system (i.e., the striatal and 6-OHDA groups), however, does not significantly improve sequential performance.

The apparent fragility of forebrain function in promoting chain execution is open to a number of interpretations. The equivalence of removal of either the nigrostriatal system or the entire forebrain could reflect a preeminent role for striatal-related systems in forebrain contributions to behavioral sequencing. Conversely, it remains possible that many unrelated types of forebrain damage will prove capable of disrupting syntactic chaining. Whether nigrostriatal systems do play a special role in this regard should become clearer when more is known about the localization of the chaining function within subregions of the corpus striatum (e.g., Fairley & Marshall, 1986; Pisa, 1988; Pisa & Schranz, 1988), the role of cortical projections to the striatum, and the contributions of other forebrain structures and connections to syntactic sequences (e.g., Kolb, Whishaw, & Schallert, 1977).

A related issue is the nature of the function contributed by nigrostriatal systems. The basic sequential structure of the syntactic chain can be specified by an isolated brainstem, as seen in midbrain and pontine decerebrates, and this generative sufficiency of the hindbrain might indicate that the forebrain's contribution is not one of syntactic rule generation but rather of rule implementation. One possible mechanism of implementation is sensorimotor modulation. It has been suggested that chain execution involves a phasic modulation of sensorimotor function, which is reflected by a reduction of reliance upon somatosensory guidance to shape grooming actions that occur within the chain (Berridge & Fentress, 1987b). This reduced reliance upon sensory cues is demonstrated by the immunity of chain components from the distortions of grooming-action form induced by deafferentation (Berridge & Fentress, 1986). It is thus reasonable to posit that chain implementation involves a hierarchical switching among sensory-guided and centrally patterned control signals. The striatum is an excellent candidate for performing such sensorimotor switching functions (see Lidsky et al., 1985; Schneider & Lidsky, 1987). An alternative to the "implementation by modulation" hypothesis is the possibility that nigrostriatal/forebrain systems actually participate in the sequential generation of syntactic patterns rather than acting to modulate other generator systems. The demonstration that brainstem circuits are capable of generating syntactic sequences does not by itself rule out an auxiliary role for the corpus striatum in pattern generation, although it does show that the striatum is neither the sole generator of such patterns nor even an essential element of a closed pattern-generating circuit. The sequential pattern of notated chains produced by midbrain, pontine, and medullary decerebrates has been observed to be consistent with the possibility that brainstem pattern-generating circuits may be organized as a degenerate or parallel system distributed along the rostro-caudal axis of the hindbrain (Berridge, 1989). A distributed, parallel system of sequential pattern generation might in principle extend rostrally to include striatal-based circuits. If this were true, then nigrostriatal systems could act as partially redundant pattern generators, unnecessary for basic pattern specification but still important to facilitate execution and to bring completion rates to normal levels.
As yet, there is less evidence to support a "degenerate patterning" role for nigrostriatal systems than exists for a "sensorimotor modulating" function (see Schneider & Lidsky, 1987) in syntactic chaining, but both can be retained as legitimate hypotheses to guide future investigations.

Conclusion

Ascending dopamine projections from the substantia nigra are essential to the role of striatopallidal systems in organizing stereotyped chain sequences of grooming. Loss of a substantial portion of this system reduces the effective completion of chains to an extent equivalent to that produced by large lesions of the corpus striatum itself. Dopamine projections appear to contribute to this sequencing function in a stepwise manner: depletion by 6-OHDA beyond a critical level, linked to the severity of induced aphagia (approximately 6 days), disrupts chain completion. When 6-OHDA lesions do not produce a degree of aphagia that exceeds this critical threshold, however, chaining is not impaired. Finally, comparisons with other studies reveal that the loss of either component of the nigrostriatal system disrupts chain completion as effectively as does loss of the entire forebrain. The nigrostriatal system thus appears to play a crucial role in forebrain coordination of this stereotyped natural sequence.
Design and Processing as Ultrathin Films of a Sublimable Iron(II) Spin Crossover Material Exhibiting Efficient and Fast Light-Induced Spin Transition

Materials based on spin crossover (SCO) molecules have been a focus of attention in molecular magnetism for more than 40 years, as they provide unique examples of multifunctional and stimuli-responsive materials, which can then be integrated into electronic devices to exploit their molecular bistability. This process often requires the preparation of thermally stable SCO molecules that can sublime and remain intact in contact with surfaces. However, the number of robust sublimable SCO molecules is still very scarce. Here, we report a novel example of this kind. It is based on a neutral iron(II) coordination complex formulated as [Fe(neoim)2], where neoimH is the ionogenic ligand 2-(1H-imidazol-2-yl)-9-methyl-1,10-phenanthroline. In the first part, a comprehensive study covering the synthesis and magnetostructural characterization of the [Fe(neoim)2] complex as a bulk microcrystalline material is reported. Then, in the second part, we investigate the suitability of this material to form thin films through high-vacuum sublimation. Finally, the retainment of all SCO capabilities present in the bulk when the material is processed is thoroughly studied by means of X-ray absorption spectroscopy. In particular, a very efficient and fast light-induced spin transition (LIESST effect) has been observed, even for ultrathin films of 15 nm.

INTRODUCTION

In hexacoordinated Fe(II) spin crossover (SCO) complexes, the energy gap between the so-called low-spin (LS, t2g^6 eg^0) and high-spin (HS, t2g^4 eg^2) states is of the order of magnitude of the thermal energy. Studying the thermal stability of these compounds, as well as the retainment of their switchable SCO properties once deposited as ultrathin films by sublimation or even as isolated molecules on different types of surfaces, has become a growing research activity in the last 10 years, 21,[23][24][25][32][33][34][35][36][37][38][39][40][41][42] as recently reviewed. 43,44 Here, we report the synthesis, processing as thin films, and characterization of a novel neutral iron(II) coordination complex formulated as [Fe(neoim)2], where neoimH is the ionogenic ligand 2-(1H-imidazol-2-yl)-9-methyl-1,10-phenanthroline (see Scheme II). In bulk, this molecular complex exhibits both thermal and light-induced spin transitions.
These properties are retained when this complex is deposited as thin films by sublimation, as shown by means of X-ray absorption spectroscopy, which also evidences the outstandingly fast and effective sensitivity of these films towards both visible- and X-ray-light-induced LS to HS conversions.

Regarding the crystal packing of the complexes (Figure 2), a perfect superposition between them is found along the a-direction, defining columns. However, along the b and c directions remarkably short intermolecular contacts are found. In particular, atoms C3 and C4 of the phenanthroline moiety of a molecule act as a wedge, filling the dihedral space generated between the imidazole and phenanthroline moieties of the adjacent complex and defining contacts well below the sum of the corresponding van der Waals radii.

The magnetic behavior was followed through the temperature dependence of the χMT product, where χM is the magnetic molar susceptibility and T is the temperature. For the solvated form, the χMT value is constant and around 0.1 cm³ K mol⁻¹ in the whole temperature range, indicating that this material is essentially in the diamagnetic LS state. The loss of the solvent molecules provokes a dramatic change in the magnetic properties, which exhibit a spin transition. Thus, at 390 K, the χMT product shows a value of ca. 3.49 cm³ K mol⁻¹, consistent with a HS state. Upon cooling, χMT decreases very gradually down to ca. 1.28 cm³ K mol⁻¹ at ca. 148 K. Upon further cooling, in the interval 148-132 K the slope of the χMT vs T plot increases significantly, delineating an inclined short step followed by a steeper χMT vs T dependence until reaching a value of ca. 0.30 cm³ K mol⁻¹ at 80 K that stays constant down to the lowest temperature investigated and indicates that most of the Fe(II) centers are in a LS state. This second step exhibits a ca. 8 K wide hysteresis loop upon heating.

Photo-generation of the metastable HS* state from the LS state, the so-called light-induced excited spin state trapping effect (LIESST), 45 was carried out at 10 K by irradiating the desolvated powder with green light (λ = 532 nm, 11.2 mW). Thus, χMT increases, saturating at a value of ca. 2.0 cm³ K mol⁻¹ in 15 minutes, which, considering the "high" temperature of the experiment and the penetration depth of the laser through the powder sample, is noticeably fast and efficient. After switching off the light and heating the system at a rate of 0.3 K min⁻¹, a gradual increase of χMT is induced that attains a maximum value of 2.32 cm³ K mol⁻¹ in the interval of 22-38 K. This small additional rise in χMT reflects the thermal population of different microstates originating from the zero-field splitting of the HS* state and corresponds to a population of the HS state of ca. 66.5%. Above 38 K, χMT decreases until joining the thermal SCO curve at ca. 80 K, indicating that the metastable HS* state has thermally relaxed back to the stable LS state. The corresponding TLIESST temperature, evaluated as δ(χMT)/δT, is ca. 65 K.[48][49]
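The TLIESST determination just mentioned amounts to locating the extremum of the derivative of the χMT(T) curve recorded on heating after photoexcitation. A minimal sketch, assuming a synthetic relaxation curve in place of the measured data:

```python
import numpy as np

T = np.linspace(10, 100, 181)                  # temperature in K
# Synthetic relaxation curve: a plateau near 2.3 cm^3 K mol^-1 that
# drops towards the LS baseline (0.3) around 65 K. Not measured data.
chiMT = 0.3 + 2.0 / (1.0 + np.exp((T - 65.0) / 3.0))

# T(LIESST) is taken at the extremum (most negative slope) of d(chiMT)/dT.
dchiMT_dT = np.gradient(chiMT, T)
T_liesst = T[np.argmin(dchiMT_dT)]
print(f"T(LIESST) ~ {T_liesst:.1f} K")          # ~65 K for this curve
```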
Highly homogeneous thin films of [Fe(neoim)2]

[Fe(neoim)2] was sublimed under UHV conditions following a protocol similar to that used by us for other SCO molecules. 25,38 In this respect, ca. 100 mg of the desolvated bulk powder are loaded inside the Knudsen cell fitting our molecular beam epitaxy evaporation chamber (customized CREATEC system within a clean room class 10000) and preconditioned by means of smooth heating up to 120 °C for degassing. The optimal sublimation conditions consist of heating the material at 245 °C at a vacuum pressure of ca. 5×10⁻⁸ mbar to reach a constant deposition rate of ca. 0.4 Å s⁻¹, as monitored with a calibrated quartz crystal microbalance (QCM) located next to the deposition substrate. Different films were grown in the range 15 to 150 nm thick, verified with nanometric resolution via profilometry. The appearance and topography of these films were characterized using optical microscopy (OM) and atomic force microscopy (AFM). The results from both microscopies show extremely homogeneous coverage of the substrates (Figure 4a,b), featured by the lack of distinguishable crystal grains and by very low roughness (ca. 0.5 nm, calculated as the root mean square (RMS) value using the Gwyddion program). No diffraction peaks can be observed by means of the surface X-ray diffraction technique, indicating a highly amorphous character of the molecular films.

Regarding chemical integrity, the composition of the thin films was determined via IR and Raman spectroscopies and compared to that of the bulk (Figure 4c,d). Two 100 nm thick films were separately grown using the same sublimation procedure; one was deposited on a Au-covered glass substrate destined for IR spectroscopy, and a second one was deposited on a SiO2 substrate for Raman spectroscopy. The collected IR spectrum of this film shows a good match with that obtained for the bulk powder between 2000 and 600 cm⁻¹ (Figure 4c). The only noticeable differentiating feature comes from the presence of a very small fraction of remnant water molecules within the bulk powder, evidenced by a noisy band at ca. 1600 cm⁻¹. Regarding the Raman spectra, the resulting data show a perfect match between film and bulk powder in the 1000-1700 cm⁻¹ region. Interestingly, no laser damage was observed on either sample during this characterization, despite using a highly energetic blue laser (473 nm, power ca. 1.08 mW µm⁻²). This result further evidences the good stability of this molecular SCO material. Overall, both techniques prove the retainment of the chemical integrity of the molecular complex when sublimed as a thin film.

Spin-crossover properties of the [Fe(neoim)2] thin films

The SCO behavior of different films was studied by means of the X-ray absorption spectroscopy (XAS) technique focused at the Fe L2,3 edges. Thus, XAS spectra were collected for each film at different temperatures to determine the electronic configuration (spin state) of their constituent molecules. At the same time, the bulk material was characterized in order to obtain reliable reference spectra and to provide a direct correlation tool to estimate the HS fraction present in each sample at each temperature. Last but not least, the LIESST effect was additionally studied for the bulk material and the thinnest film prepared (15 nm thick) by irradiating each of them at 2 K for 10 min with a red laser (633 nm, 12 mW) and subsequently collecting their respective XAS spectra. The results are presented in Figure 5. To characterize the bulk material, spectra were collected at different temperatures by performing first a cooling from room temperature (300 K) to 2 K and then a subsequent heating up to 370 K.
As can be observed (Figure 5a,b, red line), the initial state detected at 300 K can be assigned to a mixed HS-LS state in which the HS one is predominant. This is characterized by the presence of the two most characteristic peaks of the HS state at the Fe L3 edge (ca. 708.0 and 708.9 eV, respectively), as well as the most intense peak of the LS state (ca. 709.7 eV). Upon cooling down to 80 K, a clear progressive change is observed, consisting of a decrease in intensity of the aforementioned HS-state peaks at the Fe L3 edge, accompanied by an increase in intensity of the LS one (Figure 5a, black line). These observations, along with the typical increase in intensity of the main peak of the Fe L2 edge, appearing around 721 eV (Figure 5b, black line), are consistent with the thermal HS to LS conversion of the material characterized by magnetic susceptibility measurements. However, a deviation from this trend is observed upon further cooling. In particular, the spectra collected down to 20 K show a progressive reversion of the spin state of the material towards a higher HS fraction (Figure 5a,b, blue line). This behavior is typical of a soft X-ray induced excited spin state trapping (SOXIESST) effect that photoexcites the material from LS to a metastable HS* state during the spectra collection. 50 In fact, despite the precautions taken regarding the use of a very low photon flux, as in previous experiments on other sublimable SCO molecules, 25,38 the effect on the bulk [Fe(neoim)2] molecule seems outstandingly effective. Thus, we next decided to cool further down to 2 K and irradiate with a red laser to investigate the LIESST effect with this technique as well. Interestingly, an almost full HS-state spectrum is achieved after only 10 min of irradiation (Figure 5a,b, orange line). Noticeably, apart from confirming the LIESST effect observed in magnetic susceptibility measurements, this result further evidences the high susceptibility of this material towards light-induced SCO phenomena. Subsequently, the reversibility of these observed SCO-related behaviors was studied upon heating up to 370 K. Two different trends can be distinguished: first, a thermal relaxation of both SOXIESST and LIESST effects is observed, with the full LS state recovered at ca. 100 K (Figure 5a,b, grey line); second, upon further heating, the expected progressive conversion from LS to an almost full HS state at 370 K occurs (Figure 5a,b, green line). In order to facilitate the visualization of all these changes, we estimated the temperature dependence of the HS fraction by fitting each collected spectrum to a linear combination of the "full" HS and LS state spectra (collected at 370 and 80 K, respectively) (Figure 5c). In this plot, the different phenomenologies mentioned above are clearly monitored and quantified. For instance, at room temperature the bulk material presents a HS fraction of ca. 80%, which fully converts to the LS state at ca. 80 K; below this temperature the SOXIESST effect induces an overall HS fraction increase of ca. 20% down to 20 K; finally, a complete LIESST effect is reached after 10 min of red laser irradiation at 2 K.
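The HS-fraction estimate described above is a one-parameter least-squares problem. A minimal sketch, with toy Gaussian arrays standing in for the normalized reference spectra (HS reference at 370 K, LS reference at 80 K):

```python
import numpy as np

def hs_fraction(spectrum, hs_ref, ls_ref):
    # Least-squares solution of: spectrum ~ ls_ref + f * (hs_ref - ls_ref)
    d = hs_ref - ls_ref
    f = np.dot(d, spectrum - ls_ref) / np.dot(d, d)
    return float(np.clip(f, 0.0, 1.0))

# Toy spectra on a common energy grid; real XAS references would be the
# measured, background-subtracted, normalized 370 K and 80 K scans.
E = np.linspace(705, 725, 400)
hs_ref = np.exp(-((E - 708.5) / 0.8) ** 2)
ls_ref = np.exp(-((E - 709.7) / 0.8) ** 2)
mixed = 0.8 * hs_ref + 0.2 * ls_ref        # e.g. the 300 K bulk spectrum
print(f"HS fraction ~ {hs_fraction(mixed, hs_ref, ls_ref):.2f}")   # ~0.80
```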
The XAS characterization of the sublimed thin films with three different thicknesses (100, 150 and 15 nm) is plotted in Figure 5d-i. The 100 nm film was submitted to a cooling between 300 K and 20 K followed by a heating to 240 K (Figure 5d-f). The results obtained closely resemble those attained for the bulk material, exhibiting a large HS fraction (ca. 95%) at room temperature, which decreases down to ca. 37% at ca. 80 K; at lower T a clear SOXIESST effect is observed, reaching an outstandingly high HS fraction of ca. 68% at 20 K (Figure 5f), despite using a very low photon flux as for the bulk powder (0.026 nA); and the thermal relaxation of the SOXIESST effect is completed at 90 K upon heating. Since the SOXIESST effect was very strong in this first sample, and suspecting that the thermal SCO transition of the film might be more complete in the absence of this phenomenon, a second sample of similar thickness (150 nm) was studied, but using a slightly lower photon flux (ca. 0.020 nA compared to 0.026 nA). Still, the SOXIESST effect also emerged below 80 K (Figure 5g-i), and the overall behavior showed the same trend as for the first film but with noisier spectra. Consequently, this sample was not studied in further detail. A last experiment was performed on a very thin film (15 nm) with the aim of investigating the effect of such miniaturization on the SCO behavior. Interestingly, the main SCO features are retained in this thin film (Figure 5j-l), including a SOXIESST effect below 80 K and an outstandingly effective LIESST effect. Due to the also very effective SOXIESST effect, this film shows a less complete thermal SCO transition than the thicker ones, although the presence of a larger pinned fraction of molecules in the HS state in the 15 nm film, given its larger surface-to-volume ratio, could also contribute to this observation. 51,52 As far as the LIESST effect is concerned, a full HS state is reached within the 10 min irradiation period. This remarkable result on this 15 nm film, together with the amorphous and highly homogeneous nature of the films of [Fe(neoim)2], evidences the high suitability of the material for exploiting its light-induced switchability in spintronic devices. In fact, one of the open goals in this area is that of fabricating multifunctional spin valve devices in which the transport properties can be tuned not only by applying a magnetic field but also by light irradiation. The reason why this goal has not been achieved so far is twofold. On the one hand, it is due to the difficulty of preparing homogeneous ultrathin films from SCO molecules fulfilling the requirements of a spintronic vertical device: spin transport through the film while, upon thinning, avoiding the presence of shorts and retaining the SCO features of the molecules.[54][55] On the other hand, the further limitation that has been preventing the use of LIESST in spintronic devices is that only one example has been reported of a sublimable SCO molecule exhibiting an efficient and effective LIESST electrical response within an integrated electronic device. 38 Accordingly, it accomplishes a full photoexcitation in minutes, yet, unfortunately, the grown films of that material present high roughness and are composed of crystallites of ca. 50 nm, thus preventing its integration as ultrathin films in vertical devices.
CONCLUSIONS

Herein, we have reported a novel example of a sublimable SCO material exhibiting extraordinary light-induced spin transition properties, which are preserved when going from the bulk material to the thin films. This robustness has been accomplished starting from the chemical design of a novel ionogenic tridentate ligand (neoim) that affords the new neutral complex [Fe(neoim)2]. The desolvated form of this complex displays a very progressive thermal SCO transition ranging from 80 K to 390 K but, more importantly, it experiences a quantitative and fast light-induced spin transition (LIESST effect). The sublimation of this material yields the formation of remarkably homogeneous and amorphous thin films in the range 15 to 150 nm whose chemical integrity is identical to that of the bulk material. The SCO properties of these films have been determined by XAS measurements. These studies indicate that, in terms of both thermal and light-induced spin transitions, the behavior of these films matches very well with that of the bulk powder. Thus, the thermal SCO behavior of the bulk material is to a large extent well preserved in the films. More importantly, the LIESST effect is found to occur in both the bulk and the thin films. Remarkably, the retainment of this phenomenon in the ultrathin films makes this material highly appealing for implementation in applications such as electronic and spintronic devices. In fact, this phenomenon has so far been poorly exploited in the field of molecular electronics, 38,53,64 since not many SCO materials display it, especially in the case of sublimable ones, and their performance is generally not very outstanding in terms of quickness and effectiveness. In spintronic devices this phenomenon has not yet been exploited. Hence, as a next step, this material could be envisioned for implementation in horizontal electronic devices integrating these SCO molecules with conducting/semiconducting 2D materials, 25,38,[64][65][66][67][68][69] as well as in vertical spintronic devices in which the SCO ultrathin film is encapsulated between two ferromagnetic electrodes. More specifically, we aim to improve upon the few results attained so far by exploiting the light-induced SCO switchability of this type of magnetic molecular material.

… was collected. The yield of the target compound is about 5.0 g (51%).

Complex [Fe(neoim)2] (D).
To a solution of neoimH (1000 mg, 3.8 mmol) in MeOH (20 ml), Fe(BF4)2•6H2O (640 mg, 1.9 mmol) was added. The resulting dark red solution was refrigerated (4 °C) overnight, and the plate-like red crystals that formed were filtered off, air-dried, and suspended in a mixture of NH3(aq), 25% (30 ml), and CHCl3 (100 ml). The violet-colored organic layer was separated, and the aqueous phase was extracted three more times with CHCl3 (in portions of 50 ml each). The organic solutions were combined, dried over MgSO4, and evaporated to dryness, producing a brown-colored powder of the complex.

Optical microscopy. OM imaging was performed using a Nikon Eclipse LV-150N microscope coupled to a Nikon DS-FI3 camera and through a 50x objective.

Atomic force microscopy. AFM images were collected using a Bruker Dimension Icon with ScanAsyst in tapping mode and processed and analyzed using the Gwyddion program. The statistical value of roughness was calculated using this software as the RMS value.

Infrared spectroscopy. The IR spectrum was collected using a Fourier-transform infrared spectrometer (NICOLET 5700, Thermo Electron Corporation) equipped with a module that allows measuring the transmittance of the reflected IR light, between 3100 cm⁻¹ and 650 cm⁻¹, from a film sample specifically grown on a 3 cm × 3 cm Au-coated glass substrate.

Raman spectroscopy. Raman spectra were collected separately on a film grown on a SiO2 substrate and on the bulk powder directly scattered on a glass slide, using the same equipment and acquisition parameters: a Horiba LabRAM HR Evolution equipped with a 473 nm laser beam with a maximum power of 1.08 mW µm⁻², in the 1000-1700 cm⁻¹ Raman shift region.

X-ray absorption spectroscopy. XAS characterization of the different films of [Fe(neoim)2] indicated in the results discussion (deposited on 7 mm × 3 mm Si/SiO2 substrates) and of the desolvated bulk powder was performed at the Boreas beamline of the ALBA synchrotron, during both granted beamtimes and in-house experiments. The substrate films were fixed to copper sample holders using aluminum clips, sitting on pieces of indium foil for their proper thermalization. Bulk powder was directly scattered onto C-tape stripes attached to the copper sample holder. For each case, the measurements implied the collection of three consecutive energy scans (for averaging) at each of a series of different temperatures within the range 2-370 K, in separate cooling and heating modes, focused on the Fe L2,3 edge region. The scans were performed using total electron yield mode with a low photon flux (0.020 nA ≤ intensity ≤ 0.025 nA), exposing the samples to X-rays only during the scan collection periods. For the LIESST effect studies, the samples were irradiated with a red laser (He-Ne laser from Research Electro-Optics Inc. (R-30993), wavelength 633 nm and power 12 mW) at 2 K, right after the cooling process, and for only 10 min. All collected spectra for this study were processed for analysis through background subtraction and normalization using the Igor program. The dependence of the HS fraction (%) on temperature for each case was calculated from a fit to a linear combination of the spectra showing the characteristic shapes for full and (approximately) zero HS fractions of the bulk material (370 K during heating and 80 K during cooling, respectively).
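The background subtraction and normalization step named above is not specified in detail (it was performed in Igor); as an illustration only, a hedged sketch of one common recipe, a linear pre-edge background removal followed by scaling to unit maximum:

```python
import numpy as np

def preprocess(E, I, pre_edge=(700.0, 704.0)):
    """Subtract a straight-line background fitted to the pre-edge region,
    then normalize to unit maximum. One common convention, assumed here;
    not necessarily the exact procedure used in the study."""
    mask = (E >= pre_edge[0]) & (E <= pre_edge[1])
    slope, intercept = np.polyfit(E[mask], I[mask], 1)
    I_bg = I - (slope * E + intercept)      # remove the linear baseline
    return I_bg / I_bg.max()                # scale to unit maximum

# Toy raw scan: a linear baseline plus one absorption peak near 709 eV.
E = np.linspace(698, 730, 600)
raw = 0.02 * E - 13.5 + np.exp(-((E - 709.0) / 1.0) ** 2)
clean = preprocess(E, raw)
```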
Scheme II. Simplified scheme of the pathway followed to afford the [Fe(neoim)2] complex.

Figure 2. Representative crystal packing of [Fe(neoim)2]•H2O•2CHCl3. Thin bicolor rods represent intermolecular short contacts. The hydrogen atoms of the complex have been omitted for clarity.

Figure 4. (a) OM image of a 100 nm thick [Fe(neoim)2] film (green area) deposited on a SiO2 substrate (purple area). (b) AFM image of a 1 µm × 1 µm region collected on the same film indicated in (a). (c) IR spectra of a 100 nm thick [Fe(neoim)2] film deposited on a Au substrate (red line) and of the bulk powder reference (black line). (d) Raman spectra of a 100 nm thick [Fe(neoim)2] film deposited on a SiO2 substrate (red line) and of the bulk powder reference (black line). In (a) and (b) the scales are 50 µm and 400 nm, respectively. In (b) the roughness of the film calculated as the RMS value is indicated.

Figure 5. XAS spectra collected at 300, 80 and 20 K during cooling, 2 K after 10 min red laser irradiation, and 100 and 370 K during heating in the Fe a) L3 and b) L2 edges for [Fe(neoim)2] scattered on C-tape, and c) calculated HS fraction from each XAS spectrum collected at each temperature along the full thermal cycle (blue line - cooling, orange line - LIESST and red line - heating). XAS spectra collected at 300, 90 and 20 K during cooling, and 90 and 240 K during heating in the Fe d) L3 and e) L2 edges for a 100 nm thick film of [Fe(neoim)2] deposited on SiO2, and f) calculated HS fraction from each XAS spectrum collected at each temperature along the full thermal cycle (blue line - cooling and red line - heating). XAS spectra collected at 300, 80 and 50 K during cooling, and 80 and 180 K during heating in the Fe g) L3 and h) L2 edges for a 150 nm thick film of [Fe(neoim)2] deposited on SiO2, and i) calculated HS fraction from each XAS spectrum collected at each temperature along the full thermal cycle (blue line - cooling and red line - heating). XAS spectra collected at 300, 80 and 2 K during cooling, 2 K after 10 min red laser irradiation, and 100 and 300 K during heating in the Fe j) L3 and k) L2 edges for a 15 nm thick film of [Fe(neoim)2] deposited on SiO2, and l) calculated HS fraction from each XAS spectrum collected at each temperature along the full thermal cycle (blue line - cooling, orange line - LIESST and red line - heating).
Synthesis of Bioactive Microcapsules Using a Microfluidic Device

Bioactive microcapsules containing Bacillus thuringiensis (BT) spores were generated by a combination of a hydrogel, a microfluidic device, and a chemical polymerization method. As a proof of principle, we used BT spores displaying enhanced green fluorescent protein (EGFP) on the spore surface to spatially direct the EGFP-presenting spores within microcapsules. BT spore-encapsulated microdroplets of uniform size and shape are prepared through a flow-focusing method in a microfluidic device and converted into microcapsules through hydrogel polymerization. The size of the microdroplets can be controlled by changing both the dispersion and continuous flow rates. Poly(N-isopropylacrylamide) (PNIPAM), known as a hydrogel material, was employed as a biocompatible material for the encapsulation of BT spores, offering long-term storage and outstanding stability. Due to these unique properties of PNIPAM, nutrients from Luria-Bertani complex medium diffused into the microcapsules, and the microencapsulated spores germinated into vegetative cells under adequate environmental conditions. These results suggest that there is no limitation on the transfer of low-molecular-weight substrates through the PNIPAM structures, and the viability of the microencapsulated spores was confirmed by the culture of vegetative cells after germination. This microfluidic-based microencapsulation methodology provides a unique way of synthesizing bioactive microcapsules in a one-step process. This microfluidic-based strategy would be potentially suitable for producing microcapsules of various microbial spores for on-site biosensor analysis.

Introduction

Recently, the generation of spatially well-defined, three-dimensional (3D) microstructures for whole-cell sensing systems has attracted interest in the development of portable bacterial whole-cell biosensing systems and high-throughput cellular analysis, as well as in fundamental studies of cell biology [1][2][3]. To achieve this goal, three major aspects should be considered: (a) selection of biocompatible materials to construct 3D microstructures; (b) fabrication methods to control the size and uniform shape of the 3D microstructures; (c) polymerization methods to produce hydrogels [4][5][6]. Regarding the selection of biocompatible materials, different types have been used over the past few decades. Among the various types of biocompatible materials, hydrogels have been attractive to biochemical, biomedical, and biomaterial researchers because of their non-toxicity, robustness, and inertness.
Among the different kinds of hydrogel materials, including polyethylene glycol (PEG), alginate, and poly(N,N-dimethylacrylamide) (PDMAA), the hydrogel-based polymer poly(N-isopropylacrylamide) (PNIPAM) has been one of the most promising materials for the manipulation of biomaterials [7][8][9][10]. In addition, unlike other hydrogels, PNIPAM produces a thin wall which provides a unique hollow inner structure. Because of this structural advantage, microorganisms can be encapsulated inside the resulting microcapsule, which provides enough space for their growth [7,11]. The fabrication method used to generate 3D microstructures of uniform size and shape is another important factor. In order to achieve these goals, microdroplet-based microfluidic systems have been developed and widely used [12,13]. With the advanced technologies for the transport and manipulation of droplets, many possibilities exist nowadays to carry out synthesis and to functionalize microdroplets for biomedical applications, including therapeutic delivery and biomedical imaging, biotransformation, diagnostics, and drug discovery [3,[14][15][16]. For this reason, microdroplets have been widely employed as containers to encapsulate various types of biological substances (i.e., cells, DNAs, and proteins) in discrete microdroplets [7,12,16]. Unlike in conventional systems, the selection of materials for the production of microfluidic devices is important for the generation of microdroplets of uniform shape. Among several materials, polydimethylsiloxane (PDMS) has been widely used as the material of choice for microfluidic devices due to the low-cost fabrication of the microfluidic channels, high transparency, and biocompatibility. The polymerization method for the monomer mixture in microdroplets is also important for producing the hydrogels [7,9]. Previous studies have shown the great potential of UV-irradiation-based approaches among the many fabrication methods, especially for the fabrication of polymer particles. However, polymer residues normally accumulate in the orifice or microchannels and disrupt the flow inside the PDMS channels, making the stable production of particles of uniform size difficult. In addition, PDMS itself absorbs UV light, preventing proper curing of the polymers [17]. In order to overcome this issue, chemical polymerization has been developed and applied to fabricate hydrogels [8,18]. In this study, we have developed a new method to produce bioactive monodisperse PNIPAM-based microcapsules by using a combination of a microfluidic device and a chemical polymerization method. Monodisperse microdroplets were formed by two immiscible fluids in a flow-focusing microfluidic device. The polymerization of the continuously produced microdroplets was initiated by the addition of N,N,N',N'-tetramethylethylenediamine (TEMED), which acts as a catalyst. In addition, the produced microcapsules were highly monodisperse and suitable for mass production at room temperature with easy size control. To further demonstrate the ability of the synthesized PNIPAM-based microcapsules to act as bioactive containers, BT spores were encapsulated inside the microcapsules and subsequently cultivated there to convert into vegetative cells.

Preparation of Bacterial Spores

All bacterial strains and plasmids used in this study were reported previously [19].
We cultured BT subspecies israelensis 4Q7 harboring the expression vector for protein display in CDSM medium [20] at 37 °C with shaking at 250 rpm for 48-60 h. A pellet was obtained from 100 mL of culture by centrifugation (10,000 × g, 10 min) and resuspended in 1 mL of 20% (wt/vol) urografin. This suspension was gently layered over 10 mL of 50% (wt/vol) urografin in a 15 mL centrifuge tube and then centrifuged at 16,000 × g for 10 min at 4 °C. The collected pellets, containing only free spores, were stored at −20 °C [20,21].

Fluorescence Analysis

The EGFP-displaying BT spores purified by the urografin gradient method [20] were washed and resuspended at ~1.0 × 10^8 CFU/mL in PBS. The fluorescence assay was performed using a multi-plate reader, SpectraMax M2 (Molecular Devices, Sunnyvale, CA, USA). Flow cytometry data were obtained using a FACSCalibur™ flow cytometer and the Cell Quest Pro™ software (BD Bioscience, San Jose, CA, USA). Spores displaying EGFP were analyzed for relative fluorescence intensity using an FL1 green fluorescence detector with a 530/30 nm bandpass filter. The mean value (M) indicates the mean fluorescence intensity obtained by the FL1 detector.

Imaging of EGFP-Displayed Spores

The purified EGFP-displaying spores were mounted on poly-L-lysine-coated glass slides (Cel & Associates, Pearland, TX, USA) and analyzed under an LSM 510 confocal laser scanning microscope (Carl Zeiss, Göttingen, Germany). Samples were excited at 488 nm with an argon laser, and the images were filtered with a 505 nm longpass filter. All final images were generated from 4-5 serial images made by automatic optical sectioning.

Fabrication of PDMS Microfluidic Devices

PDMS/PDMS-bonded microfluidic channels were fabricated by soft lithography and PDMS molding. A silicon master was coated with SU-8 photoresist by spin-coating, and the design was transferred onto the wafer using a mask and UV light exposure. Microfluidic devices were then molded in PDMS from the SU-8-patterned silicon master. A mixture of PDMS prepolymer and curing agent (10:1, Sylgard 184, Dow Corning) was stirred, degassed, and cured in a vacuum oven at 70 °C. After curing, the PDMS replica was peeled away from the silicon master and bonded to another PDMS slab using O2 plasma.

Droplet Polymerization and Spore Germination

The droplets were generated in the microfluidic device using a flow-focusing technique. The dispersive phase (DP) consisted of a mixture of potassium persulphate (initiator, 0.19 wt%), D-sorbitol (cross-linker, 0.6 wt%), PBS solution (56 wt%), NIPAM (24.8 wt%), BT spores, and LB broth (0.18 wt%). The continuous phase (CP) was a mixture of G-oil and Abil EM90 (2 wt%). Microdroplet generation in the microfluidic device was observed using an optical microscope with a charge-coupled-device camera (Eclipse Ti-S, Nikon, Tokyo, Japan). Once the microdroplets were generated through flow-focusing, they were collected and suspended in a TEMED/G-oil mixture (7.7 vol%) for polymerization. TEMED acts as a catalyst that promotes the polymerization to produce hydrogel microcapsules. In addition, Abil EM90 was used as a surfactant to prevent coagulation of the generated microdroplets during polymerization. The polymerized, spore-encapsulating microcapsules were washed with IPA and PBS solution several times, then dispersed in LB medium and stored overnight.
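The dispersive-phase recipe above is given in weight percent; as a quick arithmetic aid, the minimal Python sketch below converts those fractions into per-batch masses. The 10 g batch size is hypothetical, and the listed fractions do not sum to 100%, so the remainder (presumably the spore suspension) is reported as an unspecified balance.

```python
# Sketch: component masses for a dispersive-phase (DP) batch, using the
# weight fractions listed in the text. The 10 g batch size is hypothetical;
# the fractions do not sum to 100 %, so the remainder is reported as the
# unspecified balance (e.g., the BT spore suspension).
DP_WT_PERCENT = {
    "potassium persulphate (initiator)": 0.19,
    "D-sorbitol (cross-linker)": 0.6,
    "PBS solution": 56.0,
    "NIPAM monomer": 24.8,
    "LB broth": 0.18,
}

def batch_masses(total_g: float) -> dict:
    """Return the mass (g) of each listed DP component for a batch."""
    masses = {name: total_g * pct / 100.0 for name, pct in DP_WT_PERCENT.items()}
    masses["unspecified balance"] = total_g - sum(masses.values())
    return masses

if __name__ == "__main__":
    for name, grams in batch_masses(10.0).items():
        print(f"{name}: {grams:.3f} g")
```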
A confocal microscope (LSM510 META NLO, Carl Zeiss, Göttingen, Germany) was used to monitor fluorescence intensity changes of the BT spores in the PNIPAM microcapsules.

Results and Discussion

The major dimensions of the microfluidic device were a 50 μm orifice and a 100 μm height for all microchannels; the detailed dimensions of the microfluidic device and its picture are shown in Figure S1. For the production of microdroplet-based hydrogel beads, a mixture of NIPAM (20%, w/w), MBA (5%, w/w), initiator, and a suspension of EGFP-displayed BT spores (1.0 × 10^5 CFU/mL) was injected through the center inlet of the PDMS-based microfluidic device as the DP. To generate microdroplets, a mixture of G-oil and Abil EM90 as a surfactant was employed as the CP through the other inlets. The overall fabrication process and the dimensions of the microfluidic device are schematically illustrated in Figures 1 and S1. In this study, G-oil and Abil EM90 were selected because this mixture is inert, is immiscible with the PNIPAM monomer solution, and prevents the potential merging of the produced microdroplets. As the DP passes through the orifice of the device, the DP flow is squeezed and sheared off by the applied CP flow and the orifice to form monodisperse microdroplets. The production of bioactive microcapsules of different sizes is important for controlling the loading of biomolecules, cells or biomaterials in further applications. For this reason, the microdroplet-based microfluidic device was designed to control the size of the microcapsules. The size of the microdroplets was simply controlled by changing both the CP and DP flow rates in the microfluidic device, and the results are presented in Figure 2 and Figures S1-S4. Figure 2(A) shows the relationship between the droplet size and the CP and DP flow rates. Under the same CP flow rate, a high DP flow rate generates relatively large microdroplets compared with a low DP flow rate. Moreover, as the CP flow rate increases under the same DP flow rate, the size of the produced microdroplets decreases. These results indicate that a high CP flow rate strengthens the shearing force and accelerates the detachment of the droplets from the DP flow at the orifice. Because of this mechanism, a broad range of microdroplet sizes, from 186 μm down to 61 μm, was easily obtained by controlling both the CP and DP flow rates, as sketched in the toy model below. The detailed results of microdroplet production under different flow rate conditions are shown in Figures S2-S5. The CP and DP flow rates were easily controlled by a syringe pump. Even though the generated microdroplets were close to each other, the droplets remained separated from one another owing to the presence of the surfactant, as shown in Figure 2(B-E). Unlike with UV-based polymerization, no surface discoloration of the PDMS, which would indicate PDMS damage by UV irradiation, was observed during the chemical polymerization. Moreover, there was no blockage of the microchannels or orifice by polymer residues, a common occurrence during UV-based polymerization. These stable conditions allowed continuous production of BT spore-encapsulating microcapsules through the microfluidic devices. Once the microdroplets were formed through the orifice, they were merged into the TEMED mixture, which acts as a catalyst.
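The flow-rate dependence described above can be illustrated with a toy scaling model. The power-law form and the exponent below are assumptions for illustration only; only the qualitative trend (larger droplets at higher DP rate, smaller at higher CP rate) and the single calibration point (DP 1 μL/min, CP 5 μL/min giving ~62 μm droplets, reported in the next section) come from the text.

```python
# Toy scaling model for droplet size versus flow rates, for illustration only.
# The power-law form and the exponent ALPHA are assumptions, not fitted
# values from the paper.
ALPHA = 0.4                     # assumed exponent of the DP/CP flow-rate ratio
D_REF_UM = 62.0                 # reported diameter at the reference condition
Q_DP_REF, Q_CP_REF = 1.0, 5.0   # reference flow rates (uL/min)

def droplet_diameter_um(q_dp: float, q_cp: float) -> float:
    """Estimate droplet diameter (um) from DP and CP flow rates (uL/min)."""
    ratio = (q_dp / q_cp) / (Q_DP_REF / Q_CP_REF)
    return D_REF_UM * ratio ** ALPHA

# Sanity check: raising the CP rate at fixed DP rate should shrink droplets.
assert droplet_diameter_um(1.0, 10.0) < droplet_diameter_um(1.0, 5.0)
print(round(droplet_diameter_um(2.0, 5.0), 1), "um at DP=2, CP=5")
```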
As soon as the mixture of NIPAM monomer, initiator, crosslinker, and BT spores in the microdroplets was exposed to TEMED, the NIPAM monomers started to polymerize. During the polymerization, potassium persulfate, as the initiator, generated radicals, and NIPAM was crosslinked with MBA as the crosslinker, physically enclosing the BT spores in the PNIPAM microcapsules. In most cases, the shape and size of hydrogels in microdroplets change during the polymerization process. To investigate the morphological changes of the microdroplets during polymerization, we set the DP flow rate to 1 μL/min and the CP flow rate to 5 μL/min. As shown in Figure S1, the average droplet size produced under these conditions was around 62 μm. After polymerization of the monomers inside the droplets, microcapsules were obtained. The microcapsules maintained their uniform spherical shapes even after polymerization, as shown in Figure 3, and the average diameter of the produced PNIPAM microcapsules was 60.29 ± 2.19 µm, similar to the average size of the microdroplets. Only slight changes in diameter, and no changes in shape, were observed (Figure 3). This result demonstrates the successful fabrication of monodisperse microcapsules. The production of PNIPAM microcapsules from the NIPAM monomer was confirmed by Fourier transform infrared spectroscopy (FT-IR), a powerful tool for identifying specific chemical bonds at the surface. Both the NIPAM monomer and PNIPAM were analyzed, and the characteristic absorbance bands are marked with numbers and arrows in Figure 3(B). Three distinctive absorbance peaks of NIPAM were observed at 917, 962, and 985 cm−1, corresponding to vibrations of the C=C double bond. Moreover, the broad peaks around 2,970 and 3,295 cm−1 indicate asymmetric -CH2 stretching and secondary amide N-H stretching. After NIPAM was polymerized into PNIPAM, characteristic bands were clearly observed at 1,388 cm−1 (deformation of the methyl group), 1,459 cm−1 (-CH3 and -CH2 deformation), 1,542 cm−1 (secondary amide N-H stretching), 1,631 cm−1 (secondary C=O stretching), 2,854 cm−1 (-CH3 symmetric stretching), 2,929 cm−1 (asymmetric -CH2 stretching), 2,975 cm−1 (asymmetric -CH3 stretching), and 3,286 cm−1 (secondary amide N-H stretching and bending) [7,22,23]. Comparing NIPAM and PNIPAM, the three distinct C=C peaks of NIPAM disappeared and could not be observed in the PNIPAM FT-IR spectrum, while the other major peaks were still observed owing to the similar chemical structures of NIPAM and PNIPAM. These results indicate the successful polymerization of NIPAM and the fabrication of PNIPAM-based microcapsules using the microfluidic device (the reported band assignments are collected as a lookup table below). To demonstrate the biological feasibility of the PNIPAM microcapsules, EGFP-displayed BT spores were employed for further investigation. The BT spores were mixed with the monomer solution and injected to produce microdroplets, and the microorganisms were encapsulated inside the microcapsules after polymerization. The small black dots, which are the encapsulated BT spores inside the highly transparent microdroplets, were easily observed using an optical microscope, as shown in Figure 3. As a first step towards developing our bioactive microcapsules, we sought to engineer the InhA-mediated spore-surface display of EGFP as a model protein.
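For convenience, the FT-IR band assignments reported above can be collected into a small lookup table; this Python sketch simply restates the published peak list as a data structure with a nearest-band helper.

```python
# Reference table of the PNIPAM FT-IR band assignments reported in the text,
# plus the NIPAM C=C bands that vanish on polymerization.
PNIPAM_BANDS_CM1 = {
    1388: "deformation of methyl group",
    1459: "-CH3 and -CH2 deformation",
    1542: "secondary amide N-H stretching",
    1631: "secondary C=O stretching",
    2854: "-CH3 symmetric stretching",
    2929: "asymmetric -CH2 stretching",
    2975: "asymmetric -CH3 stretching",
    3286: "secondary amide N-H stretching and bending",
}
NIPAM_CC_BANDS_CM1 = (917, 962, 985)  # C=C bands absent in the PNIPAM spectrum

def nearest_assignment(wavenumber: float, tol: float = 10.0):
    """Return (band, assignment) closest to `wavenumber`, or None if > tol away."""
    band = min(PNIPAM_BANDS_CM1, key=lambda b: abs(b - wavenumber))
    return (band, PNIPAM_BANDS_CM1[band]) if abs(band - wavenumber) <= tol else None

print(nearest_assignment(1630))   # -> (1631, 'secondary C=O stretching')
```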
The EGFP-displayed BT spores were confirmed by flow cytometry (Figure 4(A)), fluorescence assay (Figure 4(B)) and confocal microscopy (Figure 4(C)). Specific fluorescence signals were observed only with the EGFP-displayed BT spores, indicating that the green fluorescence was derived from the displayed EGFP. To confirm the encapsulation of EGFP-displayed BT spores in the microcapsules, the spore-encapsulating PNIPAM microcapsules were collected and their fluorescence characteristics were investigated by confocal microscopy. As shown in Figure 5(A), a strong green fluorescence signal derived from the encapsulated BT spores was observed in the fluorescence image. Highly transparent and monodisperse microcapsules were also obtained, as shown in Figure 5(B). To investigate the viability of the encapsulated spores and their subsequent conversion into vegetative cells, the BT spore-encapsulated microcapsules were placed in Luria-Bertani (LB) medium for germination and incubated at 37 °C for 24 h. Once germinated, the vegetative cells no longer displayed EGFP on their surface; accordingly, the fluorescence became weaker under the germination conditions. These fluorescence and optical property changes would be strong evidence that the BT spores were converted into vegetative cells by germination within the microcapsules; for this reason, we monitored both the fluorescence and the optical changes. The BT spores maintained their viability under the microencapsulation conditions and were successfully germinated into vegetative cells. In addition, no fluorescence signals remained in the microcapsules, as shown in Figure 5(C). Spatially confined microstructures of vegetative cells (live cells) were observed after 24 h of incubation, together with some free vegetative cells (Figure 5(D)). In addition, the highly transparent microcapsules turned dark gray; the darkness inside a microcapsule indicates that vegetative cells are agglomerated within it. Some microcapsules were covered with vegetative cells, as shown in Figure 5(D). Once the vegetative cells were growing and packing inside the microcapsules, the hydrogels were flexible enough to let the cells break out of the microcapsules. These results suggest that the strategy presented herein should be useful for generating microstructures of virtually any microbial cells by spatially addressing their spores within microdroplets.

Conclusions

We developed a novel method for producing bioactive microcapsules that encapsulate a biological species, BT spores, in a one-step process using a microdroplet-based microfluidic system. The size of the microdroplets depends mainly on both the CP and DP flow rates applied to the microfluidic device. For the protection of the BT spores from the surrounding environment, PNIPAM was used as a bio-inert material to produce the microcapsules. The viability of the encapsulated BT spores was confirmed by culturing the produced microcapsules, in which the spores germinated into vegetative cells. This method could provide an accurate, efficient and robust means of preparing bioactive microdroplets for droplet-based drug screening, biosensors, and biotransformations. Furthermore, it is potentially applicable to the development of whole-cell biosensors and could be developed into a rapid, high-throughput, field-portable method for the detection of biological and environmental samples.
2014-10-01T00:00:00.000Z
2012-07-26T00:00:00.000
{ "year": 2012, "sha1": "14cbb05120f39e0fd8443d5802d4ce40669260f6", "oa_license": "CCBY", "oa_url": "http://www.mdpi.com/1424-8220/12/8/10136/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "14cbb05120f39e0fd8443d5802d4ce40669260f6", "s2fieldsofstudy": [ "Biology", "Chemistry", "Engineering", "Environmental Science", "Materials Science" ], "extfieldsofstudy": [ "Computer Science", "Chemistry", "Medicine" ] }
263826943
pes2o/s2orc
v3-fos-license
Usefulness of endoscopic ultrasound-guided transhepatic biliary drainage with a 22-gauge fine-needle aspiration needle and 0.018-inch guidewire in the procedure's induction phase

Abstract

Endoscopic ultrasound (EUS)-guided transhepatic biliary drainage is usually performed with a 19-gauge fine-needle aspiration (FNA) needle and a 0.025-inch guidewire. The combination of a 22-gauge FNA needle and a 0.018-inch guidewire is reported to be effective as a rescue option when the bile duct diameter is small or the puncture is technically challenging. Experts in EUS-guided transhepatic biliary drainage have reported that bile duct puncture with a 19-gauge FNA needle is possible in most cases, but this is not easy to reproduce for endoscopists with less experience in EUS-guided transhepatic biliary drainage. We investigated the usefulness of EUS-guided transhepatic biliary drainage using a 22-gauge FNA needle and a 0.018-inch guidewire during the procedure's induction phase. Consecutive patients who underwent EUS-guided transhepatic biliary drainage at our institution from March 2021 to May 2023 were evaluated, and 37 were included. Biliary drainage was performed for malignant bile duct stricture in 36 patients and choledocholithiasis in one patient. The median target bile duct diameter was 4.5 mm (2.5-9.4). Biliary access, fistula dilation, and stent placement were successful in all 37 patients (100%). The median procedure time was 35 min (16-125). Adverse events occurred in four (10.8%) patients. EUS-guided transhepatic biliary drainage using a 22-gauge FNA needle and a 0.018-inch guidewire is a useful and promising option for endoscopists with limited experience in EUS-guided transhepatic biliary drainage during the procedure's induction phase.

INTRODUCTION

Recently, endoscopic ultrasound (EUS)-guided biliary drainage (EUS-BD) has become a widely used method of bile duct drainage for obstructive jaundice.1,2 EUS-BD can be categorized into EUS-guided choledochoduodenostomy (EUS-CDS) and EUS-guided transhepatic biliary drainage, depending on the approach route. EUS-guided transhepatic biliary drainage is the treatment of choice when conventional endoscopic retrograde cholangiopancreatography (ERCP) is difficult because of duodenal obstruction or surgically altered anatomy.3,4 In some cases, antegrade stenting or bridging drainage to the right liver lobe is performed simultaneously.5,6 However, most reports7-9 are from high-volume centers, and EUS-guided transhepatic biliary drainage is not an easy procedure for endoscopists with little experience, particularly during the procedure's induction phase. The technical success rate during the induction phase of EUS-guided transhepatic biliary drainage is relatively low, at 64.7%.10 A certain number of cases is required to stabilize the technique.11 Therefore, various technical tips have been reported to improve the success rate.12 Puncture of the target bile duct, cholangiography, and placement of a guidewire into the bile duct are very important early steps in EUS-guided transhepatic biliary drainage. Although the combination of a 19-gauge fine-needle aspiration (FNA) needle and a 0.025-inch guidewire is commonly used for these steps owing to good guidewire maneuverability and compatibility with dilation devices,13 in some cases it is difficult to find an adequate puncture line with a 19-gauge FNA needle owing to thin bile ducts or intervening blood vessels. In such cases, the usefulness of a 22-gauge FNA needle has been reported.9,14
In addition to improved puncture ability, a 22-gauge FNA needle provides a wider range of needle movement than a 19-gauge FNA needle, making it easier to select a bile duct branch that can be punctured.12 However, the previously available 0.018-inch guidewire was not the first method of choice because of its inferior maneuverability compared with a 0.025-inch guidewire and the limited dilation devices available after guidewire placement.13 Therefore, EUS-guided transhepatic biliary drainage using a 22-gauge FNA needle and a 0.018-inch guidewire was reported as a rescue option when the bile duct diameter was small and difficult to puncture. Recently, a 0.018-inch guidewire with improved maneuverability has become commercially available and is compatible with dilation devices.15,16 Although the usefulness of EUS-guided transhepatic biliary drainage with this new 0.018-inch guidewire has also been reported, the results are mainly for difficult cases in high-volume centers where EUS-guided transhepatic biliary drainage is performed in large numbers.17 For this method to become commonly used, it was necessary to study the outcomes among endoscopists with little experience in performing EUS-guided transhepatic biliary drainage in non-high-volume centers. Therefore, a retrospective observational study was conducted to investigate the usefulness of EUS-guided transhepatic biliary drainage with a 22-gauge FNA needle and a 0.018-inch guidewire in the induction phase of the procedure in a non-high-volume center.

Study design

This retrospective observational study was conducted at Tonan Hospital, Sapporo, Hokkaido, Japan. The study protocol was approved by our hospital's ethics committee (institutional ID: 2022-1-8-1). All participants provided written informed consent before the procedure.

Patients

Consecutive patients who underwent EUS-guided transhepatic biliary drainage at our institution from March 2021 to May 2023 were evaluated. Age, sex, indications for EUS-guided transhepatic biliary drainage, site of bile duct stenosis, puncture site, diameter of the targeted bile duct, success rate of biliary access, technical success rate, clinical success rate, total procedure time, and adverse event rate were analyzed. Data associated with the endoscopic procedure were evaluated using the interventional EUS database, endoscopy reports, and video records.

Definitions

Success of biliary access was defined as successful bile duct puncture and insertion of a guidewire into the bile duct. Technical success was defined as the successful placement of a stent in the appropriate position. Clinical success was defined as a 50% decrease in, or normalization of, the serum total bilirubin level in jaundice cases. In cases with segmental cholangitis, the disappearance of clinical symptoms, including abdominal pain and fever, was considered a clinical success. Total procedure time was defined as the time from scope insertion to removal. Adverse events were classified and graded according to the American Society for Gastrointestinal Endoscopy lexicon.18
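The jaundice clinical-success definition above is a simple rule that can be expressed directly in code. In this minimal Python sketch, the normalization cutoff of 1.2 mg/dL is an assumed reference value; the paper does not state the laboratory's upper limit of normal.

```python
# Sketch of the study's clinical-success criterion for jaundice cases:
# a >= 50 % decrease in, or normalization of, serum total bilirubin.
T_BILI_UPPER_NORMAL = 1.2  # mg/dL -- assumed upper limit of normal

def clinical_success(baseline_mg_dl: float, followup_mg_dl: float) -> bool:
    """Apply the jaundice clinical-success definition from the Methods."""
    normalized = followup_mg_dl <= T_BILI_UPPER_NORMAL
    halved = followup_mg_dl <= 0.5 * baseline_mg_dl
    return normalized or halved

print(clinical_success(8.4, 3.9))  # True: more than a 50 % decrease
print(clinical_success(2.0, 1.5))  # False: neither halved nor normalized
```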
EUS-guided transhepatic biliary drainage

All endoscopic procedures were performed or supervised by a physician (Kei Yane) who had experience with more than 2000 ERCPs and 500 EUS-FNA procedures, but fewer than 10 EUS-guided transhepatic biliary drainage procedures at the time of the first procedure of this study (Video S1). Two other endoscopists (Masahiro Yoshida: two cases of EUS-guided transhepatic biliary drainage before the study; Takayuki Imagawa: no EUS-guided transhepatic biliary drainage experience) also performed the procedure during the study period. As regards EUS-guided interventions other than transhepatic biliary drainage, Kei Yane had experience with 22 cases of cyst drainage, three cases of pancreatic duct drainage, three cases of gallbladder drainage (GBD), and two cases of CDS; Masahiro Yoshida had experience with one case of GBD; Takayuki Imagawa had no experience with any of these. There were four assistants (Kei Yane, Masahiro Yoshida, Kotaro Morita, and Hideyuki Ihara, overlapping with the endoscopists), two with fewer than 10 cases of EUS-guided transhepatic biliary drainage assistance and two with no experience in EUS-guided transhepatic biliary drainage assistance. A 22-gauge FNA needle (EXPECT Slimline; Boston Scientific) and a 0.018-inch guidewire (Fielder 18; Olympus Medical Systems) were used with an oblique-viewing linear echoendoscope (EG-580UT or EG-740UT; Fujifilm Corp.) in all cases (Figure 1). First, an endoscopic clip (SureClip; Micro-tech) was placed at the esophagogastric junction to serve as a guide on the fluoroscopic image. Subsequently, the target bile duct (B2 or B3) was visualized, the bile duct diameter at the puncture site was measured, and the duct was punctured with the 22-gauge FNA needle. Following cholangiography, the 0.018-inch guidewire was advanced through the FNA needle and placed in the bile duct. After the guidewire was inserted and advanced into the hilar side of the biliary tree, tract dilation was performed as required. A tapered-tip ERCP catheter (ERCP CATHETER; MTW Endoskopie), a double-lumen catheter (Uneven double lumen cannula; Piolax), a tapered-tip mechanical dilator (ES dilator; Zeon Medical), a balloon dilator (REN; Kaneka), and a drill dilator (Tornus ES; Olympus Medical Systems) were used for tract dilation at each endoscopist's discretion. An ERCP catheter was then inserted into the bile duct. Bile was aspirated to decompress the bile duct, and cholangiography was performed to confirm the stenosis. A 0.025-inch guidewire (VisiGlide 2; Olympus Medical Systems, or EndoSelector; Boston Scientific) was then inserted and placed in the appropriate position (i.e., common bile duct, intrahepatic bile duct, duodenum, or jejunum), depending on the case and the planned procedure, and a biliary stent was placed (Figure 2). As a principle, in cases where antegrade stenting was planned in addition to EUS-guided transhepatic biliary drainage, a laser-cut-type uncovered metal stent with a thin delivery system (ZEOSTENT V; Zeon Medical) was used, and a plastic stent (TYPE-IT; Gadelius Medical) was used at the fistula site. In cases with ascites in the puncture tract, partially covered metal stents (Niti-S S-type or Spring Stopper; Taewoong Medical Inc.) were used at the fistula site.
In cases where only EUS-guided transhepatic biliary drainage was planned, partially covered metal stents or plastic stents were used at the endoscopist's discretion. Clinical symptoms and physical findings were recorded after the procedure. Laboratory data and computed tomography were also assessed the following day. If there were no adverse events, the patient was allowed to eat the following day.

RESULTS

Thirty-seven patients (median age, 71 years; 21 men and 16 women) were included. The patients' characteristics are shown in Table 1. The reason for biliary drainage was malignant bile duct stricture in 36 patients (pancreatic cancer, 17; gastric cancer, 10; gallbladder cancer, two; cholangiocarcinoma, two; other malignancy, five) and choledocholithiasis in one patient. The indications for EUS-guided transhepatic biliary drainage were surgically altered anatomy (n = 20), duodenal obstruction (n = 10), segmental cholangitis that was difficult to control by ERCP (n = 5), and an obscure ampulla due to tumor invasion (n = 2). The sites of bile duct stenosis were distal (n = 27), hilar (n = 6), choledochojejunal anastomosis (n = 2), and distal plus hilar (n = 1). There was no bile duct stenosis in the choledocholithiasis patient. The puncture target was B3 in 23 patients (62.2%) and B2 in 14 patients (37.8%). The puncture site was the stomach in 30 patients (81.1%) and the jejunum in seven patients (18.9%). The median target bile duct diameter was 4.5 mm (2.5-9.4). The results of the procedure are shown in Table 2. A total of 23 procedures were performed by Kei Yane, 12 by Masahiro Yoshida, and two by Takayuki Imagawa. Target bile duct puncture was successful in all 37 patients (100%). In one patient, the portal vein was punctured accidentally, which was recognized after guidewire insertion; the puncture needle was therefore removed and the bile duct was punctured again. In six patients, bile duct puncture was successful and cholangiography and guidewire insertion could be performed, but the guidewire went into the peripheral bile duct and was difficult to advance to the hepatic hilum. For this reason, the guidewire was advanced to the hepatic hilum in two cases using the liver impaction technique.19
In the remaining four cases, the puncture needle was removed and the duct was re-punctured to successfully insert the guidewire at the appropriate site. In these cases, the second puncture was easily performed, with little change in the EUS image or the dilated state of the bile duct. Finally, biliary access and guidewire placement to the target site were possible in all 37 patients. None of the patients required a 19-gauge FNA needle for biliary access; in all patients, biliary access was achieved with the combination of a 22-gauge FNA needle and a 0.018-inch guidewire. In one case of segmental cholangitis with hilar biliary stenosis, in which B3 was punctured and a plastic stent (TYPE-IT) was placed in the same area, the procedure was completed with the 0.018-inch guidewire, whereas in the other 36 cases the guidewire was exchanged for a 0.025-inch guidewire. Fistula dilation was also successful in all 37 patients. One patient scheduled for hepaticogastrostomy (HGS) with antegrade stenting underwent antegrade stenting alone owing to HGS stent breakage during stent deployment. Overall technical success was achieved in all 37 patients (100%). The procedures performed were EUS-HGS with antegrade stenting (n = 18), EUS-guided hepaticojejunostomy (EUS-HJS) with antegrade stenting (n = 7), EUS-HGS (n = 10), EUS-guided antegrade stenting (n = 1), and EUS-HGS with bridging stenting to the right liver lobe plus antegrade stenting (n = 1). In the patient who underwent antegrade stenting alone, HGS with a plastic stent (TYPE-IT) was attempted after antegrade stent placement; however, the stent was difficult to insert into the bile duct, and when an attempt was made to pull it back, the stent did not follow the delivery system and fell out of place, resulting in unsuccessful placement. The median procedure time was 35 min (16-125). Clinical success was achieved in 35 patients (94.6%). Two patients did not show a sufficient reduction in the serum total bilirubin level after the procedure. Adverse events occurred in four (10.8%) patients. In the patient with acute cholangitis owing to insufficient biliary drainage, HGS stent exchange and additional antegrade stent placement were performed via the HGS fistula site. In the patient with acute cholecystitis, percutaneous transhepatic gallbladder aspiration was performed. In the patient with stent dislodgement 10 days postoperatively, computed tomography showed no obvious bile leakage, and the bile duct remained dilated; therefore, EUS-HGS with antegrade stenting was performed urgently, and no further stent dysfunction was observed. There was no clinically evident bile peritonitis, including in the five patients who required bile duct re-puncture.

DISCUSSION

Most previous studies on EUS-guided transhepatic biliary drainage have reported that a 19-gauge FNA needle is appropriate for puncturing the bile duct,3,7,8,12 and the Japanese 2018 guidelines recommend the use of a 19-gauge FNA needle.13 The guidelines also recommend the use of a 0.025-inch or 0.035-inch guidewire. This is because, although the combination of a 22-gauge FNA needle and a 0.021- or 0.018-inch guidewire allows bile duct puncture, subsequent device insertion is difficult. However, target puncture is reportedly easier with a 22-gauge FNA needle than with a 19-gauge FNA needle in EUS-FNA,20 and, theoretically, the use of a 22-gauge FNA needle could facilitate bile duct puncture in EUS-HGS for the same reason. There have been reports of EUS-HGS with the combination of a 22-gauge FNA needle and a 0.018-inch guidewire in difficult cases.9,15
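As a supplementary check, not part of the paper, the proportions reported in the Results can be given 95% Wilson score intervals, which are informative at this sample size (n = 37):

```python
# Supplementary calculation (not in the paper): 95 % Wilson score intervals
# for the reported binomial proportions, e.g., adverse events 4/37 (10.8 %).
from math import sqrt

def wilson_ci(k: int, n: int, z: float = 1.96):
    """95 % Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

for label, k, n in [("technical success", 37, 37),
                    ("clinical success", 35, 37),
                    ("adverse events", 4, 37)]:
    lo, hi = wilson_ci(k, n)
    print(f"{label}: {k}/{n} = {k/n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```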
Although the previous 0.018-inch guidewire was difficult to use owing to its poor maneuverability, good results have been reported with improvements in dilation devices, mainly in high-volume centers.14 Recently, reports of this technique have increased with the improved 0.018-inch guidewire. Ogura et al. reported that puncture was possible in all cases, including non-dilated bile ducts.17 These reports show that the combination of a 22-gauge FNA needle and a 0.018-inch guidewire makes puncture easier than the use of a 19-gauge FNA needle and is a useful rescue technique, particularly in difficult cases. On the other hand, there have been no comprehensive reports on this technique's usefulness in non-high-volume centers during the procedure's induction phase. Thus, we conducted a retrospective observational study of the technical outcomes of all EUS-guided transhepatic biliary drainage procedures performed with a 22-gauge FNA needle and a 0.018-inch guidewire within a certain period of time during the procedure's induction phase, without limiting the target bile duct diameter.

The main advantage of a 22-gauge FNA needle is its ease of biliary access. Experts in EUS-guided transhepatic biliary drainage have reported that bile duct puncture with a 19-gauge FNA needle is possible in most cases,7,8 but this is often difficult to reproduce for an endoscopist with less experience in EUS-guided transhepatic biliary drainage.10 Although it is difficult to provide definitive data, we believe that a 22-gauge FNA needle is not only easier to use for bile duct puncture but also makes it easier to advance the guidewire towards the hepatic hilum by selecting the appropriate puncture site, as it can puncture at a deeper angle than a 19-gauge needle. From these points, we believe that the combination of a 22-gauge FNA needle and a 0.018-inch guidewire is highly advantageous in both difficult and normal cases, particularly for endoscopists without extensive EUS-guided transhepatic biliary drainage experience.

In this study, the target bile ducts could be punctured in all patients. Moreover, guidewires could be advanced to the hepatic hilum by standard guidewire manipulation in 83.8% (31/37) of cases. Two cases required an advanced technique (the liver impaction technique),19 and four cases required needle removal and re-puncture owing to the difficulty of guidewire insertion to the hepatic hilum. These results compare favorably with previous reports.10,11 In the four cases that required re-puncture, the second puncture was easily performed with little change in the EUS image and the dilated state of the bile duct, possibly because of the small amount of bile and contrast medium leakage.

A 0.018-inch guidewire is considered difficult to use because of its poor maneuverability, and even after successful guidewire placement, fistula dilation is often difficult because of limitations in the applicable devices.15,16 In the present study, tapered-tip ERCP catheters, mechanical dilators, balloon dilators, and drill dilators were used at each physician's discretion, and there were no unsuccessful cases during the fistula dilation step.
This study has several limitations. First, the retrospective design and single-center setting might have caused patient selection bias; there is also no control group. Therefore, a multicenter validation study in a similar procedure-induction setting should be performed. On the other hand, the number of included patients is relatively large compared with previous similar studies and hence is worth reporting. Second, in most patients, we used an uncovered metal stent with a fine-gauge (5.4 Fr) delivery system and a dedicated 7 Fr plastic stent, and in many patients antegrade stenting was combined with EUS-guided transhepatic biliary drainage. Therefore, it cannot be determined whether similar results would be achieved if EUS-HGS with a thick-delivery-system metal stent were performed without antegrade stenting. However, in almost all procedures, cholangiography using an ERCP catheter and replacement with a 0.025-inch guidewire were performed after fistula dilation. Therefore, theoretically, this method should be effective regardless of the details of the subsequent procedures.

In conclusion, EUS-guided transhepatic biliary drainage using a 22-gauge FNA needle and a 0.018-inch guidewire is a useful and promising option for endoscopists with limited EUS-guided transhepatic biliary drainage experience during the procedure's induction phase in a non-high-volume center.

FIGURE 1 A 0.018-inch guidewire (Fielder 18; Olympus) inserted into a 22-gauge fine-needle aspiration needle (Expect Slimline; Boston Scientific).

FIGURE 2 (a) Following cholangiography, a 0.018-inch guidewire was advanced and placed in the bile duct through a 22-gauge FNA needle. (b) An ERCP catheter was inserted into the bile duct, bile was aspirated to decompress the bile duct, and cholangiography was performed to confirm the stenosis. (c) A laser-cut-type uncovered metal stent was deployed across the papilla. (d) A plastic stent was deployed from the intrahepatic bile duct to the jejunum.

TABLE 1 Patient characteristics.

TABLE 2 Results of endoscopic ultrasound-guided transhepatic biliary drainage using a 22-gauge fine-needle aspiration needle and a 0.018-inch guidewire.
2023-10-12T05:06:15.453Z
2023-10-10T00:00:00.000
{ "year": 2023, "sha1": "6d32a7cf1c72cd46e6fa6d7eb17eee7ef0bfe946", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "6d32a7cf1c72cd46e6fa6d7eb17eee7ef0bfe946", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
7814943
pes2o/s2orc
v3-fos-license
Distributed under Creative Commons CC-BY 4.0

Genome-wide Identification and Characterization of GRAS Transcription Factors in Sacred Lotus (Nelumbo nucifera)

The GRAS gene family is one of the most important plant-specific gene families; it encodes transcriptional regulators and plays an essential role in plant development and physiological processes. The GRAS gene family has been well characterized in many higher plants such as Arabidopsis, rice, Chinese cabbage, tomato and tobacco. In this study, we identified 38 GRAS genes in sacred lotus (Nelumbo nucifera), analyzed their physical and chemical characteristics, and performed phylogenetic analysis using the GRAS genes from eight representative plant species to trace the evolution of GRAS genes in Planta. In addition, the gene structures and motifs of the sacred lotus GRAS proteins were characterized in detail. Comparative analysis identified 42 orthologous and 9 co-orthologous gene pairs between sacred lotus and Arabidopsis, and 35 orthologous and 22 co-orthologous gene pairs between sacred lotus and rice. Based on publicly available RNA-seq data generated from leaf, petiole, rhizome and root, we found that most of the sacred lotus GRAS genes exhibited a tissue-specific expression pattern. Eight of the ten PAT1-clade GRAS genes, particularly NnuGRAS-05, NnuGRAS-10 and NnuGRAS-25, were preferentially expressed in rhizome and root. In summary, this is the first in silico analysis of the GRAS gene family in sacred lotus, which will provide valuable information for further molecular and biological analyses of this important gene family.

INTRODUCTION

The GRAS genes encode plant-specific transcriptional regulators that have been characterized in many higher plant species. The name GRAS is an acronym of the first three functionally characterized members of this family: gibberellic acid insensitive (GAI), repressor of ga1-3 (RGA), and scarecrow (SCR) (Bolle, 2004). The GRAS proteins contain a variable N-terminal region and a highly conserved C-terminal region with five different sequence motifs in the following order: leucine heptad repeat I (LHR I, LRI), the VHIID motif, leucine heptad repeat II (LHR II, LRII), the PFYRE motif and the SAW motif (Pysh et al., 1999). The GRAS proteins are usually composed of 400-770 amino acid residues (Bolle, 2004). Much research has shown that the LRI, VHIID and LRII motifs, or the whole LRI-VHIID-LRII pattern, might function as DNA-binding or protein-binding regions during interactions between GRAS and other proteins. The functions of the PFYRE and SAW motifs are still not very clear, although the highly conserved residues in these motifs indicate that they are potentially related to the structural integrity of the GRAS proteins (Sun et al., 2011). In fact, the order of the conserved domains in all GRAS proteins is similar, and the regulatory specificity of the GRAS proteins seems to be determined by the length and amino acid sequences of their variable amino-termini (Tian et al., 2004). Based on studies of the model plants Arabidopsis and rice, members of the GRAS gene family are divided into eight distinct branches, namely LISCL, PAT1, SCL3, DELLA, SCR, SHR, LS and HAM, each named after a common feature or one of its members (Hirsch & Oldroyd, 2009). However, another phylogenetic analysis grouped the Arabidopsis GRAS proteins into ten branches, including LISCL, AtPAT1, AtSCL3, DELLA, AtSCR, AtSHR, AtLAS, HAM, AtSCL4/7 and DLT (Sun et al., 2011).
In another study, the GRAS genes were divided into at least 13 branches (Liu & Widmer, 2014). The number of GRAS members identified in a given species has also differed slightly among studies. For example, 32 to 34 GRAS genes have been reported for Arabidopsis, and 57 and 60 GRAS genes were identified in rice in two reports (Liu & Widmer, 2014; Tian et al., 2004). In addition, 21, 46, 48, 53 and 106 GRAS transcription factors have been identified in tobacco (Chen et al., 2015), Prunus mume (Lu et al., 2015), Chinese cabbage (Song et al., 2014), tomato (Huang et al., 2015) and Populus (Liu & Widmer, 2014), respectively. Members of the GRAS gene family have diverse functions and play important roles in plant development and physiological processes, such as gibberellin acid (GA) signal transduction, root development, axillary shoot development, phytochrome signal transduction, and transcriptional regulation in response to abiotic and biotic stresses. For example, the phosphomimic/de-phosphomimic RGA, a representative DELLA protein of the GRAS protein family in Arabidopsis, regulates different bioactivities in GA signaling (Wang et al., 2014). AtSCL3 plays an essential role in integrating multiple signals during root cell elongation in Arabidopsis (Heo et al., 2011). The gene MONOCULM 1 (MOC1), which is closely related to rice tillering, has been cloned; as a result of a defect in the formation of tiller buds, moc1 mutant plants contain no tillers except the main culm. As a member of the PAT1 branch, Arabidopsis Scarecrow-like 13 (AtSCL13) is a positive regulator of continuous red light signals downstream of phytochrome B (phyB); genetic evidence indicates that it also regulates phytochrome A (phyA) signal transduction in a phyB-independent way (Torres-Galea et al., 2006). Transgenic Arabidopsis plants overexpressing PeSCL7 from Populus euphratica Oliv. exhibited enhanced tolerance to salt and drought stress owing to the increased activity of two stress-responsive enzymes, α-amylase and superoxide dismutase (SOD), in the transgenic seedlings (Ma et al., 2010).

Sacred lotus (Nelumbo nucifera Gaertn) is a perennial aquatic herb with large showy flowers. It is widely distributed in Australia, China, India, Iran and Japan (Mukherjee et al., 2009). Sacred lotus has been cultivated as a crop in Far-East Asia for over 3,000 years. This plant is widely used for food (edible rhizomes, seeds and leaves) and medicine, and also plays a vital role in cultural and religious activities (Shen-Miller, 2002). The individual parts of sacred lotus, including leaf, stamen, stem, rhizome and seed, have various medicinal properties. For example, the embryos of the seeds are used in traditional Chinese medicine to treat nervous disorders, high fevers (with restlessness), insomnia and cardiovascular diseases (e.g., hypertension, arrhythmia). The leaves are mainly used for clearing heat, cooling blood, relieving heatstroke and stopping bleeding. The flowers are useful in treating premature ejaculation, bloody discharges and abdominal cramps (Mukherjee et al., 2009). The GRAS family genes have been well characterized in several plant species, but they have not been investigated in sacred lotus. In 2013, the complete genome of the China Antique variety of sacred lotus was sequenced and the draft genome was released.
The final assembly covers 86.5% of the estimated 929 Mbp total genome size, and the genome is well assembled, with a scaffold N50 of 3.4 Mbp. In the recent assembly, a total of 26,685 genes were annotated in the sacred lotus genome study, of which 4,223 represented a minimum gene set for vascular plants based on comparisons of the available sequenced genomes (Ming et al., 2013). The release of the whole genome sequence of sacred lotus enabled us to conduct a genome-wide identification and comparative analysis of the GRAS gene family in this plant. In this study, we identified 38 GRAS genes in sacred lotus and constructed a phylogenetic tree of the GRAS genes from eight plant species. We also analyzed the gene structure and conserved motifs of the sacred lotus GRAS genes and performed comparative analysis of the GRAS genes from Arabidopsis, rice and sacred lotus. Furthermore, the expression profiles of the sacred lotus GRAS genes were investigated in four different tissues using publicly available RNA-seq data. This study provides essential information on the GRAS family genes in sacred lotus and enhances our understanding of the GRAS family genes in plants.

Identification and characterization of the sacred lotus GRAS genes

The sacred lotus genome sequence and the annotated sacred lotus gene and protein sequences were downloaded from Lotus-DB (http://lotus-db.wbgcas.cn/, v1.0) (Wang et al., 2015). The newest HMM model for the GRAS transcription factor gene family, PF03514.11, was downloaded from the Pfam database (http://pfam.xfam.org/) (Eddy, 2011; Finn et al., 2014). The HMMER software was employed to search for GRAS proteins in the entire protein dataset of sacred lotus with a cut-off E-value of 1e−5, using PF03514.11 as the query. The identified candidate GRAS proteins were manually checked for the presence of the motifs essential for a protein to be defined as GRAS, and all of them were retained for further analysis. The coding sequences and the genomic sequences of the identified sacred lotus GRAS genes were obtained according to the GFF3 annotation file. The genomic position of each GRAS gene on the assembled mega-scaffolds of sacred lotus was also obtained from the GFF3 file.

Phylogenetic analysis of the GRAS genes

Eight representative species were selected for comparative analysis of their GRAS proteins. The genome sequences of spruce (Picea abies), the first available gymnosperm genome assembly (Nystedt et al., 2013), were collected from the Congenie website (http://congenie.org/). The annotated proteins of algae (Chlamydomonas reinhardtii), moss (Physcomitrella patens), fern (Selaginella moellendorffii), columbine (Aquilegia coerulea), Arabidopsis thaliana, and rice (Oryza sativa) were downloaded from the Phytozome database (v10) (Goodstein et al., 2012). The GRAS genes in these species were identified using the aforementioned method. No GRAS gene was identified in the algae. Given that there were two GRAS domains in each of two rice genes (LOC_Os12g04370, LOC_Os11g04570), only the best-hit domain of each was selected for phylogenetic tree construction to avoid redundant amino acids. The ClustalX2 program was used to generate the multiple sequence alignments of the GRAS proteins with the Gonnet protein weight matrix (Larkin et al., 2007). The MEGA program (v6.06) was employed to construct a maximum-parsimony phylogenetic tree using the JTT model with 500 bootstrap replicates (Tamura et al., 2013).
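A minimal sketch of the HMMER identification step described above might look as follows. File names are placeholders; hmmsearch and its --tblout tabular output are standard HMMER3 features, and the E-value cutoff matches the 1e−5 used in this study.

```python
# Sketch: search the sacred lotus proteome with the Pfam GRAS model
# (PF03514) at E <= 1e-5, then collect the unique hit identifiers.
import subprocess

def run_hmmsearch(hmm="PF03514.hmm", proteome="lotus_proteins.fasta",
                  out="gras_hits.tbl", evalue="1e-5"):
    # hmmsearch writes a space-delimited per-target table with --tblout.
    subprocess.run(["hmmsearch", "--tblout", out, "-E", evalue, hmm, proteome],
                   check=True)
    hits = []
    with open(out) as fh:
        for line in fh:
            if not line.startswith("#"):
                hits.append(line.split()[0])  # column 1 = target protein name
    return sorted(set(hits))

print(len(run_hmmsearch()), "candidate GRAS proteins")
```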
All sites in each of the GRAS proteins were included in the phylogenetic tree construction. To clearly distinguish the genes in this phylogenetic tree, the terms "moss" and "fern" were added as prefixes to indicate genes from Physcomitrella patens and Selaginella moellendorffii, respectively, while the common names were used as prefixes for the other species. The frequency of each divergent branch was displayed if its value was higher than 50%. The Adobe Illustrator software was used to clearly show the GRAS branches after classification of all proteins based on the known background information.

Gene structure and motif analysis

The gene structure was analyzed using the Gene Structure Display Server tool (http://gsds.cbi.pku.edu.cn/, v2.0) (Hu et al., 2015). The UTR sequences were gathered and displayed in the final gene structure. The MEME software (http://meme-suite.org/doc/download.html, v4.11.0) was used to search for motifs in all 38 sacred lotus GRAS proteins with a motif width of 6 to 50 residues (Bailey et al., 2009). The number of motifs to be found was set to the default of 10, and the motifs were allowed to occur in any number of repetitions.

Identification of orthologous and paralogous genes

OrthoMCL (v2.0.3) (Li, Stoeckert & Roos, 2003) was used to search for orthologous, co-orthologous and paralogous genes in sacred lotus, Arabidopsis, and rice using the entire GRAS protein sequences. We ran the ortholog analysis using OrthoMCL with an E-value of 1e−5 for the all-against-all BLASTP alignment and a percent-match cut-off of 50. The orthologous and paralogous relationships were gathered and displayed using the Circos software (http://circos.ca/) (Krzywinski et al., 2009).

Analysis of the GRAS gene expression in different tissues

Transcriptome data of leaf, petiole, rhizome internode and root of sacred lotus have been previously generated and processed. The FPKM (fragments per kilobase per million) value representing the expression level of each sacred lotus gene is available in Lotus-DB (http://lotus-db.wbgcas.cn/genome_download/transcript/). We retrieved the FPKM values of all sacred lotus GRAS genes from Lotus-DB and displayed the results using the HemI program (http://hemi.biocuckoo.org/) with the maximum distance similarity metric (Deng et al., 2014).

Genome-wide identification of GRAS genes in sacred lotus

A total of 38 distinct GRAS transcription factors were identified in the sacred lotus genome (Table S2). We named these GRAS candidates NnuGRAS-01 through NnuGRAS-38 based on their E-value ranking (Table 1 and Table S2). The length of the GRAS proteins varied greatly, ranging from 74 amino acids (aa) to 807 aa. It is noteworthy that the minimum length of a typical GRAS domain is about 350 amino acids (Liu & Widmer, 2014). Based on this criterion, several of the 38 GRAS proteins that we identified in sacred lotus were exceptional: for example, NNU_15540-RA, NNU_11453-RA and NNU_26501-RA were 97, 74 and 168 aa long, respectively. The average length of the coding sequences (CDS) of the GRAS genes (1,544 bp) was longer than the average for all sacred lotus genes (1,135 bp). The analysis revealed that the GRAS genes of sacred lotus contained longer exons and shorter introns. The physical and chemical characteristics of all 38 sacred lotus GRAS proteins, including the number of amino acids, molecular weight, theoretical pI, formula, instability index, aliphatic index and GRAVY, were analyzed and summarized in Table S2.
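The properties summarized in Table S2 can be re-computed with Biopython's ProtParam module, as in the sketch below. ProtParam does not provide the aliphatic index directly, so it is computed here with Ikai's formula from the amino-acid composition; the input sequence is a placeholder.

```python
# Re-computation sketch of the Table S2 physical/chemical properties with
# Biopython. The aliphatic index follows Ikai's formula:
#   AI = X(Ala) + 2.9*X(Val) + 3.9*(X(Ile) + X(Leu)), X in mole percent.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

def gras_properties(seq: str) -> dict:
    pa = ProteinAnalysis(seq)
    pct = {aa: f * 100 for aa, f in pa.get_amino_acids_percent().items()}
    aliphatic = pct["A"] + 2.9 * pct["V"] + 3.9 * (pct["I"] + pct["L"])
    return {
        "length (aa)": len(seq),
        "molecular weight": round(pa.molecular_weight(), 1),
        "theoretical pI": round(pa.isoelectric_point(), 2),
        "instability index": round(pa.instability_index(), 2),  # >40 = unstable
        "aliphatic index": round(aliphatic, 2),
        "GRAVY": round(pa.gravy(), 3),  # < 0 = hydrophilic
    }

print(gras_properties("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))  # placeholder peptide
```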
The average theoretical pI was 5.0, suggesting that most of the proteins are acidic. Only two proteins, NnuGRAS-35 and NnuGRAS-38, with theoretical pI values of 9.03 and 9.12, respectively, were considered alkaline. Three GRAS proteins with an instability index of less than 40 were classified as stable, and the rest were classified as unstable. The average aliphatic index of all proteins was 85.8, ranging from 67.68 to 108.11; most sacred lotus GRAS proteins thus contain numerous aliphatic amino acids. The GRAVY scores of all proteins, except NnuGRAS-29 (0.005), NnuGRAS-37 (0.264) and NnuGRAS-38 (0.482), were less than zero, suggesting that these proteins are hydrophilic. Based on a comparative analysis, the physical and chemical characteristics of the sacred lotus GRAS proteins were in general similar to those of Chinese cabbage (Song et al., 2014).

The representative gene families in sacred lotus

To shed light on the full gene complement of sacred lotus, a genome-wide identification of genes in all gene families was performed. A collection of 16,230 Pfam HMM models was used following the abovementioned method. A total of 19,925 sacred lotus genes were assigned to 4,032 gene families; some genes were assigned to several families because the boundaries between some similar gene families were not entirely clear. The Pkinase, PPR (pentatricopeptide repeat), LRR or RRM (RNA recognition motif), MYB, LRRNT (leucine-rich repeat N-terminal domain), TPR (tetratricopeptide repeat) and WD40 gene families contained the most members in sacred lotus. Most of these abundant gene families comprise repeat-containing proteins, transcription factors and zinc-finger-containing genes.

Phylogenetic relationship of the GRAS proteins in Planta

To gain insight into the evolution of the GRAS genes in the plant kingdom, we analyzed the phylogenetic relationships of the GRAS genes from seven plant species: moss (Physcomitrella patens), fern (Selaginella moellendorffii), a gymnosperm (Picea abies), a monocot (rice, Oryza sativa), a representative eudicot species (Arabidopsis thaliana) and two basal eudicots (Aquilegia coerulea and Nelumbo nucifera). The genus Nelumbo, the sole extant genus in the family Nelumbonaceae, represents one of the earliest eudicot clades (Li et al., 2014). Columbine was also included in the analysis because it belongs to another primitive family positioned between the gymnosperms and the Nelumbonaceae. The number of GRAS genes in sacred lotus was 38, which is more than in Arabidopsis (32-34) but fewer than in Prunus mume (46), Solanum lycopersicum (53) and Chinese cabbage (48) (Huang et al., 2015; Lu et al., 2015; Song et al., 2014; Tian et al., 2004). A previous study showed that phylogenetic trees based on the full-length sequences or on the conserved C-termini of the GRAS proteins were very similar; in both analyses, the GRAS genes were classified into eight branches (Bolle, 2004). The phylogenetic tree of all GRAS genes generated in this study was generally consistent with previous reports. In addition to the eight previously reported branches, a new branch, SCL3/28, was defined here based on phylogeny. AtSCL8 and AtSCL26 were not assigned to any previously defined branch; here, they were classified into the PAT1 and HAM branches, respectively, owing to their relative similarity to other members of these two branches.
According to the phylogenetic tree, the sacred lotus GRAS genes were distributed across all nine branches, with 4, 3, 4, 1, 4, 2, 7, 10 and 3 members in the DELLA, SCR, SCL4/7, LS, HAM, SCL9, SHR, PAT1 and SCL3/28 branches, respectively (Fig. 1; these counts are restated in the consistency check below). The SCR and SHR branches contained three (NnuGRAS-14, NnuGRAS-19 and NnuGRAS-32) and seven sacred lotus GRAS proteins, respectively. GRAS genes of the SHR branch have previously been shown to be crucial for the development of root and shoot (Cui et al., 2007). NnuGRAS-16 and NnuGRAS-17 were classified into the SCL9 branch, whose members have previously been found to regulate the transcription process during microsporogenesis in Lilium longiflorum (Morohashi et al., 2003). Ten sacred lotus GRAS proteins clustered into the PAT1 branch; PAT1 has been shown to be potentially involved in phytochrome A signal transduction and is localized to the cytoplasm in Arabidopsis (Bolle, Koncz & Chua, 2000). The DELLA branch includes four sacred lotus GRAS proteins: NnuGRAS-09, NnuGRAS-11, NnuGRAS-02 and NnuGRAS-08. The Arabidopsis GRAS proteins of this branch are associated with negative regulation of GA signaling (Wen & Chang, 2002). We did not find GRAS genes in the green algae, which is consistent with previous results. The GRAS family transcription factors are thought to have first emerged in bacteria and belong to the Rossmann-fold methyltransferase superfamily. All bacterial GRAS proteins are likely to function as small-molecule methylases, and some of the plant GRAS proteins could have a similar function (Zhang, Iyer & Aravind, 2012). We therefore deduced that the GRAS genes in plants might have originated from a bacterial genome. The typical plant GRAS genes first appeared in moss, with 40 members identified in Physcomitrella patens.

Figure 1 Phylogenetic tree of the GRAS genes from seven plant species, constructed based on the GRAS domains using the maximum-parsimony method. Individual species are distinguished by shapes of different colors.

The total number of GRAS genes in the different species fell within a narrow range of 34 to 60; however, the distribution of the GRAS genes across branches was extremely uneven. In some species, the GRAS genes were mainly found in certain branches. For example, 29 of the 52 fern GRAS genes and 13 of the 60 rice GRAS genes were classified into the SCL9 branch, while only one columbine (of 35 in total) and two sacred lotus (of 38 in total) GRAS genes were found in the SCL9 branch (Fig. 1). The number of Arabidopsis thaliana GRAS genes in SCL9 was expanded to six members, which may have resulted from the Gamma, Beta and Alpha whole-genome duplications; this will need further research. In addition, nine spruce GRAS genes also accumulated in a sub-branch of the PAT1 branch. As columbine and sacred lotus are closely related based on the phylogenetic analysis, every sacred lotus GRAS gene had closely related columbine GRAS genes, and most columbine GRAS genes had one or two corresponding sacred lotus GRAS members, except for Aquaca_076_00075.1 and four members of the DELLA branch. In one sub-branch of the DELLA branch, there were 2, 4 and 3 fern, columbine and rice GRAS genes, respectively, but no moss or spruce GRAS genes. Interestingly, most GRAS genes contained only one typical domain, but two rice genes (LOC_Os12g04370 and LOC_Os11g04570) contained two domains.
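As a small consistency check on the branch assignments above, the per-branch counts can be tallied against the 38 identified genes:

```python
# The per-branch counts reported in the text, with a quick check that they
# sum to the 38 identified sacred lotus GRAS genes.
BRANCH_COUNTS = {"DELLA": 4, "SCR": 3, "SCL4/7": 4, "LS": 1, "HAM": 4,
                 "SCL9": 2, "SHR": 7, "PAT1": 10, "SCL3/28": 3}
assert sum(BRANCH_COUNTS.values()) == 38
print(f"{len(BRANCH_COUNTS)} branches, {sum(BRANCH_COUNTS.values())} genes total")
```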
Gene structure analysis

The structural divergence of exon/intron regions has played a crucial role in the evolution of gene families (Bai et al., 2011). To evaluate the possible evolution and diversity of the sacred lotus GRAS genes, we analyzed the number and location of the exons, introns and UTRs of each GRAS gene (Fig. 2). The results showed that up to 73.7% (28/38) of the sacred lotus GRAS genes are intronless, which is lower than in Prunus mume (82.2%) and tomato (77.4%), but higher than in Arabidopsis (67.6%), rice (55%) and Populus (54.7%) (Abarca et al., 2014; Liu & Widmer, 2014; Lu et al., 2015; Tian et al., 2004). Only ten of the 38 sacred lotus GRAS genes had introns, ranging in number from one to seven: seven genes had a single intron, two genes (NnuGRAS-10 and NnuGRAS-35) had three introns, and NnuGRAS-38 had seven introns (Fig. 2). In addition, it was notable that most GRAS members of the same branch generally showed similar exon-intron structures, and most of the paralogous pairs also shared conserved exon-intron structures. However, there were exceptions in the exon-intron structures for different GRAS members of the same branch, such as . These results suggest that one gene in each of these pairs might have experienced intron loss or gain events during its evolution. The intron phase patterns of the GRAS genes in different branches, and even within the same branch, were quite different, which implies that the relationships among the GRAS genes of different branches are not as close as those observed in other gene families, such as the Squamosa Promoter Binding Protein (SBP) gene family in Brassica rapa and the TIFY gene family in Arabidopsis (Bai et al., 2011; Tan et al., 2015). Interestingly, NnuGRAS-38 contained seven introns while its homolog NnuGRAS-18 contained only one, suggesting that even homologs can undergo gene structure diversification, although the possibility of incorrect gene annotation cannot be ruled out. In addition, NnuGRAS-07, NnuGRAS-10, NnuGRAS-18 and NnuGRAS-35 contained long introns of over 4 kb in length, which needs further validation.

Identification of conserved motifs in the sacred lotus GRAS proteins

To analyze the conserved features of the sacred lotus GRAS family proteins, we used the MEME program to identify conserved motifs in all GRAS proteins (Fig. 3A). Surprisingly, we found that 23 sacred lotus GRAS proteins contained all 10 motifs (the default number in the MEME analysis) and that only six sacred lotus GRAS proteins contained fewer than eight motifs. The SHR branch was the most representative of the GRAS gene family, with all of its proteins containing 10 motifs. In addition, we found that the motifs were more likely to be located in the C-terminal region than in the N-terminal region, supporting the notion that the C-terminal region of the GRAS proteins is more conserved than the N-terminal region (Pysh et al., 1999). The lengths of these motifs ranged from 21 to 29 amino acids (Fig. 3B).

Identification of orthologous, co-orthologous and paralogous GRAS genes in sacred lotus, Arabidopsis and rice

Comparative analysis identified orthologous, co-orthologous and paralogous GRAS genes among sacred lotus, Arabidopsis and rice using OrthoMCL (Fig. 4). In total, we identified 42 orthologous and nine co-orthologous gene pairs between sacred lotus and Arabidopsis, and 35 orthologous and 22 co-orthologous gene pairs between sacred lotus and rice. Between Arabidopsis and rice, 29 orthologous and 20 co-orthologous gene pairs were found.
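The study used OrthoMCL for the ortholog search (all-against-all BLASTP at E ≤ 1e−5, a match cutoff of 50, followed by Markov clustering). The sketch below shows a simpler reciprocal-best-hit approximation of the same idea, with hypothetical best-hit dictionaries; it is not the OrthoMCL algorithm itself.

```python
# Simplified reciprocal-best-hit (RBH) sketch of an ortholog search.
# best_ab / best_ba are hypothetical dicts: query id -> best BLASTP hit id.
def reciprocal_best_hits(best_ab: dict, best_ba: dict) -> set:
    """Return (a, b) pairs where a's best hit is b and b's best hit is a."""
    return {(a, b) for a, b in best_ab.items() if best_ba.get(b) == a}

# Toy example with illustrative gene ids:
lotus_to_ath = {"NnuGRAS-01": "AT1G14920", "NnuGRAS-02": "AT2G01570"}
ath_to_lotus = {"AT1G14920": "NnuGRAS-01", "AT2G01570": "NnuGRAS-05"}
print(reciprocal_best_hits(lotus_to_ath, ath_to_lotus))
# -> {('NnuGRAS-01', 'AT1G14920')}; the non-reciprocal pair is excluded
```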
In addition, a total of 26 (68.4%) sacred lotus GRAS proteins were found to have corresponding paralogous proteins, a proportion higher than in Arabidopsis (19, 55.9%) and rice (31, 51.7%) (Table S4). It is generally considered that orthologous genes have similar gene structures and biological functions (Chen et al., 2007). Identification of orthologous genes is crucial for phylogenetic analysis, since it can play a role in elucidating gene and plant evolution. The sacred lotus genome underwent a lineage-specific whole-genome duplication (WGD) event about 65 million years ago, which separated it from other eudicots prior to the Gamma genome-triplication event. The lack of the triplication event makes the sacred lotus genome an excellent research material bridging eudicots and monocots (Ming et al., 2013). We found that the number of orthologous gene pairs between sacred lotus and Arabidopsis was greater than that between sacred lotus and rice. In comparison, the number of orthologous GRAS gene pairs between Chinese cabbage and Arabidopsis was 52 (Song et al., 2014), more than that between sacred lotus and Arabidopsis. This result suggested that Arabidopsis is genetically closer to Chinese cabbage than to sacred lotus.
Figure 4. Comparative analysis of the GRAS genes in Arabidopsis, rice and sacred lotus. Yellow, blue and red lines indicate orthologous, co-orthologous and paralogous gene pair relationships, respectively.
Expression pattern of the sacred lotus GRAS genes in different tissues We used publicly available RNA-seq data from four tissues (leaf, petiole, rhizome and root) to investigate the expression profiles of the sacred lotus GRAS genes (Wang et al., 2015). Based on clustering with the maximum-distance similarity metric in two dimensions, we found that the GRAS genes shared similar expression patterns in leaf and petiole as well as in rhizome and root (Table S2 and Fig. 5). The GRAS genes of some branches exhibited tissue-specific expression patterns. For example, the GRAS genes of the PAT1 branch seemed to be more abundantly expressed in rhizome and root than in leaf and petiole, especially NnuGRAS-05, NnuGRAS-10, and NnuGRAS-25. This suggests that these genes might be associated with the development of the edible sacred lotus rhizome, and further functional characterization of them could help increase rhizome production through molecular-assisted breeding. Six of the seven GRAS genes of the SHR branch (all except NnuGRAS-21) showed higher expression levels in leaf and petiole than in rhizome and root. The members of the SCL 3/28 branch had similar expression levels in the four tissues investigated. The expression levels of NnuGRAS-08, NnuGRAS-16, NnuGRAS-33, NnuGRAS-36 and NnuGRAS-37 were very low in all four tissues (Fig. 5).
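The "maximum distance" metric mentioned above corresponds to complete-linkage hierarchical clustering. A hedged sketch of such two-way clustering follows; the gene and tissue names come from the text, but the expression values are made up for illustration:

```python
# Hedged sketch: two-way hierarchical clustering of an expression matrix
# with complete ("maximum distance") linkage; values are illustrative.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

genes = ["NnuGRAS-05", "NnuGRAS-10", "NnuGRAS-25", "NnuGRAS-21"]
tissues = ["leaf", "petiole", "rhizome", "root"]
expr = np.array([
    [1.2, 0.9, 8.4, 7.1],   # made-up FPKM-like values
    [0.4, 0.6, 5.3, 6.0],
    [0.8, 1.1, 9.9, 8.7],
    [6.2, 5.8, 1.0, 1.4],
])

gene_link = linkage(pdist(expr), method="complete")      # cluster genes
tissue_link = linkage(pdist(expr.T), method="complete")  # cluster tissues
print(dict(zip(genes, fcluster(gene_link, t=2, criterion="maxclust"))))
```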
Ceftolozane/Tazobactam and Ceftazidime/Avibactam for Multidrug-Resistant Gram-Negative Infections in Immunocompetent Patients: A Single-Center Retrospective Study
Complicated infections from multidrug-resistant Gram-negative bacteria (MDR-GNB) represent a serious problem presenting many challenges. Resistance to many classes of antibiotics reduces the probability of an adequate empirical treatment, with unfavorable consequences, increasing morbidity and mortality. Readily available patient medical history and updated information about the local microbiological epidemiology remain critical for defining the baseline risk of MDR-GNB infections and guiding empirical treatment choices, with the aim of avoiding both undertreatment and overtreatment. There are few literature data reporting real-life experience with ceftolozane/tazobactam and ceftazidime/avibactam, with particular reference to microbiological cure. Some studies have reported experience in treating MDR-GNB infections in patients with hematological malignancies or specifically in Pseudomonas aeruginosa infections. We report our single-center clinical experience regarding the real-life use of ceftolozane/tazobactam and ceftazidime/avibactam to treat serious and complicated infections due to MDR-GNB and carbapenem-resistant Enterobacterales (CRE), with particular regard to intra-abdominal and urinary tract infections and sepsis. Introduction Infections due to multidrug-resistant (MDR) Gram-negative bacteria (GNB) are difficult to treat and represent a serious ongoing health emergency, particularly in patients who often have comorbidities. These difficult infections are responsible for high direct costs arising from the use of new antimicrobial drugs, as well as the indirect costs of prolonged hospitalization and healthcare-related expenditure [1,2]. The increase in infections by Pseudomonas aeruginosa, Acinetobacter baumannii, Klebsiella pneumoniae, and extended-spectrum β-lactamase-producing (ESBL) or carbapenem-resistant Enterobacterales (CRE) has hindered the treatment of these infections, with a consequent increase in morbidity and mortality [3][4][5][6][7]. The link between the increased consumption of carbapenems, considered last-resort antibiotics for the treatment of infections due to MDR Gram-negative bacteria, and the emergence of carbapenemase-producing Enterobacterales (CPE) other than CRE is now well demonstrated [8,9]. The need for new and effective anti-CRE therapies, together with adequate infection source control, is urgent. Currently, antibiotic options include high doses and combination-therapy strategies with polymyxins, tigecycline, fosfomycin, or aminoglycosides to maximize treatment success. Moreover, the new β-lactam/β-lactamase inhibitors should be considered in severe CRE infections [10]. Ceftazidime/avibactam has recently been approved for the treatment of complicated intra-abdominal infections (cIAIs), complicated urinary tract infections (cUTIs), and hospital-acquired and ventilator-associated bacterial pneumonia (HABP and VABP). The European Medicines Agency (EMA) also authorized the use of ceftazidime/avibactam in adult patients with infections due to aerobic GNB with limited therapeutic options.
Unlike most β-lactamase inhibitors, avibactam is a novel synthetic non-β-lactam (diazabicyclooctane) β-lactamase inhibitor that inhibits a wide range of β-lactamases, including Ambler Class A (TEM, SHV, CTX-M, and KPC), Class C (AmpC), and some Class D (OXA-48) β-lactamases [11], expanding the activity spectrum of ceftazidime to MDR Gram-negative bacteria. It does not inhibit Class B MBLs (IMP, VIM, VEB, and NDM) [12,13]. Ceftolozane/tazobactam has recently been approved for the treatment of cIAIs, cUTIs, and HABP and VABP. Tazobactam protects ceftolozane against ESBL-producing Enterobacterales, as demonstrated by Phase III trials [14]. Ceftolozane is notably active against P. aeruginosa, with minimum inhibitory concentrations (MICs) lower than those of ceftazidime, one of the most active anti-pseudomonal β-lactams; this activity is retained for many strains with derepressed AmpC or up-regulated efflux [15]. Here, we report the clinical experience with ceftazidime/avibactam and ceftolozane/tazobactam to treat or consolidate the treatment of serious Gram-negative infections in the University Hospital of Ferrara, Italy, between 2017 and 2019. The primary objective was to describe the microbiological cure in the study population. The secondary objectives were to describe (i) the appropriateness of ceftolozane/tazobactam and ceftazidime/avibactam in GNB infections; (ii) the adverse events related to ceftazidime/avibactam and ceftolozane/tazobactam treatment; and (iii) the 30-day mortality. Eighty-eight patients (72%) received ceftolozane/tazobactam as a second-line therapy, with a median time to switching of 8 days (interquartile range (IQR) 3-10 days). Piperacillin/tazobactam (n = 35), meropenem (n = 14), amoxicillin-clavulanic acid (n = 11), and tigecycline (n = 8) were the antimicrobials most commonly prescribed prior to the initiation of ceftolozane/tazobactam. The switch to ceftolozane/tazobactam was mainly due to documented resistance to previous antibiotics and the lack of clinical response to previous antibiotic treatment. In some cases, patients received ceftolozane/tazobactam in combination with other antibiotics, mainly targeting possible Gram-positive microorganisms, including tigecycline (n = 5), daptomycin (n = 4), vancomycin (n = 3), teicoplanin (n = 3), and linezolid (n = 3). Enterococcus faecalis and E. faecium were isolated in three patients with cIAIs and cUTIs. Five patients received empirical combined ceftolozane/tazobactam treatment on suspicion of possible Gram-positive pneumonia or polymicrobial infections. Overall, polymicrobial Gram-negative infections were documented in 24 patients treated with ceftolozane/tazobactam. The 30-day all-cause mortality was 20.5% (25/122). Microbiological analyses on blood, urine, or peritoneal fluid samples obtained at 48 h, 72 h, and 7 days after the discontinuation of antibiotic therapy were performed. A microbiological cure was shown in 99/122 (81%) patients: in 81.1% of the patients treated with ceftolozane/tazobactam alone and in 78.3% of those treated with combination therapy. In particular, microbiological negativity was shown in 94.4%, 77.4%, and 95.8% for isolates of ESBL-producing P. aeruginosa, K. pneumoniae, and E. coli, respectively. The usefulness of ceftolozane/tazobactam in real practice was evidenced in infections due to ESBL-producing Enterobacterales (p < 0.001) and in meropenem-sensitive infections (p < 0.001) compared with meropenem-resistant infections.
Moreover, these results confirmed a possible role for ceftolozane/tazobactam in a carbapenem-sparing policy in the context of antibiotic stewardship. The appropriateness was confirmed by the use of ceftolozane/tazobactam in infections due to GNB resistant to third-generation cephalosporins and fluoroquinolones, according to the specific indications approved by our local regulatory organizations (Table A2). No adverse events related to ceftolozane/tazobactam were observed in the study population. Ceftazidime/avibactam was administered as a combination treatment in 24 patients (51.1%), and the median length of therapy was 12 days. The antimicrobial agents most frequently used in combination with ceftazidime/avibactam included meropenem (n = 5), tigecycline (n = 4), daptomycin (n = 3), and vancomycin (n = 3). Enterococcus faecalis and E. faecium were isolated in four patients with cIAIs and cUTIs. Polymicrobial Gram-negative infections were shown in 13 patients treated with ceftazidime/avibactam. Among the 47 patients treated with ceftazidime/avibactam, the 30-day all-cause mortality was 31.9% (15/47). Microbiological analyses on blood, urine, or peritoneal fluid samples obtained at 48 h, 72 h, and 7 days after the discontinuation of antibiotic therapy were conducted. A microbiological cure was shown in 31/47 (65.9%) patients: in 67.6% of patients treated with ceftazidime/avibactam alone and in 69.9% of those treated with combination therapy. A microbiological cure was achieved in all cases where ceftazidime/avibactam was combined with meropenem or daptomycin. In particular, microbiological negativity was shown in 75% and 53.3% of isolates of ESBL-producing K. pneumoniae and E. coli, respectively. These results evidenced the usefulness of ceftazidime/avibactam in real practice in carbapenem-resistant GNB infections compared with carbapenem-sensitive GNB infections, as shown by our finding of E. coli and K. pneumoniae with low sensitivity to meropenem (52.4%; p < 0.05), ertapenem (45.7%; p < 0.001), and ceftazidime (5.7%; p < 0.001). The appropriateness of ceftazidime/avibactam was confirmed by its use in infections due to carbapenem-resistant bacteria, according to the specific indications defined by our regulatory organizations (Table A3). No adverse events related to ceftazidime/avibactam were observed in the study population. Discussion Healthcare-associated infections from GNB are very frequent, and those caused by multidrug-resistant (MDR), extensively drug-resistant (XDR), and pandrug-resistant (PDR) microorganisms are increasing. Among the infections that do not fall into this classification, those recently defined as difficult-to-treat resistant (DTR) are increasingly relevant, representing the real condition of limited availability of first-line drugs in antibiotic therapy. Moreover, empirical therapeutic options are often inadequate owing to factors such as unfavorable pharmacokinetic and pharmacodynamic properties and toxicity. Tigecycline, fluoroquinolones, and aminoglycosides have been described as the antibiotics most frequently used in this setting [16,17]. GNB infections with recognized DTR are associated with high mortality, which makes the need for effective and, at the same time, well-tolerated antibiotics even more urgent. The new cephalosporin/β-lactamase-inhibitor combinations can also be considered a resource in the treatment of infections due to DTR microorganisms.
This retrospective single-center study evaluated a cohort of patients treated with two new β-lactam/β-lactamase-inhibitor combinations, ceftolozane/tazobactam and ceftazidime/avibactam, for infections due to ESBL-producing and carbapenemase-producing Enterobacterales. In particular, the microbiological response to therapy with these two drugs and the appropriateness of their use, on the basis of the determined sensitivities of the isolated microorganisms, were studied. The increase in infections due to ESBL-positive Enterobacterales induced an increased use of β-lactams and penicillins combined with β-lactamase inhibitors, in particular piperacillin/tazobactam. During the period considered in our study, the increased demand for piperacillin/tazobactam was followed by some periods of unavailability of this antibiotic. Another determining factor in this choice was the containment of meropenem use with a view to a carbapenem-sparing policy, also considering the percentage of sensitivity to meropenem found in P. aeruginosa. A good response to therapy was observed in patients treated with ceftolozane/tazobactam, demonstrated by the high number of negative samples. The high efficacy of ceftolozane/tazobactam in infections with P. aeruginosa, but also in infections with ESBL-producing Enterobacterales, was confirmed. The study also highlighted the use of ceftolozane/tazobactam in patients with infections by GNB that do not produce ESBLs. A careful analysis of the data showed that this use coincided temporally with the lack of availability of piperacillin/tazobactam in Italy, driven by the high demand resulting from the increased incidence of ESBL-producing Enterobacterales [18]. Considering the different clinical syndromes included in our study, and limited to the few cases described, a particular consideration can be made about the patients with cIAIs (58/122) treated with ceftolozane/tazobactam. A total of 7/58 received ceftolozane/tazobactam as a first-line therapy. When it was used as a second-line therapy, the antibiotics most frequently used beforehand were piperacillin/tazobactam (19/58), tigecycline (7/58), amoxicillin/clavulanic acid (8/58), and meropenem (6/58). In these patients, we observed a rapid response to ceftolozane/tazobactam therapy, with microbiological cure within 7 days of treatment. Biological samples were collected at 48 h, 72 h, and 7 days after the end of therapy in all patients, as per the antibiotic stewardship program of our hospital. Follow-up was performed by culturing urine and blood samples for cUTIs; blood and peritoneal fluid samples for cIAIs; blood samples for sepsis and bloodstream infections; and bronchoalveolar lavage or sputum for respiratory tract infections. Therapy duration was defined as 7-10 days for cIAIs, 7-14 days for bloodstream infections, 10-14 days for cUTIs, 5-7 days for CAP, and 7-10 days for HAP. Central or peripheral intravenous catheters were removed or replaced early in all cases of bloodstream infection, allowing the duration of antibiotic therapy to be reduced to 5-7 days. Effective antibiotics against CRE and CPE remain very limited in Italy. Current therapeutic strategies are to increase the doses of carbapenems, colistin, and tigecycline or to combine these drugs. Double-carbapenem therapy has been adopted as a salvage therapy for critically ill patients with CRE or CPE infections [19,20], but the evidence supporting this therapeutic option is weak [21,22].
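As a side illustration (mine, not the study's), the therapy-duration policy quoted above can be encoded as a simple lookup table; the syndrome keys are shorthand of my own choosing:

```python
# Hedged sketch: the planned therapy-duration ranges, in days, as quoted
# in the text; keys are illustrative shorthand for the syndromes.
THERAPY_DURATION_DAYS = {
    "cIAI": (7, 10),
    "bloodstream": (7, 14),
    "cUTI": (10, 14),
    "CAP": (5, 7),
    "HAP": (7, 10),
}

def planned_range(syndrome: str) -> str:
    lo, hi = THERAPY_DURATION_DAYS[syndrome]
    return f"{lo}-{hi} days"

print(planned_range("cUTI"))  # -> 10-14 days
```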
Ceftazidime/avibactam and ceftolozane/tazobactam could be useful in the management of carbapenem-sparing programs in settings with a high prevalence of ESBL-producing Enterobacterales and DTR Gram-negative infections. Our study has some limitations: it was a retrospective, observational, non-comparative, single-center study with a small sample size; the observational period was short; and the cohort we present is quite heterogeneous. The low number of carbapenem-resistant infections did not allow us to evaluate the microbiological response of these cases to treatment with ceftazidime/avibactam. However, we observed that treatment with ceftazidime/avibactam was effective in all 27 infections by microorganisms that showed resistance to meropenem, with negative microbiological analyses at follow-up. Further limitations of this study design include the lack of a control group and the potential effects of unmeasured data. Due to its retrospective nature, detailed data on patient characteristics were not systematically collected. It will be important to extend this study to more centers in order to enroll more patients and to analyze microbiological and clinical cure. It will also be important to relate these outcomes to clinical and laboratory parameters to define effectiveness according to the different clinical conditions, and thus improve the appropriateness of the use of ceftolozane/tazobactam and ceftazidime/avibactam within antibiotic stewardship. Materials and Methods We carried out a retrospective single-center observational study of microbiological isolates in adult patients hospitalized in an Italian hospital who received ceftolozane/tazobactam between May 2017 and December 2019, and ceftazidime/avibactam between May 2018 (the date on which the clinical use of the drug was approved in Italy) and December 2019. We considered all the microbiological isolates from blood, urine, and peritoneal fluid cultures in patients with bloodstream infections, cUTIs, and cIAIs. Each isolate was identified by Matrix-Assisted Laser Desorption Ionization Time-of-Flight (MALDI-TOF) mass spectrometry on the VITEK® MS system (bioMérieux) [23][24][25][26]. Antibiotic susceptibility testing was performed with AST-N376 and AST-N397 cards on the VITEK® 2 instrument (bioMérieux). Carbapenemase-producing Enterobacterales (CPE) strains were confirmed by microdilution using Sensititre™ EURGNCOL and DKMGN plates (Thermo Fisher Scientific). Phenotypic CPE resistance was confirmed by synergy disc-diffusion testing with Diatabs™ (Rosco Diagnostica) on Mueller-Hinton agar (Vacutest Kima). Genotypic testing for CPE resistance was performed by RT-PCR on the GeneXpert® system with the Xpert® Carba-R test (Cepheid Inc.). The MIC values were interpreted according to the current European Committee on Antimicrobial Susceptibility Testing (EUCAST) clinical breakpoints. Patients who received antibiotic treatment with ceftolozane/tazobactam or ceftazidime/avibactam for ≥72 h were included in the study. All patients were followed up for at least 30 days after the ceftolozane/tazobactam or ceftazidime/avibactam therapy was discontinued. Cultures were made on blood, urine, and peritoneal fluid samples at 48 h, 72 h, and 7 days after the end of antibiotic therapy. Microbiological cure was defined as negative culture analyses at both 72 h and 7 days after the end of the ceftolozane/tazobactam or ceftazidime/avibactam therapy.
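A hedged sketch of that cure rule expressed as code (the record fields are illustrative, not taken from the study's database):

```python
# Hedged sketch: microbiological cure requires negative cultures at both
# 72 h and 7 days after the end of therapy; the 48 h culture is collected
# but does not enter the definition.
from dataclasses import dataclass

@dataclass
class FollowUpCultures:
    positive_48h: bool
    positive_72h: bool
    positive_7d: bool

def microbiological_cure(c: FollowUpCultures) -> bool:
    return not c.positive_72h and not c.positive_7d

print(microbiological_cure(FollowUpCultures(True, False, False)))   # True
print(microbiological_cure(FollowUpCultures(False, False, True)))   # False
```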
Infections were defined as the isolation of GNB classified as MDR, XDR, or PDR, or as ESBL-positive Enterobacterales infection. We also considered sepsis as a severe infection, defined as life-threatening organ dysfunction caused by a dysregulated host response to infection [27] together with blood culture positivity, without definite organ or system involvement. Ceftolozane/tazobactam was administered at the standard dosage of 1 g/0.5 g IV every 8 h, and ceftazidime/avibactam at the standard dosage of 2 g/0.5 g IV every 8 h. Dosage adjustments were made according to creatinine clearance. The drugs administered in this observational retrospective study were used according to the technical data sheet and the indications of the Italian drug agency (Agenzia Italiana del Farmaco, AIFA). In cases of off-label use, the written informed consent of the patient was recorded in the hospital medical records. All the national laws in force at the time of data collection were respected. The appropriateness of ceftolozane/tazobactam and ceftazidime/avibactam was defined as their use according to the specific indications approved by our local regulatory organizations. Usefulness was defined by the results of microbiological cure according to the indications and dosage. First-line antibiotics were defined as antimicrobial agents other than ceftolozane/tazobactam or ceftazidime/avibactam used in the first step of therapy. Therapeutic failure was defined as the persistence of positive culture analyses after 72 h of antimicrobial treatment. Statistical Analysis Descriptive statistics were applied to the collected data, and the results were reported as numbers and percentages for categorical data and as medians and interquartile ranges (IQRs) for continuous data. Categorical variables were compared using the χ2 test or Fisher's exact test, as appropriate. Conclusions In conclusion, this observational study showed high microbiological cure rates of ceftazidime/avibactam and ceftolozane/tazobactam for treating severe infections caused by MDR-GNB other than CRE and CPE, in accordance with the literature data [28][29][30][31]. Microbiological stewardship and source control represent further pillars for the appropriate use of antibiotics and for reducing the spread of antimicrobial resistance.
Figure A1. Frequency of the isolated microorganisms from the group treated with ceftolozane/tazobactam.
Figure A2. Frequency of the isolated microorganisms from the group treated with ceftazidime/avibactam.
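To illustrate the kind of test named in the Statistical Analysis subsection, here is a hedged sketch applying Fisher's exact test to the cure counts quoted earlier (99/122 for ceftolozane/tazobactam and 31/47 for ceftazidime/avibactam); this particular cross-group comparison is mine and was not reported by the study:

```python
# Hedged sketch: Fisher's exact test on a 2x2 table of cured vs. not
# cured per treatment group, using counts quoted in the text.
from scipy.stats import fisher_exact

table = [[99, 122 - 99],   # ceftolozane/tazobactam: cured / not cured
         [31, 47 - 31]]    # ceftazidime/avibactam: cured / not cured
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```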
Bone Mineral Density and Osteoporosis after Preterm Birth: The Role of Early Life Factors and Nutrition
The effects of preterm birth and perinatal events on bone health in later life remain largely unknown. Bone mineral density (BMD) and osteoporosis risk may be programmed by early life factors. We summarise the existing literature relating to the effects of prematurity on adult BMD and the Developmental Origins of Health and Disease hypothesis and programming of bone growth. Metabolic bone disease of prematurity and the influence of epigenetics on bone metabolism are discussed, and current evidence regarding the effects of breastfeeding and aluminium exposure on bone metabolism is summarised. This review highlights the need for further research into modifiable early life factors and their effect on long-term bone health after preterm birth. Introduction Preterm birth accounts for 5-10% of births in the UK. Worldwide, almost 10% of babies are born preterm, representing more than 15 million births every year [1]. Preterm birth is defined by the World Health Organisation (WHO) as all live births before 37 completed weeks gestation. It can be further subdivided into extremely preterm (<28 weeks), very preterm (28-32 weeks), and moderate preterm (32-<37 completed weeks) [1]. A preterm baby faces many challenges. Feeding problems are almost inevitable in the very preterm group, as a coordinated suck and swallow is not established until around 34 weeks corrected gestation. Extremely preterm infants and those who are unwell may require IV fluids or a period of total parenteral nutrition before full feeds can be established. Many preterm infants born at less than 32 weeks will have some degree of respiratory distress syndrome (RDS) due to lung immaturity and may require ventilatory support. Giving antenatal steroids reduces the incidence and severity of respiratory and other complications. The use of supplemental oxygen increases the risk of retinopathy of prematurity and may exacerbate oxidant damage in many organs and tissues but is vital for improved survival. Despite these challenges, survival has improved dramatically in the last few years, especially in developed countries. More than 50% of babies born at 24 weeks gestation now regularly survive long term, with improved nutrition being one potential factor contributing to these improvements. As this cohort of survivors reaches middle age, the impact of preterm birth on long-term metabolic outcomes such as bone mineral density will become increasingly important. Osteoporosis is characterised by the depletion of bone mineral mass, combined with deterioration of bone microarchitecture and a resultant increased fracture risk [2]. It is one of the most prevalent skeletal disorders; with estimates that up to 30% of women and 12% of men over the age of 50 are affected [3], it carries a lifetime risk similar to that of coronary heart disease [4]. Bone mineral density in adulthood depends predominantly on the growth and mineralisation of the skeleton and the resultant peak bone mass achieved, and then, to a lesser extent, on the subsequent loss. Longitudinal studies of girls suggest that this peak is reached at about 30 years of age [5]. For each standard deviation decrease in bone mineral density, fracture risk doubles in girls, similar to the risk in postmenopausal women [6]. It is estimated that osteoporosis affects 3 million people in the UK and results in 250,000 fractures annually [2].
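As a worked illustration of the doubling relationship quoted above (my arithmetic, not from the review): if fracture risk doubles per standard-deviation decrease in BMD, relative risk scales as 2 raised to the power of the SD deficit.

```python
# Illustrative arithmetic only: relative fracture risk under the quoted
# "doubling per SD decrease in BMD" relationship.
for delta_sd in (0.5, 1.0, 1.5, 2.0):
    rr = 2 ** delta_sd
    print(f"BMD {delta_sd:.1f} SD below reference -> relative risk ~ {rr:.2f}x")
```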
It has vast public health consequences due to the morbidity and mortality of the resulting fractures and the associated healthcare expenditure. As there is no cure, it is important to identify early life influences on later bone mineral density, which may aid the development of interventions to optimise bone health and reduce osteoporosis risk. We present a review of the current literature regarding early life factors and the impact of nutrition on bone mineral density and bone health after preterm birth, in order to inform further research and highlight current challenges facing the clinicians responsible for this cohort. Bone Mineral Density (BMD) Programming in Term Infants There is strong evidence linking early life exposures to later peak bone mass in childhood, for example, the contributions of physical exercise both in utero and in childhood, cigarette smoking during pregnancy, and diet and endocrine status in childhood [7,8]. Bone mineral density shows strong tracking during childhood and adolescent growth and into adulthood. A reduced peak BMD in childhood is associated with increased fracture risk and has been proposed as one of the best predictors of later life fracture risk in females [9]. Gender also influences neonatal bone composition at term, with males attaining greater bone area, bone mineral content (BMC), and BMD than females after adjustment for gestation [10]. In addition to factors influencing peak bone mass during childhood and adolescence, evidence is growing that bone mineral density, and thus osteoporosis risk, can be modulated during intrauterine and infant life [11]. A retrospective study involving term infants demonstrated independent effects of birth weight and weight at one year on bone size and strength during the sixth and seventh decades, after adjustment for confounding lifestyle factors [12]. These associations may reflect the intrauterine programming of skeletal development [13] and its subsequent tracking throughout the life course. Research also suggests that some of the predisposition to osteoporosis can be attributed to polygenic inheritance. For example, polymorphisms in vitamin D and oestrogen receptor genes and collagen-coding genes have been implicated [14]. It is likely that the genes that determine an increased risk of osteoporosis will vary among people of different ethnic backgrounds. In the future, genomic studies may provide information regarding susceptibility to osteoporosis and likely treatment response and may become an adjunct to clinical management. The Developmental Origins of Health and Disease (DOHaD) Hypothesis and Programming of Bone Growth The Developmental Origins of Health and Disease (DOHaD) hypothesis suggests that nutritional imbalance during critical windows in early life can permanently influence or "programme" long-term development and disease in later life [15]. Much of the original work was by Barker, who reported the relationship of low birthweight, used as a proxy for fetal growth, with coronary heart disease [16,17]. It became apparent, however, that these mechanisms and effects were not restricted to fetal life and that nutrition and growth in infancy (and perhaps in later childhood) were also crucial, leading to the incorporation of elements of evolutionary biology and the adoption of the term DOHaD. Recently, research has linked birth weight, birth length, and placental weight to later osteoporosis risk [18][19][20].
Known predictors of osteoporosis risk comprise genetic predisposition and environmental influences such as diet and exercise. However, a significant portion of BMD variance remains unexplained [19]. It is proposed that this remaining variation results from the programming of systems controlling the skeletal growth trajectory during critical growth periods [13]. Epigenetics and Bone Metabolism Many of the programming effects may be modulated by epigenetic mechanisms. Epigenetics is the study of mitotically heritable alterations in gene expression potential that are not caused by changes in DNA sequence; the classic examples are DNA methylation and histone acetylation [21]. These processes do not alter the nucleotide sequence of DNA but result in differences in gene expression and transcription, and may also involve post-transcriptional effects on other processes such as protein translation. Early life growth and nutritional exposures appear to affect this "cellular memory" and result in variation in later life phenotypes. Much of this work is still at an early stage, but initial data suggest that epigenetic mechanisms may underlie the process of developmental plasticity and its effect on the risk of osteoporosis. One model that has been postulated concerns the role of maternal vitamin D status and postnatal calcium transfer. Calcium and vitamin D are vital nutrients in bone development. Early work concerning methylation of vitamin D receptors and placental calcium transporters suggests that epigenetic regulation might explain how maternal vitamin D levels affect bone mineralisation in the neonate [21]. Much of the current research is in animal models, but if the changes can be replicated in humans, epigenetic or other biomarkers may provide risk assessment tools to enable targeted intervention for those at greatest risk of osteoporosis. Metabolic Bone Disease of Prematurity The preterm population is particularly susceptible to metabolic bone disease for two key reasons. Firstly, 80% of fetal bone mineral accumulation occurs during the last trimester of pregnancy, with a surge in placental transfer of calcium, magnesium, and phosphorus to the fetus [22]. A preterm infant who spends this period without the placenta and the associated regulatory maternal environment will therefore have a lower BMD and significantly lower bone mineral content than an infant born at term. Secondly, providing adequate nutrition to preterm infants can be extremely challenging. Most extremely preterm infants (<32 weeks gestation) require support with parenteral nutrition (PN) because of complex factors including metabolic "immaturity" (which may limit nutrient intake) and a delay in establishing enteral feeds. In addition, solubility issues with PN solutions mean it is impossible to provide sufficient mineral via the parenteral route alone. Maternal breast milk is associated with a range of benefits in the short term (e.g., a reduction in the incidence of necrotising enterocolitis, a potentially fatal illness associated with milk feeds) and the long term (e.g., improved cognitive outcome), but alone will not meet nutrient requirements without fortification. Therefore, as well as being born with a mineral deficit, the often stormy neonatal course and the nutritional practicalities of providing adequate mineral intake mean that many preterm infants develop osteopenia of prematurity.
As preterm infants grow, mineral uptake is compromised by the low mineral content of unfortified breast milk (especially phosphate) and by inefficient absorption due to an underdeveloped gastrointestinal tract [9]. This results in a greater loss of long bone density than observed in term infants and further increases the risk of metabolic bone disease [9]. Ex utero living conditions also mean it is more difficult for infants to move and stress their bones as they would have done in utero [23,24]. As well as mineral compromise, lower BMD in preterm infants is also a consequence of other factors such as steroid use [25], respiratory compromise [25], and infection [18], which may damage bone trabeculae. Although metabolic bone disease of prematurity is often asymptomatic and self-limiting [9], concern remains that under-mineralisation during such a critical period could increase the risk of childhood fracture and cause a reduced peak bone mass [26], and therefore an increased risk of future osteoporosis. The Effects of Prematurity on Adult BMD There are conflicting data regarding the long-term consequences of preterm birth on the skeleton and the potential for peak BMD compared to term counterparts. Preterm infants are known to have a lower bone mass [27], BMD [26] and BMC [25] at the corrected age of term, as well as a lower weight and ponderal index [26]. A study of 7-year-old boys showed greater measures of cortical thickness, whole-body BMC, and hip BMD in term compared to preterm boys after adjustment for weight, height and age. These differences remained after adjustment for birth weight, length of neonatal hospital stay, and current activity level [28]. A study by Fewtrell et al. in 2000 [29] found that former preterm infants followed up at around 10 years of age were shorter and lighter and had lower BMC than controls. These differences continue through childhood and possibly persist until puberty [25,28], although results are difficult to interpret due to the confounding effects of puberty and the interaction between bone size and later BMD. In a study by Backström et al., individuals who were born preterm were assessed with computerized tomography as young adults; lower bone strength was demonstrated at the distal tibia and radius compared to age- and sex-matched controls [30]. This effect was more pronounced in males and remained after adjustment for potential confounders. Several studies have failed to demonstrate an association between preterm birth and later bone strength, although all of these [28,31,32] were undertaken in small populations. A possible explanation for the variation in study results may be the timing of follow-up, as catch-up in bone mineralisation may occur primarily in late childhood and adolescence. Other studies have found that although preterm-born individuals were smaller, their BMD was appropriate for their size. Adults who were born preterm may be shorter than their term-born counterparts, and as some studies may not have made appropriate adjustments for current size, it is difficult to determine whether BMD is appropriate for current size or not [28]. Early Nutrition and Growth Influences on Bone Metabolism after Preterm Birth Several maternal factors are known to have a negative impact on neonatal growth and skeletal mineralisation in term infants. Although not discussed in detail here, examples are shorter maternal height, low parity, smoking during pregnancy, low fat stores [33,34], and low vitamin D exposure [9,22].
There are conflicting data regarding the influence of birthweight on later BMD. Low birthweight (LBW) is defined by the WHO as <2500 g [1]. LBW is usually a consequence of being born preterm or small for gestational age (i.e., with a birthweight below the 10th centile). Some studies suggest that very low birth weight (VLBW, <1500 g birthweight) infants, whether preterm or not, attain a suboptimal peak bone mass, in part due to their small size and subnormal skeletal mineralisation [31]. A recent study by Callreus et al. highlighted the long-term influence of birthweight on bone mineral content but found no association of birthweight with bone density once adult body weight was also taken into consideration [39]. The Hertfordshire cohort study, involving over 600 subjects, showed that birthweight was independently associated with bone density at 60-75 years of age. Although another study found no association between preterm birth and peak bone mass [35], an effect of being small for gestational age was apparent, suggesting that a proportion of later bone mass is determined by fetal growth. Further research has also shown a significant association between shorter gestation and adverse skeletal outcomes [31]. Several studies in infants have shown the influence of early growth on later bone health in those born preterm. In a study by Cooper et al., those who were lightest at 1 year of age had the lowest BMC [22]. In a further study, weight gain during the first two years of life predicted BMD at age 9-14 [40]. Fewtrell et al. also showed a positive association of body weight and height, at both premature birth and 18 months, with bone size, BMC, and BMD at age 8-12 years [36]. It was hypothesised that those with the most substantial increase in height between birth and follow-up showed the greatest bone mass. They also demonstrated that birth length alone was a strong predictor of later bone mass, and it was suggested that optimising linear growth early may be beneficial to later bone health. Although conducted with a large cohort (n = 201), few measurements were taken after discharge and dual-energy X-ray absorptiometry (DXA) analysis was only performed at 8-12 years. As a result, changes in growth and corresponding bone mass at potentially critical epochs of infancy were not measured. Optimising early growth through nutritional interventions generates positive and lasting effects on bone mineralisation [28], and it is hypothesised that this may partially counteract preterm bone deficits. A systematic review by Kuschel and Harding in 2009 showed that fortifying the nutrition of preterm babies improves growth and bone mineral accretion [41]. Lieben et al. [42] and Kanazawa et al. [43] discuss an interaction between bone and glucose metabolism involving adipocyte-derived leptin and osteoblast-derived osteocalcin. They postulate that healthy bone matrix protein increases insulin sensitivity in other tissues and that people with metabolic syndrome who are insulin resistant also have poorer bone quality and an increased risk of osteoporotic fracture. The "metabolic syndrome" involves many biological systems, but insulin sensitivity or resistance is perhaps the area subject to the most detailed study in later life.
This interaction is potentially a very important one; those who were born preterm appear to be at higher risk of metabolic syndrome in later life, and studies examining the influence of birth weight on later health consistently show decreased insulin sensitivity in adults born with low birth weight. The critical period determining later insulin resistance in preterm infants is unclear at present. Bazaes et al. found that low preterm birthweight was associated with impaired insulin sensitivity [44], which supports Barker's hypothesis. Singhal et al. [45] showed that preterm infants who received higher nutrient intakes during the first 2 weeks of life had higher levels of insulin resistance in adolescence. These studies may be revealing a potential adipocyte-osteocalcin interaction and suggest that the relationship between nutrition and later bone and metabolic health is complex; this is an area that clearly warrants further research. Aluminium and Bone Mineral Density Aluminium has no active role in the human body, but preterm infants are inadvertently exposed to it for several reasons. Firstly, preterm infants are exposed to high levels of aluminium in standard parenteral nutrition (PN) regimes. The current trend is for early PN to optimise early growth and associated neurocognitive function. Most aluminium is accumulated through unavoidable contamination via calcium gluconate stored in glass vials. In adults, this aluminium load is probably adequately dealt with by the kidneys, but the premature infant's renal system is relatively immature, so accumulation occurs. Adverse effects of aluminium on bone have been seen in uraemic adults, and there are now studies showing that infants who received aluminium-depleted PN had significantly higher BMC of the lumbar spine [46]. A direct effect on bone structure is unlikely, as bone will have been remodelled several times by adulthood, but it is thought that the presence of aluminium may modify the response of bone cells to stimuli such as the loading forces applied through exercise. Effects of Breastfeeding on Bone Metabolism There is conflicting evidence as to whether breastfeeding has a protective role in the primary prevention of osteoporosis. In some studies, such as that of Fewtrell et al., breast milk consumption was found to result in higher adult BMD [37] despite the milk being unfortified and having a lower mineral content than formula. This suggests a possible role for beneficial non-nutrient components such as growth factors. In another study, bone mass at a follow-up age of approximately 10 years was positively associated with the duration of breastfeeding [47], yet other studies have shown no benefit at a similar age [48,49]. Still other studies have not demonstrated an ongoing relationship in adulthood between breastfeeding and bone mass [22]. Given the known benefits of breastfeeding and the lack of a proven negative association, it seems prudent to strongly encourage breastfeeding, despite slower infant growth trajectories. Vitamin D and Bone Mineral Density It is difficult for the preterm infant to match the in utero accretion of minerals. Calcium absorption depends on calcium and vitamin D intakes and on phosphorus levels, which affect calcium retention. In clinical practice, very few babies need calcium supplementation if they receive either a preterm formula or breastmilk along with a breastmilk fortifier [50]. Suboptimal maternal vitamin D levels have been reported from many sources [51].
There are few studies in the preterm population, but data from term infants clearly show maternal vitamin D insufficiency to be associated with adverse BMD both in infancy and at later follow-up [52]. Considering the prevalence of vitamin D deficiency in pregnant mothers, the European Society of Paediatric Gastroenterology Hepatology and Nutrition (ESPGHAN) committee recommends vitamin D supplementation in the region of 800-1000 IU per day for preterm infants to rapidly correct low fetal plasma levels, and that supplementation should be continued through infancy [53]. Limitations of the Current Evidence Little is known concerning the early life control mechanisms for bone development [26], and the lack of prospective research in this area has been highlighted [30]. The potential for confounding in observational studies is also an important consideration: poor nutrition is often an inevitable consequence in the sickest neonates, who in turn will be more likely to have a poorer metabolic outcome. A 2011 meta-analysis stated that research from a variety of populations may clarify inconsistencies concerning the relationship between early life events and subsequent bone health [19], and there are few studies relating gestational length to adolescent BMD [9]. There is a need for longitudinal studies utilising randomised controlled trials of preterm infants where possible, providing detailed information on early life exposures as well as bone measurement data. One of the greatest challenges of longitudinal cohort studies, especially in children, is attritional loss over time. In addition, much of the currently available data is from preterm infants recruited to studies in the 1980s, an era predating the widespread use of antenatal steroids and surfactant therapy, two of the key practices that have had the most dramatic effects on long-term survival. As cohorts of preterm-born adults reach middle age, their risk of osteoporosis and the antecedent risk factors will become increasingly apparent. Table 1 summarises some of the key research on BMD and osteoporosis after preterm birth:
Rigo et al. [9], 2007, preterm and term, review: greater loss of BMD in preterm than in term infants during the neonatal period; maternal vitamin D exposure affects bone health in the newborn.
Bowden et al. [25], 1999, preterm and term, retrospective cross-sectional: preterm infants have reduced bone mineral mass in conjunction with reduced growth and hip BMD at age 8 years.
Hovi et al. [31], 2009, LBW infants, cohort: VLBW young adults have a lower peak BMD than their term peers.
Ahmad et al. [26], 2010, preterm and term, prospective: preterm infants had lower body weight, length and BMD at term compared to term-born infants.
Abou Samra et al. [28], 2009, preterm and term, cross-sectional: term males have greater bone size and mass than preterm males at follow-up aged 7 years.
Backström et al. [30], young adults born preterm, computed tomography: lower bone strength at the distal tibia and radius than in age- and sex-matched controls (as described above).
Conclusion As survival rates continue to improve, the long-term effects of premature birth become increasingly important. Only decades of future follow-up will truly ascertain the risk of osteoporosis and fracture after preterm birth. Because there is no cure for osteoporosis, preventative measures are important to minimise risk in this susceptible population. Genetic and intrauterine environmental factors that influence the fetal growth trajectory have long-term consequences for body composition. Clearer identification of risk factors and refinement of biomarkers for later bone health will enable earlier preventative strategies.
Reduction of the exposure of preterm infants to aluminium is an urgent research and clinical priority. Breastfeeding, along with appropriately formulated breastmilk fortifiers to ensure adequate mineral intake and optimal growth, should be strongly encouraged. As early mineral deficiency and metabolic bone disease are often asymptomatic during the neonatal period, careful follow-up is required to identify at-risk groups. Targeted prevention, early diagnosis and appropriate, timely treatment may then significantly reduce the individual, health service, and societal burden of osteoporosis in the future. Key learning points are as follows. (i) There are conflicting data regarding the effects of preterm birth and/or LBW on later BMD. (ii) Further research is needed into modifiable early life factors and long-term bone health. (iii) Breastfeeding (with appropriate fortifiers) and vitamin D supplementation may have long-term benefits for BMD in preterm infants. (iv) Reduction of aluminium exposure in preterm infants is an urgent priority.
Long-Term Improvement in Precautions for Flood Risk Mitigation: A Case Study in the Low-Lying Area of Central Vietnam
Local actors are inseparable components of the integrated flood risk mitigation strategy in Vietnam. Recognizing this fact, this study examined the long-term improvement in precautions taken by commune authorities and households between two major floods in 1999 and 2017 by applying both quantitative and qualitative methods. Two flood-prone villages were selected for a survey: one in a rural area and the other in a suburban area of Thua Thien Hue Province, central Vietnam. The findings indicate that most villagers doubted the structural works' efficacy and were dissatisfied with the current efforts of local authorities. Households' self-preparation thus became the decisive factor in mitigating risk. While most households paid greater attention to flood precautions in 2017, others seem to be lagging. Poverty-related barriers were the root causes restraining households in both rural and suburban villages. The suburban riverine residents were further identified as vulnerable because of their limited scope for upgrading structural measures, which was ascribed to inconsistency in the ancient town's preservation policy. This multidimensional comparison of vulnerability emphasized the importance of space-function links in the suburb and the contradictions among different policy initiatives, such as landscape rehabilitation, disaster prevention, and livelihood maintenance. Introduction It is widely recognized that natural hazard-related disasters have been steadily increasing across the planet over the last several decades (IPCC 2014). Notably, the frequency of floods has accelerated faster than any other type of disaster, leading to a tripling of major flood-related calamities since the 1980s (ADB 2013). Asia is the most disaster-afflicted region, accounting for nearly 62% of those who have experienced disasters and bearing approximately 70% of the total deaths and 50% of the world's disaster-related economic losses (ADRC 2016). Of these, floods accounted for 31% of the total number of disasters, followed by cyclones and typhoons at 28%. Hydrometeorological disasters are therefore seen as the most prominent in Asia (Shaw 2006). Being considerably susceptible to climate change, Vietnam was ranked eighth among the 10 countries in the world most harshly affected by extreme weather events in the last two decades (Kreft et al. 2016; Nguyen Duc et al. 2019), and was also listed as one of the five countries with the highest proportion of the population exposed to river-flood risks worldwide (Luo et al. 2015). Over the past 20 years, natural hazards and disasters have resulted in 650 deaths, affected 340,000 ha of paddy fields, and destroyed 36,000 houses on average each year. The country has lost 1 to 1.5% of its annual gross domestic product (GDP) over this period due to natural hazards and disasters, which significantly hinders the socioeconomic development process (WB 2010). Over an extended period, conventional approaches to flood control have relied strongly on engineering works such as building dams and reservoirs. These approaches, however, were sometimes incapable of preventing flooding and the associated losses (Pilarczyk and Nuoi 2002; Musiake 2003). It might be argued that flood threats continue to lurk regardless of governments' ongoing efforts toward improving engineering works (Sayers et al. 2015; Chen and Lin 2018).
This fact also implies the importance of being prepared to deal proactively with unforeseen events instead of putting full faith in structural works. A shift from a purely technically oriented defense toward a more integrated flood risk management system has been recommended in various countries (Hartmann and Albrecht 2014). In Vietnam, the responsibility of local authorities and households for alleviating flood distress has been embedded in the national strategy for disaster prevention. It was further emphasized after fateful disasters such as the 1999 historic flood in Thua Thien Hue Province, which inundated 90% of the lowland areas within a week and caused tangible losses of up to USD 120 million (Tran and Shaw 2007). Since the hierarchical structure of the administrative system reveals limitations in risk management, there is a transition underway to more resilient approaches such as bottom-up initiatives (Zevenbergen et al. 2008) and community-based systems (Aalst et al. 2008). In such approaches, precautions implemented by local actors such as local authorities and private households are highly encouraged (Duc et al. 2012; Chinh et al. 2016; Atreya et al. 2017; Luu et al. 2018). In Vietnam, for instance, Luu et al. (2018) suggested that since communes have a deeper understanding of their local conditions, empowering them in the planning and decision-making processes is necessary to improve the effectiveness of flood risk management activities. Duc et al. (2012), meanwhile, further underlined the crucial role of preventive measures at the household level in mitigating the adverse effects of natural hazard-related disasters. These studies, however, looked at the preparedness of local actors only at the time the survey was conducted, overlooking its progression over time. In addition, there are relatively few empirical studies, especially in central Vietnam, that consider how villagers perceive the changes in flood characteristics, how local authorities and private households have improved precautions over the long term, and how coping capacities differ among social groups. Furthermore, studies examining differences in the barriers to enhancing protective measures between rural and suburban areas are still rare, despite compelling evidence of the link between their socioeconomic disparities and the implementation of precautions. To fill this knowledge gap, this study therefore aimed to examine the long-term changes in precautionary measures for flood risk mitigation by focusing specifically on: (1) the changes in flood characteristics under the influence of the dam systems and the attitude of floodplain residents toward implementing precautions; (2) the precautions taken by the commune authorities and their effectiveness in supporting flood victims; (3) the long-term improvement in the precautions of private households and their efficacy in reducing flood risks; and (4) the differences in coping capacities and constraints of social groups in both rural and suburban areas. These issues were examined by comparing the two major floods that occurred in 1999 and 2017 in Thua Thien Hue Province, central Vietnam. Since floodwater depth is considered one of the determining factors of the extent of damage (Thieken et al. 2005), the comparable water levels of these two floods are crucial to achieving the objectives of this study.
This study may be useful for identifying social groups lagging in the struggle against floods and recognizing shortcomings in the design and implementation of flood-related policies. Study Site and Methods Thua Thien Hue Province, located in the north-central coastal region of Vietnam, was selected as the target region of this study based on its geographical features and flood history. A survey of local authorities and private households, combined with quantitative and qualitative methods, was used to collect and analyze the data. Study Site With a 128 km long coastline and interlaced river systems, Thua Thien Hue Province has experienced many hydrometeorological hazards, such as floods and storms, and is considered among the most disaster-prone areas of Vietnam (Tran et al. 2008). Flood-associated losses have trapped many households in a vicious circle of poverty. The 1999 flood, which took hundreds of lives and caused economic losses estimated at millions of dollars (Valeriano et al. 2009), remains deeply implanted in the minds of many people as a fearful memory. Flood-related concerns appeared to have been somewhat alleviated since the construction of massive hydroelectric dams. However, these fears were triggered again by the 2017 major flood, which occurred after storm No. 12, named Damrey, with wind speeds of up to 135 km per hour; this was the most powerful storm to impact Vietnam since 2001 (UNDP 2018). Hydropower dams in Thua Thien Hue Province, under the influence of the heavy rain brought by the storm's circulation, had to release water, and thus caused widespread flooding in the downstream area. Although this led to a comparable floodwater depth, the damage was much lower than in the 1999 flood (Fig. 1). This outcome may be due to various factors, but very likely includes the improvement in precautions. The villages of Trieu Son Nam and Bao Vinh in Huong Vinh Commune, Huong Tra District were selected for the survey (Fig. 2). The populations of these villages were 1,925 and 2,032, respectively, corresponding to 402 and 515 households. Although drawn from the same commune, the first village represents rural characteristics, with more than half of households involved in agriculture, while the second village possesses suburban features, where retail and handicrafts are more dominant. Both villages are located near the confluence of the Huong and Bo Rivers, where they merge before emptying into the sea, which makes the currents more powerful and dangerous during floods. This terrain, combined with the low-lying topography, makes these villages more susceptible to flood-related hazards. In reality, the communities in this area have suffered heavy losses due to floods, especially in 1999. However, annual reports give the favorable indication that flood-related damage has decreased even though severe floods similar to the 1999 event occur on occasion. This implies that greater attention has been given to flood risk management in this locality. This site, therefore, was selected to explore the factors contributing to such an achievement. Along with the comparison of rural and suburban villages, which have similar administrative systems but different socioeconomic characteristics, an indication of the limitations of social groups in coping with floods and their actual barriers is presented. Methods A pilot survey was performed to search for an appropriate study site before conducting the official survey in April and October 2018. The data were gathered from both local authorities and private households.
We obtained data on local socioeconomic characteristics and flood trends from the commune authorities. Flood prevention activities were also explored through direct interviews of those responsible for flood and storm control. These activities were further discussed with the village leaders since they are considered an essential link in the implementation process. Using the face-to-face interview method, cross-sectional data from 120 households, divided equally between the two selected villages, were collected. To facilitate the survey, a semi-structured questionnaire was developed after a thorough process of document review in combination with consultation of specialists who are well-versed in both floods and village life. From the household lists provided by the village leaders, homeowners who met the following requirements were randomly selected for an interview: over 30 years old; experience of both the 1999 and 2017 floods; and direct involvement in the preparation of the flood defenses. The interviews usually began with information regarding household demographic characteristics, followed by participants' perceptions of changes in flood risks, the potential causes for these changes, and their attitude toward precautions. As the beneficiaries, interviewees were then asked to assess the precautions implemented by the local authorities, the measures taken by their own households, and the damages they suffered in the two floods. This study adopted both quantitative comparative and qualitative descriptive approaches. The quantitative data were processed to observe the trends and cohesion between variables, while the qualitative information was used to further interpret the implications. We first examined residents' perceptions of the changes in flood risks, potential reasons for these changes, and their protection behavior. In addition, preventive activities operated by local authorities and their reliability were revealed through the practical experiences of the local people. We then focused on analyzing household measures and their effectiveness in reducing risks. In this study, the 1999 historical flood was seen as a powerful catalyst to drive households toward creating better defenses. Therefore, we expected significant positive differences in households' preparation and flood-related damages. Further to this, the indoor flood level between the two events was compared. From this, we expected lower levels of flooding as a direct result of upgrading structural measures. Cross-tabulation analysis was chosen to explore the association between household characteristics and improvement in precautions. Popular protective measures were considered as dependent variables. Meanwhile, based on the literature review, income level, housing location, and external reliant psychology were included as explanatory variables. Higher-income households were reported to be better prepared for natural hazards (Hosseini et al. 2013; Ashenefe et al. 2017). Living in higher risk areas was found to be positively associated with households' preparedness (Hoffmann and Muttarak 2017; Wu). Reliance on external support was reported to be negatively correlated with protective responses (Grothmann and Reusswig 2006; Chen et al. 2019). In this study, household income was ranked by referring to the 2016-2020 living standard criteria (Vietnamese Government 2015) and the 2017 report on average income (GSO 2017). Households with a per capita income of less than VND 1.3 million were considered to be in the low economic class, while those that exceeded VND 4.5 million were regarded as a higher socioeconomic class.
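These classification rules, together with the dependency ratio defined in the next paragraph, are straightforward to operationalize. Below is a minimal Python sketch using the per capita thresholds quoted above; the function and variable names are hypothetical illustrations, not the study's actual coding scheme.

```python
# Minimal sketch of the household classification described above.
# Thresholds are the per capita figures quoted in the text (VND per month);
# names are hypothetical, not taken from the study's codebook.

POOR_THRESHOLD = 1_300_000      # below this per capita income: "poor"
WEALTHY_THRESHOLD = 4_500_000   # above this per capita income: "better-off"

def income_class(monthly_income_vnd: float, household_size: int) -> str:
    """Classify a household by per capita monthly income."""
    per_capita = monthly_income_vnd / household_size
    if per_capita < POOR_THRESHOLD:
        return "poor"
    if per_capita > WEALTHY_THRESHOLD:
        return "better-off"
    return "middle"

def dependency_ratio(non_working: int, income_earners: int) -> float:
    """Non-working members divided by income contributors, regardless of age."""
    return non_working / income_earners

# Example: a 4-person household earning VND 10 million per month,
# with 2 income earners supporting 2 non-working members.
print(income_class(10_000_000, 4))   # -> "middle"
print(dependency_ratio(2, 2))        # -> 1.0 (one dependent per earner)
```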
Location was classified based on distance to the riverbank, with a delimitation of around 30 m. Households' external reliant psychology was determined by taking into account their belief in emergency support. Those who had high trust in the availability of external support in urgent cases were viewed as the external reliant group. We also included some additional variables, such as the number of laborers, occupational stability, dependency ratio, and living with disabilities, to gain insight into the constraints of the households. To better reflect household resources, the occupation that contributed the most to a household's income was chosen rather than that of the interviewee, and was further classified based on its stability. The dependency ratio was calculated as the total number of individuals who are not working divided by those who contribute to the income, regardless of their age. A higher value of this ratio implies a greater strain imposed on the household. Similarly, living with disabled family members was also added as a barrier to the implementation of protective measures. Results and Discussion In this section, the data obtained through household interviews are analyzed in the following order: sociodemographic characteristics of the surveyed households; changes in flood characteristics and household protective behavior; precautions of local authorities; changes in household precautions and damage-reducing effectiveness; and household characteristics and improvement of preventive measures. Sociodemographic Characteristics of the Surveyed Households Male household heads accounted for more than two-thirds of the sample. The majority of respondents were middle-aged (43.3%) or older (41.7%). Farming was the most dominant occupation in the rural village, while retail and handicrafts were the most common in the suburban area. Only around 40% of the study population had stable occupations, while up to one-third of families relied largely on precarious livelihoods. More than half of the households were classified as middle level, while around a quarter were defined as poor. With regard to human resources, 43.3% of households in the suburban area had more than two laborers, a little higher than in the rural area. Semi-permanent houses were the most typical, accounting for over 60% of the total; only one-third had been upgraded to permanent housing, while 6.7% remained temporary. Moreover, 27.5% of those dwellings were erected adjacent to the riverbank. Regarding the dependency ratio, more than half of the households had more laborers than dependents. Additionally, a quarter of those interviewed were living with elderly or disabled family members. Changes in Flood Characteristics and Household Protective Behavior The extent of damage depends on how people prepare for encounters with potential risks (CMCC 2018). Their preparation is driven by their comprehension of changes in flood trends and risks (Reynaud et al. 2013). Therefore, the interviewees were asked to express their views on the changing nature of floods (Fig. 3). Most respondents asserted that floods are currently progressing in a favorable direction. In particular, less flooding was indicated by 59.2% of the informants, while only 3.3% signified the opposite. Nearly 67% of the informants confirmed less severity and faster drainage of floods, while almost 62% noted their delayed onset.
In contrast, the majority of the interviewees (75.8%) complained about the increasing irregularity and unpredictability of floods. These changes were attributed to the increasing prevalence of dams. Currently, some 62 reservoirs for both irrigation and hydropower are being operated in the province. Alongside visible contributions, these massive structural works conceal many unanticipated dangers (Blöschl et al. 2015). This fact inspired us to further explore residents' attitudes toward implementing precautions. One-third of the interviewees responded that they maintained preparations similar to before. Meanwhile, more than half (54.2%) doubted the stability of these structural solutions and were concerned about abrupt calamities. Hence, they believed that more attention should be paid toward precautions. In contrast, a small minority (12.5%) underestimated precautions, believing they were safeguarded by the massive dams. This subjectivity may stem from the fact that low-probability natural hazard-related incidents are often systematically misjudged (Faure 2007). Although the government has made great efforts to erect structural works, they are just one part of the integrated flood risk management strategy. It should be recognized that flood damages can never be wholly mitigated by relying solely on these public defenses (Hoffmann and Muttarak 2017). Strengthening the role of local actors, therefore, is critical to compensate for the limitations of the engineering approach. Precautions of Local Authorities Two months before the flood season, the flood prevention committees are reorganized at both commune and village levels. These are permanent groups of nearly 20 members from various political, social, and economic organizations. They are established to support local communities to work effectively against floods. Support from these committees primarily relies on local resources and can be split into structural and non-structural forms. Structural measures aim to improve the quality of roads, strengthen riverbanks, and, most importantly, prepare shelters for emergency evacuation. Schools and healthcare centers, which have already been solidified, are used as refuges during severe inundation. Meanwhile, non-structural measures include providing early warnings, emergency necessities, and means of escape. Warnings are spread through two main channels: a loudspeaker system operated by the local authority, and portable speakers run by village teams. In the early stage of floods, villages are responsible for other supports, such as food and early evacuation, but due to a lack of resources, this applies only to some households living on perilous terrain. These supports are intensified by utilizing the commune's stockpile whenever extreme floods occur. Nevertheless, the impressions of the population who directly benefited should not be ignored. Therefore, interviewees were asked to express their opinions on these supports (Fig. 4). Of the study population, nearly three-quarters appreciated the improvements to the early warning system, a quarter were doubtful, and only 2.5% were flatly unsatisfied. Conversely, 76.7%, 82.5%, and 69.2% of the respondents were dissatisfied with the support of basic necessities, emergency evacuation, and post-flood recovery, respectively. Besides the above-mentioned immediate supports, the current transportation infrastructure was much improved after several upgrades, which facilitated the flood coping process.
The riverbanks, however, have not yet been fortified, leaving riverside households more susceptible during floods. The beneficiaries were also asked to disclose the main reasons behind their assessments. Nearly 22% of those surveyed cited the absence of the village communicators, while others criticized the quality of the local broadcast system. Usually, the village board directly broadcasts warnings to every alley, and this process is repeated several times. However, this may be futile if residents are absent from their homes during the transmission. Regarding the loudspeaker, even with a wider range, this system is ineffective for those living far from radio stations, especially in tumultuous weather conditions. Likewise, the broadcast may be interrupted or deactivated due to power outages during heavy floods, which was mentioned by nearly 16% of the sample. The support of necessities was underrated due to negligible quantity (86.7%) or late distribution (75.8%). Food reserves were deployed only at the commune level and disregarded within the villages due to financial constraints. Normally, relief goods were distributed to victims only after the floods had begun to recede. This, in addition to the small amounts supplied, led to the ineffectiveness of this effort. The perceived failure in supporting emergency evacuation was due to the sparsity of shelters (78.3%) and suspicion about the authorities' capacity. Adequate emergency shelters and transport are obligatory to assure speed, timeliness, and safety in an evacuation. Recognizing these requirements, public infrastructures were reinforced by local authorities. These infrastructures, however, are scattered and far from most residential clusters. Approaching these shelters, therefore, becomes extremely arduous and perilous due to high floodwater levels and swift-flowing currents. Most respondents were also skeptical of the authorities' ability to mobilize sufficient manpower and the necessary facilities for evacuating an enormous number of households simultaneously. Besides the above precautions, preparation for post-flood recovery is also important because neutralizing all flood impacts is impossible. However, once again the respondents lacked faith in the local authorities because of their limited financial resources (90%) and delays in implementation (55.8%). In fact, post-disaster supports were usually distributed to only a few households that had suffered major losses and took a long time to implement. In addition, the compensation amount was insubstantial compared to the losses and thus less meaningful for flood victims. It is undeniable that almost all damages suffered by residents tend to be neglected after disasters. Changes in Household Precautions and Damage-Reducing Effectiveness This section focuses on analyzing the improvement of preventive measures at the household level and their effectiveness in reducing damage between the 1999 and 2017 floods. Changes in Household Precautions Most residents were aware of potential threats despite the protection of the engineering works, and they distrusted the support measures offered by local authorities. These findings raise the question of how floodplain inhabitants have improved preventive measures and how effective those improvements are during floods. To clarify these matters, the main measures implemented by households in the 1999 and 2017 floods were recalled (Table 1). Evidently, all measures had vastly improved after nearly 20 years.
While almost all measures had been implemented by less than 50% of the respondents in 1999, implementation rates were mostly greater than 70% in 2017 (p < 0.01). Households tended to solidify housing by using waterproof materials instead of fragile materials such as bamboo and thatch. Raising the ground was also a widespread solution to reduce indoor flooding. Both of these solutions were deployed by over 90% of those surveyed in 2017, much higher than in 1999 (below 30%). Additionally, the proportion of houses with additional floors doubled from 35.8% to 74.2%. Similarly, the official warning was disseminated to almost all residents in 2017 (97.5%), as opposed to 1999 (17.5%). This may be principally due to the authorities' efforts, but could also be achieved through households' proactive seeking of warning information. Storing sufficient food, though a simple task, is essential to uplift human endurance, especially for abnormally long-lasting floods. In 1999, nearly 60% of households stored food for at least three days. Meanwhile, the remainder believed that foodstuff could easily be replenished from peddlers when the floods retreated. However, this could be a dangerous misconception that could turn into unpredictable risk: the 1999 flood lasted for a week, which led many households into food scarcity. In 2017, therefore, up to 95.8% of households accumulated enough food for a normal-length flood. Furthermore, up to 92.5% of households prepared alternative energy sources such as gas stoves. In economic terms, floods are among the costliest natural hazards and disasters (Saeed et al. 2018). In fact, 77.5% of households lost their most valuable assets in the 1999 flood. Asset protection, however, was much better implemented by 2017, as most households (71.7%) successfully preserved their assets after the flood. Most of the respondents actively prepared more items such as bricks and wooden planks to lift up movables, or suspension systems to fasten water-sensitive objects to the ceiling. Highly mobile assets, such as motorbikes or agricultural machines, were shifted to safe places whenever imminent flooding was announced. Drawing up a detailed evacuation plan is an effective way to minimize both human casualties and property damage. While most households were passive about evacuation in 1999, this had greatly improved by 2017. The proportion of households who prepared an evacuation plan more than doubled, from 22.5% to 56.7%. Frail members are often evacuated before flooding, especially those living in hazardous areas, while others stay to protect belongings from thieves. Although currently less useful for livelihoods, some households still retain boats as a means of transportation during floods. There was one boat for every five households, often used as common property in cases of emergency. However, this was mostly in the rural area and rare in the suburb, due to narrow living spaces and the lack of connection between suburban livelihoods. Those in better-off households simply moved to the solid second floor. Others planned to reach and shelter in adjacent solid houses. Although dangerous, this may be feasible due to the dense housing. Although there has been a significant improvement, 43.3% of the households were still inactive in outlining evacuation plans and mainly looked for external help. Damage-Reducing Effectiveness We first compared the actual indoor flood level, which is different from the land-surface flood depth, to understand the effectiveness of raising the ground (Fig. 5).
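A comparison like the one reported next (means, medians, and interquartile ranges of indoor flood depth) rests on basic descriptive statistics. Below is a minimal Python sketch; the depth values are hypothetical illustrations, not the survey data.

```python
# Minimal sketch: summarising indoor flood depth (metres) for two events.
# The sample values are hypothetical placeholders, not the study's data.
import statistics

depth_1999 = [2.3, 1.9, 2.0, 1.7, 2.4, 1.8, 2.1]
depth_2017 = [0.9, 0.7, 0.6, 1.1, 0.8, 0.5, 0.7]

def summarise(depths):
    q1, _, q3 = statistics.quantiles(depths, n=4)  # quartile cut points
    return {
        "mean": round(statistics.mean(depths), 3),
        "median": statistics.median(depths),
        "Q1": round(q1, 3),
        "Q3": round(q3, 3),
    }

print("1999:", summarise(depth_1999))
print("2017:", summarise(depth_2017))
```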
The 2017 indoor flood depth (mean = 0.776 m; median = 0.7 m) was much lower than that in 1999 (mean = 1.952 m; median = 2.0 m). The interquartile ranges further indicate that 75% of shelters were flooded to at least 1.7 m in 1999, while 75% were submerged below 1.2 m in 2017. This result is further affirmed by the equivalent flood peaks of the events in 1999 (518 cm) and 2017 (505 cm) (Fig. 1). Such a comparison demonstrates the effectiveness of raising the housing ground for reducing the indoor flood depth. Less damage was anticipated due to the lower indoor flooding experienced by most households in 2017. Due to the long interval between the two events, damages were recorded by category instead of in currency units. Moreover, certain types of damage relating to injuries, food shortages, or evacuation may be impossible to convert into monetary units. Table 2 shows that damages in 2017 were significantly lower than those in 1999. The percentage of people injured in 2017 (1.7%) was much lower than in 1999 (16.7%). People were normally wounded during evacuation due to the lack of supportive equipment. Therefore, the decline in the evacuation rate from 58.3% in 1999 to 8.3% in 2017 may have contributed to reducing the injury rate. Regarding physical losses, 65.9% of houses were damaged to varying degrees in 1999, but this dropped sharply to just 15% in 2017. This improvement may be partly due to less severe flooding in 2017 but more likely stems from households' efforts to upgrade housing. Over 98% of the households reported that most of their assets were submerged by floodwaters in 1999. However, this rate dropped noticeably to 33.3% in 2017. Food security was also significantly improved, with only 3.3% of households suffering from food shortages, compared with 76.7% in 1999. It can be assumed that households' efforts in improving precautions contributed significantly to reducing indoor flooding, thereby mitigating the associated damages. Household Characteristics and Improvement of Preventive Measures Improving precautions has shown positive effects on flood risk mitigation, as found by Kreibich et al. (2011), Poussin et al. (2015), and Atreya et al. (2017). Who is lagging behind in the struggle against floods? And what factors are hindering them? These questions still need to be investigated to identify the characteristics of the least improved groups. The overall sample, therefore, was split into rural and suburban groups based on their socioeconomic disparity. The popular actions taken by the residents were also divided into different levels and regarded as dependent variables (Table 3). As mentioned in the methods section, three characteristics of households, namely income level, housing location, and external reliant psychology, were used as predictors. Some other demographic characteristics of households were also used to further explain their constraints. Table 4 shows that the improvement in precautionary measures in the rural village is distinctly divided by income level. Income creates significant differences in house reinforcement (p = 0.00), raising the ground (p = 0.00), and setting sub-floors (p = 0.00). Further inspection of the Cramer's V coefficients suggests that the strength of these associations is moderately strong (V = 0.562; V = 0.546; V = 0.593). Accordingly, the lower-income group lags behind with regard to upgrading housing. This result is consistent with the finding of Kreibich et al.
(2011), which underscores the crucial role of financial factors in implementing structural measures. Furthermore, lower-income earners are also less likely to improve non-structural measures, including accessing warnings, storing food, and planning evacuation. Cramer's V values indicate strong associations for these relationships (V = 0.414; V = 0.610; V = 0.588). It is easy to comprehend the connection between implementing costly measures and belonging to the lower-income bracket, since it is closely related to financial capacity. However, access to official warnings is also restrained by income, although warnings should be delivered equally to all residents as a free public service. Nearly half (44%) of the low-income population generally received official warnings from a single provider rather than in a multidimensional manner. The wide rural residential area, combined with inadequate equipment, may partly explain the low coverage capacity of the warning transmission. Additionally, warnings may be ineffective if a household's members are at work, and thus absent from their home, when warnings are being announced. These hypothetical situations may apply to the low-income group, who are regularly away from home for their livelihood. Missing warnings is, therefore, possible. Characteristics Associated with Improvements in the Rural Village Besides indicating the links between low income and inferior improvement of measures, exploring the root causes of poverty is necessary if policy-related solutions are to be designed. We found that poverty in rural areas is strongly related to labor shortages (χ² = 15.27; p = 0.00), a high dependency ratio (χ² = 19.44; p = 0.00), living with disabilities (χ² = 17.94; p = 0.00), and precarious careers (χ² = 35.77; p = 0.00). In this regard, sharing earnings that are already insufficient with more people, especially disabled members, may further restrict household budgets for investing in protective measures. This result, however, differs from previous findings that households with disabled or unhealthy members are more likely to participate in disaster preparation (Ablah et al. 2009; Eisenman et al. 2009). The healthier financial condition of residents in developed countries compared to those in developing countries is probably the reason for this difference. In addition to finances, the improvement of precautions is also influenced by external reliant psychology, but to a lesser extent. Households with external reliant thinking appear to be less active in improving measures. For example, 56.5% of households in this group had made no improvements in setting sub-floors, compared to just 21.6% of those in the non-reliant group (p = 0.01). In this regard, financial pressure should be seen as the primary cause, since almost half (47.8%) of the members of the external reliant group were living in poverty. Others argued that it is unnecessary to erect sub-floors because they trust their neighbors' support in the case of severe floods. The links between this psychological trait and the implementation of the remaining measures, however, are statistically insignificant. Housing location, which is frequently discussed in studies related to preparedness, showed no association with rural households' ability to improve preventive measures (p > 0.05). Characteristics Associated with Improvements in the Suburban Village Given the disparities in socioeconomic conditions, exploring the suburban village promised to reveal other valuable links.
The results are summarized in Table 5. The role of financial factors is further highlighted by their relationship with protective actions in the suburb. Similarly, fewer improvements in both structural and non-structural measures tended to be attached to the low-income group (p < 0.05). In this area, uncertain occupations (χ² = 29.19; p = 0.00) and the burdens of disabled members (χ² = 8.81; p = 0.01) were found to be the causes of poverty. Different from the rural village, housing location in the suburb had a significant effect on the improvement of structural measures. Families living adjacent to the riverbank appeared to be obstructed in reinforcing housing (p = 0.04), raising the ground (p = 0.00), and setting sub-floors (p = 0.00). Only 34.8% and 26.1% of households belonging to the riverbank group reached considerable levels in reinforcing housing and setting additional floors, respectively, while none achieved that level in raising the ground. This group consists mostly of small-scale retailers with a middle income level. For most, housing upgrades are probably not a big challenge after saving for a long time. The field survey revealed that these poor improvements primarily stem from the provincial policy for restoration of the ancient city. Accordingly, households located in the planning area are restricted from major upgrades so as to control the clearance compensation cost afterward. Although major reconstructions are still possible without an official license, those upgraded sections will not be compensated if the project is implemented. Furthermore, it should be stressed that, although the policy was initiated in the early 1990s, no official decision has been made. This is the root cause constraining riverine households from upgrading structural measures. Some households were prevented from raising the ground due to their ancient housing architecture: raising the ground excessively would make their habitable space significantly narrower. These families may also be more susceptible during the flood season since most have illegally encroached on the riverbed to broaden their residential space. The cramped housing has also prevented riverine households from purchasing boats for proactive evacuation. This is one of the reasons why nearly 40% have been unable to improve their evacuation plan. However, the relationship between housing location and making an emergency evacuation plan is statistically insignificant (p > 0.05). The influence of psychological factors is again emphasized by their linkage with non-structural measures. People who trust in various external supports appeared to pay less attention to food storage (p = 0.02) and evacuation planning (p = 0.00). The strength of these connections ranges from moderate to fairly strong (V = 0.369; Φ = 0.585). As an example, 37.5% of households in the external reliant group stored food for only three days, while up to 87.5% underestimated planning for an emergency evacuation. In contrast to the rural village, no connection between reliant thinking, poverty, and location was found.
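The association tests reported in these two subsections follow a standard recipe: a chi-square test of independence on a cross-tabulation, with phi (2 × 2 tables) or Cramer's V (larger tables) as the effect size. Below is a minimal Python sketch with a hypothetical contingency table; it illustrates the method only and does not reproduce the study's counts.

```python
# Minimal sketch of the chi-square test with Cramer's V effect size.
# The contingency table is hypothetical, not the study's actual counts.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: income level (poor, middle, better-off)
# Columns: improvement in house reinforcement (none, some, considerable)
observed = np.array([
    [14,  5,  1],
    [ 6, 18,  6],
    [ 1,  3,  6],
])

chi2, p, dof, expected = chi2_contingency(observed)
n = observed.sum()
r, c = observed.shape
cramers_v = np.sqrt(chi2 / (n * (min(r, c) - 1)))  # equals phi when r = c = 2

print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}, V = {cramers_v:.3f}")
```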
One possible explanation for the external reliant thinking of these residents relates to market functions in the area. Some residents believed that their shortcomings in preparation could easily be compensated by their neighbors and traders, as well as by spontaneous services in the flood season. Living in the vicinity of the local market, where small retailers characteristically accumulate plenty of foodstuff for their daily business and a quite large number of boats pass during the flood season, may be the root of these passive ways of thinking. Comparison of Constraints between Rural and Suburban Villages Despite long-term efforts, some social groups remained less than impressive in improving preventive measures. There were similarities and differences in the constraints leading to these disquieting outcomes between the rural and suburban villages. Poverty is an obvious obstruction in both areas since it is intricately linked to the improvement of all structural and non-structural measures. Labor shortages, a high dependency ratio, living with disabilities, and precarious careers were the root causes of poverty in the study sites. This result strengthens the conclusion of Ahmad and Afzal (2020) that low financial status is the major obstacle to households' implementation of risk reduction strategies. Other studies have also insisted that the poor are the most vulnerable to disasters and usually suffer greater damage than the wealthy (Fothergill and Peek 2004; Skoufias et al. 2011). This originates from their modest capacity to anticipate, respond to, and recover from natural hazards (Wisner et al. 2003). The relationship between poverty and flooding is often seen as a vicious cycle: while floods cause or exacerbate poverty, poverty increases flood vulnerability (Dube et al. 2018; Kawasaki et al. 2020). Reducing poverty, therefore, should be a prerequisite in designing strategies related to flood risk mitigation. Putting strong faith in external supports, besides implying strong cohesion within the community, should be considered a psychological barrier that leads to a certain subjectivity. In particular, rural residents who had external reliant thinking were less focused on setting sub-floors, while for those in the suburb it led to an underestimation of the importance of storing food and planning for evacuation. This result concurs with the findings of Chen et al. (2019) and Grothmann and Reusswig (2006) that relying on external support leads households to disregard self-protective measures. However, it is likely that the causes of this association differ between the rural and suburban areas. In the rural area, it may reflect a more abundant social capital compared to urban areas, as found by Sørensen (2014). This tight connection in rural society allows residents to firmly trust their neighbors to carry them to safe places, such as permanent houses and a well-built church, and to share small amounts of food in case of severe floods. It is also possible that the poor have their own strategy to mitigate flood risks, since almost half of those in the external reliant group were classified as poor. Due to their lack of financial capacity, the poor may prefer to invest in social capital to secure support in emergency cases.
However, this reciprocal relationship can turn into external reliance, since greater stocks of social capital have been found to be associated with lower levels of risk perception and overestimation of self-capacity, which often undermine the intention to implement protective measures, making non-protective responses such as wishful thinking, denial, or fatalism more likely (Babcicky and Seebauer 2017). In the suburb, on the other hand, the residents' external reliant thinking may be driven by specific functions in the area. Urban areas are composed of different functionally diversified units, interrelated through complex systems (Wilson 1984), of which market/commercial functions are an essential part. As the center of urban functions, the local market provides diverse commodities and services, including essential ones in cases of emergency. This may induce inhabitants within these systems to believe that it is less necessary to undertake precautionary actions. Further, the improvement of protective measures in the suburb was restricted by housing location. Households living along the riverbank were less likely to improve structural measures. This unexpected outcome, besides being directly influenced by the susceptible location itself, concurrently resulted from the ancient house architecture and the ancient town's preservation policy. For ancient houses, raising the floor was generally limited to a certain extent to ensure sufficient space for daily activities. Besides this, the policy of preserving the ancient town significantly affected the implementation of structural measures. Under this policy, major housing-related upgrades will not be compensated when the project is undertaken. This placed riverside residents in a dilemma between ensuring safety and maintaining livelihoods. Although initiated in the early 1990s, the project is still pending approval, so residents were unsure when it would be implemented and whether its implementation would leave them better off or not. In this regard, project implementation would probably release riverside households from the potential dangers of flooding by relocating them to more secure areas. However, resettlement to another area may be a serious threat to their livelihoods. As retailers, their families' livelihoods depend heavily on their current location, which is close to both the local market and the main road. Therefore, a change in housing location would potentially disturb livelihoods. Most households, meanwhile, may not be ready to face this change due to a lack of occupational skills. From this result, we first reassert the previous findings of Tyas (2018) and Askman et al. (2018) on the bond between maintaining livelihoods and living in flood-prone areas. We also expose the inconsistency in designing and implementing development policies, which may push residents into unfavorable and sometimes dangerous situations. This overall comparison between the rural and suburban villages underlines a more complex link between space, functions, and flood exposure in urban areas, as discussed by Sato (2006) and Ahmad and Simonovic (2013). One of the urban spatial characteristics, namely diversified and segregated settlements and their functions, affects the long-term precautionary measures taken by individual households.
It further emphasizes the additional complexity of coordinating projects in urban areas and exposes the adverse effects of an incoherent policy mixture on improving the protective measures of suburban residents, which could also be regarded as a product of the above-mentioned urban spatial characteristics. Concluding Remarks and Policy Implications Mitigating flooding threats remains a major challenge for the Vietnamese government despite interminable efforts to expand engineering works. This context requires an integrated management strategy that incorporates bottom-up approaches. The main aim of this study, therefore, was to examine the extent of long-term improvements to flood precautions taken by local authorities and private households between the two major floods of 1999 and 2017. The findings first emphasize the increasing irregularity and unpredictability of floods since the growing prevalence of the dam system in Thua Thien Hue Province, which makes floods more difficult to anticipate through conventional approaches. The majority of residents, therefore, doubted the efficacy of this top-down approach in reducing risks and suggested that more attention should be paid toward flood preparedness. Several public measures were implemented to enhance the defenses of local inhabitants. However, except for the early warning systems, which were regarded as operating much more effectively after nearly two decades, other measures relating to food reserves, emergency shelters, evacuation supports, and resources for post-flood recovery were still considered scant and were distrusted by most residents. Inadequate investment in preventive measures due to limited financial resources and the authorities' delayed response to emergencies, as is often seen in developing countries, was one of the primary reasons for the inefficiency of the above-mentioned efforts. Household self-preparation, therefore, became the decisive factor in flood risk protection. Nearly 20 years after the disastrous flood of 1999, most households paid greater attention to both structural and non-structural protective measures to actively deal with flood hazards. The damage caused by floods has since noticeably diminished. This confirms the central role of individual households in the bottom-up flood risk management strategy, especially in developing countries. Some social groups, nonetheless, appear to be lagging behind in this struggle against nature, as evidenced by their underperformance in the long-term improvement of precautionary measures. Poverty-related barriers were the root causes restraining the improvement of households in both rural and suburban villages. Hence, poverty eradication is a prerequisite to mitigating risks and should be integrated into flood risk management strategy as a foremost priority. Furthermore, households' external reliant psychology, though less common in the communities, also created subjective attitudes toward improving some types of personal precautions. While external reliant psychology in rural areas was attached to residents' abundant social capital, in the suburbs it was likely attributable to specific functions such as local markets. Breaking down this psychological barrier is essential to improve precautions further, but this should be done by thoroughly considering the difference between areas. The riverine suburb in the study area was further identified as vulnerable based on residents' limitations in upgrading structural measures.
This matter, apart from being affected by the precarious location itself, was strictly regulated by the policy for preserving and restoring the ancient landscape along the riverbank. Their vulnerability to floods, meanwhile, can potentially turn into livelihood vulnerability if the project is implemented. This, on the one hand, indicates the additional complexity of coordinating projects in urban areas, while on the other, it requires a smoother blend of development policies to limit adverse implications. Besides revealing the connections between poverty, external reliant psychology, and potential flood risk susceptibility, this study, by comparing two villages in terms of vulnerability, emphasized the importance of space-function links in the suburban area and the contradictions among different policy initiatives, such as landscape rehabilitation, disaster prevention, and livelihood maintenance. Therefore, besides considering social and geographical differences, supportive solutions targeting vulnerable groups should be combined with effective public measures and a coherent mix of policies for further improvements.
2021-01-08T21:41:55.195Z
2021-01-07T00:00:00.000
{ "year": 2021, "sha1": "c0ac0dfb0b60845aff77fbd6b573da50aefbdd7e", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s13753-020-00326-2.pdf", "oa_status": "GOLD", "pdf_src": "Springer", "pdf_hash": "c0ac0dfb0b60845aff77fbd6b573da50aefbdd7e", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geography" ] }
57057007
pes2o/s2orc
v3-fos-license
The study of epidemiology of Tuberculosis in Bane (Kurdistan) between 2003 and 2010 Background and purpose: Tuberculosis is a bacterial infection that is commonly caused by Mycobacterium tuberculosis. As Kurdistan Province is close to Iraq (a risky area due to political instability in recent years) and the city of Baneh has high levels of interaction with Iraq, the aim of this study was to evaluate the prevalence of tuberculosis in Baneh (Kurdistan) between 2003 and 2010. Materials and Methods: This descriptive longitudinal study was done from 2003 to 2010. Ninety-four people with TB were identified from the registry. The data included age, sex, type of disease, and time and place of registration. The data were analyzed using descriptive and inferential statistical methods through SPSS 20 software. Results: Among the 94 cases recorded in this analysis, women accounted for a higher percentage than men (58.51% vs. 41.49%), and the incidence rates were found to be 7.93 per 100,000 persons in men and 11.64 per 100,000 in women; the difference between men and women was significant (p < 0.01). Fifty-five percent of the patients were from urban areas and the rest from rural areas, and 98.9% of them were from Iran. Conclusion: The incidence rate of tuberculosis in females is higher than in males. In addition, the average delay from symptom onset to diagnosis was 191 days, which is relatively long given that TB is an infectious disease. Introduction Tuberculosis (TB) is a life-threatening infectious disease representing a wide range of clinical conditions mainly caused by Mycobacterium tuberculosis (1). The disease can be pulmonary (85%) or extrapulmonary (15%), the latter involving the vertebral column, kidney, skin, gastrointestinal system, lymph nodes, and genitourinary system; however, pulmonary involvement is more important because it is considered the reservoir of disease in the population. Approximately one-third of the global population (about 2 billion people) is infected with tuberculosis, but infection does not necessarily progress to active disease because the human immune system protects against it. Under immunosuppressive conditions, such as aging or the use of immunosuppressant drugs, 5 to 10% of infected people progress to active disease, and each untreated case can transmit the disease to 10 to 15 people. In spite of improvements in medical science, tuberculosis is still one of the most common causes of death in low- and intermediate-income countries (1)(2)(3). Because of the increased incidence of HIV infection and drug-resistant tuberculosis in recent years, WHO declared TB a global emergency in 1994 (1,4-7). TB is the main cause of death in HIV-positive patients, so it is still considered a health problem. Tuberculosis causes one death per 10 seconds in the global population. Based on WHO reports, about 80% of tuberculosis patients live in 22 countries, two of which are Pakistan and Afghanistan; tuberculosis is therefore still considered a serious public health issue in our country because of its adjacency to Pakistan and Afghanistan in the east, and to Asian countries in the north where drug-resistant TB persists. Furthermore, Iraq in the west is another concern, with a rising incidence of TB because of its political upheavals in recent years (6-9,14).
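The burden figures quoted in this introduction reduce to simple arithmetic. The sketch below multiplies out the stated ranges; all inputs are the numbers quoted above, not independent estimates.

```python
# Minimal sketch: rough burden arithmetic from the figures quoted above.
# All inputs are the ranges stated in the introduction, not new estimates.

infected = 2_000_000_000           # ~one-third of the global population
progression = (0.05, 0.10)         # 5-10% progress to active disease
secondary_cases = (10, 15)         # infections per untreated active case

low, high = (infected * p for p in progression)
print(f"expected active cases among those infected: {low:,.0f} to {high:,.0f}")
print(f"secondary infections per untreated case per year: "
      f"{secondary_cases[0]} to {secondary_cases[1]}")
```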
According to reports of the disease management branch of the Health Ministry of Iran, there are 1,000 new TB cases per year in our country, half of which occur in the 15-45-year age group, the productive age group of the country. The incidence rate of tuberculosis in our country is 13 per 100,000 people, and this rate is highest in Sistan and Golestan. The incidence rate of tuberculosis in Iraq is increasing because of its political changes in recent years, so in Kurdistan, because of its adjacency to Iraq, TB should be considered a critical issue, especially in the city of Baneh, considering the high volume of traffic to that country. We therefore performed this study to verify the incidence rate of TB in this city during the years 2003-2010. Materials and Methods This is a descriptive, longitudinal study based on data available for the years 2003-2010 in the city of Baneh. All tuberculosis patients referred to the health care centers of Baneh during the years 1382-89 (Iranian calendar, corresponding to 2003-2010) were identified from their archived documents and included in this study. These patients were diagnosed by pathological studies done by health care centers or private medical centers. Finally, the data of 94 patients were categorized based on age, gender, type of disease (pulmonary or extrapulmonary), and time and location of diagnosis. In the analysis, we used descriptive statistical methods, including the mean and standard deviation for age and for the mean period of delay in diagnosis. We also used frequency tables for descriptive variables such as gender, age- and sex-specific incidence rates, and a ratio test. Results Overall, 39 of 94 patients (41.49%) were male and 55 (58.51%) were female. 54.25% of patients lived in urban areas and 45.75% in rural areas. In total, 61 patients (70.21%) were in the 15-65-year age range, 27 (25%) were more than 65 years old, and 4 were less than 15 years old. The incidence rate of TB in Baneh was 8.9 per 100,000, highest for the pulmonary type (6.5) and lowest for the extrapulmonary type (2.4). As shown in Table 1, the TB incidence rate was 7.93 per 100,000 in males and 11.64 per 100,000 in females; the incidence rate in females was significantly higher than in males (p < 0.01). Ten percent of patients died of TB. The average weight of patients was 54 ± 10 kg. As shown in Diagram 1, the TB incidence rate was highest in the 80-84-year age group (104.4 per 100,000), and the risk increased with age. The mean age was 45.44 years in males and 53.33 years in females, a difference that was not statistically significant. The difference in incidence between urban and rural areas was also not significant. The most common form of extrapulmonary TB was gastrointestinal involvement. The mean delay in diagnosis was 191 ± 41 days. Discussion The cumulative incidence rate of TB in Baneh (8.96 per 100,000) is lower than the national rate (13.4) (8,7). The incidence rate of smear-positive TB in Baneh is also lower than the national rate. The diagnostic rate of TB in Iran is 61%, whereas this rate should be at least 70% according to WHO reports. Based on this diagnostic rate, the expected incidence rate in Iran is 22 per 100,000. A low diagnostic rate and delay in diagnosis are considered main causes of the spread of TB and are used as indicators for evaluating the quality of health care services. Any untreated smear-positive tuberculosis patient can spread the disease to 10 to 15 other people during a year, so early diagnosis of smear-positive tuberculosis and effective treatment are the best ways to prevent and control tuberculosis in the general population (15).
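The rates quoted above (e.g., 8.9 per 100,000 overall; 7.93 and 11.64 per 100,000 for males and females) follow the standard incidence calculation: cases divided by the population at risk, scaled to 100,000. A minimal sketch follows; the population denominator is a hypothetical placeholder, not the study's actual figure.

```python
# Minimal sketch of the incidence-rate arithmetic used above.
# The population figure is a hypothetical placeholder, not study data.

def incidence_per_100k(cases: int, population: int, years: float = 1.0) -> float:
    """Average annual incidence per 100,000 persons over the study period."""
    return cases / (population * years) * 100_000

# Example: 94 cases over an 8-year study period in a hypothetical
# population of 132,000 gives roughly 8.9 per 100,000 per year.
print(round(incidence_per_100k(94, 132_000, years=8), 2))
```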
Overall, the incidence rates of TB and of smear-positive TB in Baneh show that the diagnostic rate of smear-positive tuberculosis patients is low relative to the expected rate (13 per 100,000), especially considering Baneh's adjacency to Iraq. Additionally, the delay in diagnosis of smear-positive patients in Baneh is longer than the expected figure for the whole country (127 days) (18,1). In comparison with studies in Italy and England, the diagnostic delay in our country is longer; in Saudi Arabia, this delay is 60 days (18). Furthermore, the high incidence of extrapulmonary tuberculosis in comparison with the global rate (15%) could be due to failure in the diagnosis of pulmonary tuberculosis, false-positive diagnostic cases, and the increased incidence of HIV infection (15). Extrapulmonary tuberculosis in Baneh involves the abdomen more than other organs, which is similar to the country as a whole. Given the significant gap between the incidence rate of TB in Baneh and its expected rate, it is important to identify epidemiological differences and at-risk areas of the city. Diagnostic interventions, especially in at-risk age groups, assessment of diagnostic procedures in health care centers, enrichment of related frameworks, public training, strengthening the role of health care workers in the control of TB, and follow-up of people in close contact with tuberculosis patients are also necessary. The highest incidence rate of TB in Baneh is in the age group 65 and above. This pattern is similar to industrialized countries, but in those countries this result follows from tight control and suitable diagnostic interventions, whereas in developing countries the risk of TB infection has remained high, such that more than 75% of patients are younger than 50 years old and TB-related mortality is greatest in the productive age groups (15-59 years), so most complications of this disease fall on this age group (15,2,1). The incidence rate of TB in females is higher than in males in Baneh; according to studies done in Iran, this rate is higher in females than in males in all age groups. The incidence rate of TB before adolescence is 5.2 times higher in females than in males, which is similar to Golestan and Ardebil, but a study in Mazandaran showed no meaningful difference between males and females (16,17). Some studies show significant differences between females and males; for example, in some studies in India and other Asian countries, the incidence rate of TB in females is higher than in males (18). HIV co-infection with tuberculosis is a public health issue; its control needs epidemiological studies, and the first step is assessing the prevalence of HIV infection in tuberculosis patients. More efforts are needed to reach the expected smear-positive TB diagnostic rate (70%), to decrease the prevalence and mortality rates of TB by 50% by 2015, and to eliminate TB by 2050 (decreasing the incidence rate to less than 1 per 1,000,000) (19)(20)(21).
2019-01-23T12:15:25.745Z
2013-06-15T00:00:00.000
{ "year": 2013, "sha1": "ed06759423715cb7eff61d6382803d3af5758a98", "oa_license": "CCBYNC", "oa_url": "http://jhs.mazums.ac.ir/files/site1/user_files_d258bd/rostamifaridehhsr-A-10-25-22-2ee62dc.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "01644a4835d4e65766ced5db688bcee0302362e8", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
10799779
pes2o/s2orc
v3-fos-license
A secondary benefit: the reproductive impact of carrier results from newborn screening for cystic fibrosis Purpose: Newborn screening (NBS) for cystic fibrosis (CF) can identify carriers, which is considered a benefit that enables reproductive planning. We examined the reproductive impact of carrier result disclosure from NBS for CF. Methods: We surveyed mothers of carrier infants after NBS (Time-1) and one year later (Time-2) to ascertain intended and reported communication of their infants' carrier results to relatives, carrier testing for themselves/other children, and reproductive decisions. A sub-sample of mothers was also interviewed at Time-1 and Time-2. Results: The response rate was 54%. Just over half (55%) of mothers underwent carrier testing at Time-1; a further 40% of those who intended to test at Time-1 had tested by Time-2. Carrier result communication to relatives was high (92%), but a majority of participants did not expect the results to influence family planning (65%). All interviewed mothers valued learning their infants' carrier results. Some underwent carrier testing and shared the results with family. Others did not use the results or used them in unintended ways. Conclusion: While mothers valued learning carrier results from NBS, they reported moderate uptake of carrier testing and limited influence on family planning. Our study highlights the secondary nature of the benefit from disclosing carrier results from NBS. INTRODUCTION Newborn screening (NBS) aims to reduce childhood morbidity and mortality through early identification and treatment of affected infants. 1 This is considered the primary benefit of NBS programs and encourages universal screening of infants for many disorders worldwide. Secondary benefits may also accrue through NBS, such as benefits to the family and society, whereby NBS can inform family planning ("reproductive benefit") or advance the understanding of disease. Traditionally, these secondary benefits have not been sufficient to justify NBS. 2 However, recent scholarly discourse has highlighted reproductive benefits as an increasingly prominent goal of NBS, particularly when the clinical goal of identifying a treatable condition may not be assured. [3][4][5][6] One way that reproductive benefits arise through NBS is through the generation of carrier results. NBS for cystic fibrosis (CF) using typical testing protocols identifies many unaffected carriers of one CF mutation in addition to affected infants; indeed, the majority of infants with false-positive CF NBS results are identified as CF carriers during confirmatory testing. 7 In Ontario, CF NBS involves a two-step process of measuring immunoreactive trypsinogen (IRT) followed by screening the CF transmembrane regulator (CFTR) gene for 39 mutations. 8 Confirmatory sweat chloride testing is then performed on screen-positive infants, who are subsequently classified as having true-positive, false-positive, or inconclusive (i.e., genetic variants of uncertain significance with or without borderline sweat chloride; Table 1) results. Both parents of carrier newborns are offered genetic counseling and carrier testing at no charge as part of the disclosure protocol. There is long-standing debate about incidental carrier status identification through NBS. First, from an ethical perspective, this information is not typically available without informed consent, 9 and carrier testing is not usually pursued in minors. [10][11][12]
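The two-step Ontario protocol and the sweat-chloride classification described above amount to a simple triage scheme. The sketch below is a hedged illustration of that logic only: the IRT cut-off, the sweat chloride thresholds, and the function names are illustrative assumptions (the sweat values follow commonly cited clinical cut-offs), not the program's actual parameters.

```python
# Minimal sketch of the screening triage described above: elevated IRT
# triggers the CFTR mutation panel; screen-positive infants then receive
# confirmatory sweat chloride testing. All thresholds here are illustrative
# assumptions, not Ontario's actual protocol values.

IRT_CUTOFF = 60.0      # ng/mL, hypothetical percentile-based cut-off
SWEAT_CF = 60.0        # mmol/L, commonly cited diagnostic threshold
SWEAT_NORMAL = 30.0    # mmol/L, commonly cited upper limit of normal

def screen_positive(irt: float, panel_mutations: int) -> bool:
    """Steps 1-2: elevated IRT plus at least one of the 39 panel mutations."""
    return irt >= IRT_CUTOFF and panel_mutations >= 1

def classify(sweat_chloride: float) -> str:
    """Confirmatory classification of a screen-positive infant."""
    if sweat_chloride >= SWEAT_CF:
        return "true positive (CF)"
    if sweat_chloride < SWEAT_NORMAL:
        return "false positive (typically a carrier of one mutation)"
    return "inconclusive (borderline sweat chloride)"

# Example: screen-positive infant with a normal sweat test
if screen_positive(irt=75.0, panel_mutations=1):
    print(classify(sweat_chloride=12.0))
```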
Second, there is inconsistent evidence about the benefits and harms of CF carrier identification through NBS for parents, including the potential to inform family planning and the risk of psychosocial harms. 13 Studies have shown that communicating carrier results to parents of infants with false-positive results may cause anxiety. 14,15 Although this state tends to be short-lived, [15][16][17][18][19] anxiety about the carrier status of their infant persists for a minority of parents and leads to concerns about stigma and the physical health of their carrier child. 18,20,21 Third, parents typically state that they want infant carrier information to know reproductive risks, 19,21-24 yet evidence regarding parental use of that information is equivocal. A minority of parents avoided pregnancy after the disclosure of an infant's carrier status through NBS. 22 (Table 2) Uptake of prenatal diagnosis in subsequent pregnancies ranges from 14% to 66%, with termination rates of 69% to 100%. [25][26][27] Although the majority of parents share their infants' carrier results with relatives, 18,21 evidence of parents' pursuit of their own carrier testing is inconsistent, varying from 30% to 85% among parents of carrier infants, 18,21,22,24 with reduced uptake among next-degree relatives. 28 (Table 2) Several studies have documented the reproductive attitudes and behaviors of parents following CF identification, 21,25,27,[29][30][31][32][33] but these findings are not specific to NBS and are not longitudinal. Thus, there is limited evidence about the nature or extent of the "reproductive benefits" of sharing carrier results of NBS. We examined the uptake of carrier testing, carrier result communication, and the impact of carrier results on family planning among parents of a prospective cohort of infants with false-positive CF results identified through NBS. We also examined factors associated with these outcomes. Based on existing evidence, 21,25,[33][34][35] we hypothesized that carrier status of infants and the parity, education, and income levels of mothers would be associated with increased uptake of, and intentions to pursue, carrier testing for themselves, sharing the results with relatives, and using the results to inform family planning. We also explored participants' attitudes regarding these behaviors to elucidate the variation in their uptake.
MATERIALS AND METHODS Study design This study is part of a prospective, longitudinal, mixed-methods cohort study designed to investigate the impact of NBS for CF on families, health-care providers, and health services in Ontario. We received research ethics board approval from the University of Toronto, the Children's Hospital of Eastern Ontario, and the Hospital for Sick Children. Sample We recruited all mothers of infants confirmed to have false-positive results for CF after NBS follow-up at the Hospital for Sick Children during the 18-month data collection period. We excluded infants known to be deceased or in the neonatal intensive care unit, those adopted or involved with child welfare, and those excluded based on clinical judgment of inappropriateness (e.g., extreme distress, catastrophic events, significant language barriers). Data collection Surveys. We collected structured data using self-administered questionnaires with a modified Dillman approach; 36 up to three contacts were made. The Time 1 survey was sent to mothers of infants with false-positive results 4-8 weeks after confirmatory testing; confirmatory testing typically occurred when infants were approximately 4 weeks old. The Time 2 survey occurred 1 year later. Completion and return of the questionnaire package constituted consent to participate. Questionnaire design. The questionnaire was developed by a multidisciplinary team based on a literature review. 18,[20][21][22] We pretested the questionnaire with new parents recruited from the greater Toronto area (N = 11) through an online mothers' group to assess comprehension, face validity, and content validity. Interviews. We conducted semi-structured, open-ended interviews with a subsample of mothers of infants with false-positive results from the Time 1 and Time 2 cohorts who indicated their willingness on questionnaires and provided informed consent for in-person or telephone interviews. Interviews explored experiences regarding uptake of carrier testing, sharing carrier results with relatives and with their carrier infant, and family planning. Measures The measures specific to reproduction included the following: (i) carrier status of the infant; (ii) uptake of carrier testing by mothers and their partners; (iii) communication of infants' carrier results to relatives; (iv) influence of carrier results on family planning; and (v) carrier testing of other children. Analysis Surveys. We calculated the proportion of respondents who indicated yes/no to the test measures. We used chi-square and Fisher's exact tests to test the hypotheses of associations between participant characteristics and these measures. We considered two-sided P ≤ 0.05 to indicate statistical significance. Data were managed and analyzed using SPSS 16.0.0 (SPSS, Chicago, IL). We report cross-sectional analyses for Time 1 and longitudinal results for the subsample of respondents who completed both Time 1 and Time 2 questionnaires. Time 1 questionnaires compare across several study groups (carrier, noncarrier, other/uncertain); Time 2 results are restricted to mothers of carriers because skip patterns prompted noncarrier/other mothers to skip sections pertaining to reproductive risk. Interviews. Interviews were recorded, transcribed, and coded. We used a thematic approach and applied and modified preexisting codes from the interview guide pertaining to carrier testing, family communication, and family planning; we allowed new themes to emerge from the data using constant comparison. 37
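As an illustration of the contingency-table tests described above, the sketch below runs a chi-square test and Fisher's exact test with SciPy; the study's own analysis was performed in SPSS, and the 2 × 2 counts are borrowed from the Results reported below (42/77 mothers of carriers vs. 4/30 mothers of noncarriers tested) purely as an example, not as a reproduction of the published analysis.

```python
# Sketch of the contingency-table tests described above, using SciPy rather
# than SPSS. The 2 x 2 counts come from the Results section of this paper
# (mothers tested vs. not tested, by infant carrier status).
from scipy.stats import chi2_contingency, fisher_exact

table = [[42, 35],   # mothers of carriers: tested (42/77), not tested
         [4, 26]]    # mothers of noncarriers: tested (4/30), not tested

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)  # exact test, useful for small expected counts

print(f"chi-square: chi2={chi2:.2f}, dof={dof}, p={p_chi2:.4g}")
print(f"Fisher exact: OR={odds_ratio:.2f}, p={p_fisher:.4g}")
# The study used two-sided P <= 0.05 as the significance threshold.
```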
We used Time 1 interviews to identify themes and then searched for confirming/disconfirming evidence in Time 2 interviews. No new themes occurred during Time 2 interviews; therefore, these data are not shown. Response rate We received completed questionnaires from 134 of 246 mothers (54%) during Time 1 and 95 of 216 mothers (44%) during Time 2 (30 fewer mothers were approached at Time 2 because they declined participation during Time 1 and were not approached again, or their survey was returned undelivered). We report data for a total of 131 mothers (Time 1), of whom 74 (Time 2) completed both the Time 1 and Time 2 surveys and responded to the carrier status question at both Time 1 and Time 2. Participant characteristics The characteristics of our survey samples are reported in Table 3 and appear similar to those of the CF carrier population reported in other studies, except that individuals in our samples had higher education levels. 18,21,22 There were no significant differences in characteristics between mothers of carriers and other mothers during Time 1 and Time 2, except that at Time 1 mothers of carriers had higher incomes compared with other mothers. We interviewed 22 mothers of carriers during Time 1 and 25 at Time 2 (7 were interviewed twice). The majority of Time 1 participants were older than 30 years (18/22; 82%), lived in larger cities (16/22; 73%), and earned more than $80,000 (18/21; 86%). Survey results Carrier testing uptake. During Time 1, carrier testing uptake was reported by 55% (42/77) of mothers of carriers (Table 4). Four of 30 (13%) mothers of noncarriers and 10/24 (42%) mothers in the "other/do not know" category also reported they had been tested. Mothers also reported fathers' testing uptake; carrier testing uptake and intentions reported for fathers were similar to mothers' reported uptake and intentions (data not shown). Follow-through on intentions to undergo carrier testing. Fifteen mothers of carriers who had not undergone carrier testing during Time 1 intended to and, of those, six (40%) had undergone testing by Time 2 (Figure 1). Of the remaining nine, … (Figure 1). The majority (>65%) of mothers of carriers did not expect or were unsure whether to expect the results to influence family planning, which remained consistent at follow-up. Follow-through on intentions to communicate results to family. Only a minority (3/57; 5%) of mothers of carriers did not tell relatives that they may be carriers during Time 1. Of those, all three had told relatives by the time of follow-up (Figure 1). Carrier testing among other children Few mothers had their other children undergo carrier testing during Time 1: 8% (3/37) of mothers of carriers, 6% (1/16) of noncarriers, and none of the unsure/other mothers (Table 4). Factors associated with reproductive behaviors During Time 1, mothers of carriers were significantly more likely to undergo carrier testing themselves, express an intention to undergo testing, and notify relatives of their infant's carrier results compared with other mothers (P < 0.001) (Table 4). Primiparous mothers were more likely than other mothers to undergo carrier testing and to expect that the results would influence their family planning (P < 0.01; Table 4 and Supplementary Table S1 online). Qualitative results. Our qualitative analysis extends our survey data, suggesting the existence of two groups of mothers.
Although all mothers identified value in learning their infants' carrier information, some used it in a targeted fashion to inform carrier testing and family communication, and others either did not use it or used it in unintended ways. Reproductive benefit of carrier information For some mothers, their infants' carrier results were influential in informing family planning for themselves, which motivated parents to pursue their own carrier testing: "If we were both carriers, we actually kinda decided we wouldn't have another kid" (ID #253). Carrier results were also perceived as important for relatives' family planning, which was another motivation to pursue their own carrier testing: "Once we figured out who was the carrier then we discussed [results] with that side of the family...knowledge is power in this case" (ID #50). Mothers reported informing first-degree, second-degree, and third-degree relatives, particularly those planning to have children. Many described the nature of the communication as "talking to [relatives] about it," whereas others forwarded letters provided by the clinics to the family members who were at risk: "I have plans to send an email around with the information I got from the hospital" (ID #281). These mothers also valued learning their infants' carrier results for their children's future reproductive planning and partner selection. In most cases, parents planned to discuss the results with their children in the future when they were adults and did not perceive this to be "too big of an issue" (ID #198). However, some were concerned that their children would misunderstand the implications or worry: "I would just want to make sure that she's at a mature enough level that alarm bells don't go off in her head and she starts stressing about, is this a disease I'm going to get" (ID #141). Learning about infant carrier status sometimes represented an opportunity to learn whether other children are also carriers, which motivated some mothers to undergo carrier testing; however, few also had their other children tested. Lack of reproductive benefit of carrier information Other mothers appreciated receiving their infants' carrier results but did not use the information or used it in an unintended manner. In some instances, this was because they did not plan to have more children. In other instances, the reproductive value of the information was not a focus. This was evident when parents indicated that they pursued carrier testing for themselves out of curiosity or convenience: "I was just curious" (ID #282). Mothers in this group also shared the carrier results of their infants with both sides of the family without attending to which parent was the carrier, and thus which side of the family was specifically at risk: "We're letting everybody know, but we didn't feel like we needed to do the testing ourselves to narrow down exactly who to give the information to" (ID #70). These mothers were also uncertain about sharing the carrier results of their infants with their children in the future as adults or gave it little consideration. Finally, these mothers did not expect the results to inform reproductive planning, noting that they would "continue on just like we have" (ID #182). DISCUSSION Our study provides the first prospective, longitudinal, mixed-methods data on the reproductive impact of carrier results of NBS for CF. Our results suggest that the reproductive benefits of CF carrier disclosure through NBS among mothers of CF carrier infants are not uniform or consistent.
Carrier result communication to relatives was high (92%), and some found the information influential in informing their family planning. However, there was moderate carrier testing uptake (55%), and the majority of participants (65%) did not expect the results to influence family planning. Interviews also identified a lack of utility in family planning for some because of life stage. Finally, carrier results were sometimes used in unintended ways: some parents tested their other children, and noncarriers informed their relatives that they may be carriers of CF. In addition to challenging international guidelines on carrier testing of minors, these actions may prompt unnecessary use of health-care services or lead to concern among relatives. Given the moderate levels of utility and the unintended consequences, our study highlights the secondary nature of the benefits arising from the generation of carrier results from NBS. Our results are consistent with literature reporting that the majority of parents share the carrier results of their infants with relatives 18,21 and that a minority indicate that the results influence decisions to have more children. 21,22,25,26 Our maternal carrier testing uptake rate reflects the median reported among parents of carrier infants (30-85%; Table 2); this was associated with parity of mothers and the carrier status of their infants. Only one other study compared hypothetical and reported reproductive behaviors, but these were among parents of children with CF and were restricted to use of prenatal testing and termination of pregnancy. 26 Thus, our study provides novel results for the intended versus actual uptake of carrier testing and for communication and influence on family planning among parents of carrier infants identified through NBS, the associated factors, and qualitative insights into the variation in behaviors reported in the literature (summarized in Table 2). Our study also revealed some unintended consequences of disclosure of CF carrier results. First, some mothers "told everyone" in their families that they may be CF carriers without confirming which side was at risk. This may create more carrier testing and use of counseling services than would otherwise be necessary. Second, there appears to be evidence of some confusion, because mothers of noncarrier infants also reported pursuing carrier testing (13%) and telling relatives they may be carriers (32%), which is inconsistent with guidelines. 38 These results indicate a need for improved parental understanding of the implications of noncarrier results. Equipping primary care providers with detailed information about NBS, carrier results, and their reproductive implications is an additional avenue to ensure patient understanding, because primary care providers often support patients with positive NBS results. Third, a minority of mothers appear to be testing their other children for carrier status. This reveals fundamental challenges of carrier identification through NBS, which may create disparities in access to carrier information between newborns and their older siblings 39 and conflict with existing policies and norms that advise against carrier testing of asymptomatic children. 9 Our results provide timely contributions to the evidence base regarding the reproductive impact of sharing carrier results identified through NBS in light of ongoing expansions of NBS worldwide.
Our findings raise broader questions about the potential reproductive benefit derived from carrier disclosure as a result of NBS and warrant consideration in policy decisions supporting expanded screening programs. Although mothers valued learning the carrier results of their infants, our study demonstrates moderate intended and reported testing uptake and limited influence on family planning. These results suggest that carrier results should be considered a secondary benefit and thus support previous calls to consider them "additional" or "limited secondary" benefits. 21,25 There are several limitations to our study. One year may not be sufficient time to assess maternal carrier testing uptake. We did not assess the mothers' understanding of carrier status. However, virtually all parents with false-positive results in Ontario are offered free genetic counseling and carrier testing. Therefore, access to genetic testing and counseling or a lack of understanding of the reproductive implications should not represent major confounders in our study. Further, the interviews provide additional support of the mothers' understanding of the implications of their infants' carrier status. Finally, although our response rate was modest, it is consistent with, if not higher than, those of similar population-based surveys among CF cohorts (e.g., 37%, 22 45%, 18 53% 40 ). Furthermore, our study provides the first prospective, longitudinal, mixed-methods results regarding the reproductive impact of carrier results of newborn screening for CF. Limitations notwithstanding, our study provides a timely contribution to the evidence base for the "reproductive benefits" of sharing carrier results to inform NBS policy. Carrier identification through NBS may motivate carrier testing and family communication, but it does not necessarily influence family planning or reproductive behaviors. If the influence of carrier status on family planning is a proxy for the value of carrier information obtained through NBS, then this benefit is achieved for only a minority of individuals, thus suggesting that it should maintain its historic status as a secondary benefit of NBS. SUPPLEMENTARY MATERIAL Supplementary material is linked to the online version of the paper at http://www.nature.com/gim
Mountain birch – potentially large source of sesquiterpenes into high latitude atmosphere S. Haapanala, A. Ekberg, H. Hakola, V. Tarvainen, J. Rinne, H. Hellén, and A. Arneth Department of Physics, University of Helsinki, Finland Department of Physical Geography and Ecosystems Analysis, Lund University, Sweden Air Chemistry Laboratory, Finnish Meteorological Institute, Finland Received: 29 April 2009 – Accepted: 20 May 2009 – Published: 2 June 2009 Correspondence to: S. Haapanala (sami.haapanala@helsinki.fi) Published by Copernicus Publications on behalf of the European Geosciences Union. Introduction Volatile organic compounds (VOCs) are a diverse group of hydrocarbons emitted to the atmosphere from both biogenic and anthropogenic sources (Guenther et al., 1995; Simpson et al., 1999; Piccot et al., 1992). Their double bonds give rise to high reactivities towards the hydroxyl radical (OH), ozone (O3), and the nitrate radical (NO3). Therefore the volatile organic compounds play an essential role in the regulation of the oxidizing capacity of the atmosphere. The VOCs also contribute to the formation and growth of secondary organic aerosols (Kulmala et al., 2004; Tunved et al., 2006), and they may form long-lived oxidation products, with the potential of affecting atmospheric chemistry in remote regions (Law and Stohl, 2007). Biogenic VOC emissions in the tropical, temperate and boreal vegetation zones have been extensively studied, while the high-latitude, e.g. subarctic, ecosystems have gained much less attention (Tiiva et al., 2008; Bäckstrand et al., 2008; Ekberg et al., 2009). However, the subarctic vegetation zone covers large areas on the globe, and despite its relatively low biomass density it can have a significant impact on atmospheric VOC concentrations, especially on local to regional scales. Largely as a result of the prevalence of short and cool summers, biogenic VOC (BVOC) emissions from the northern regions are likely to be relatively small compared to the global emissions. However, atmospheric reactions such as particle formation from biogenic precursors and ozone destruction/formation occur on regional rather than global spatial scales (e.g. Tunved et al., 2006; Svenningsson et al., 2008). Mountain birch (Betula pubescens ssp. czerepanovii (Orlova) Hämet-Ahti), a subspecies of downy birch, covers an area of almost 600 000 ha in the Scandinavian subarctic. The birch species in boreal and temperate regions have been found to emit substantial amounts of a wide range of C5 to C15 VOCs (Isidorov et al., 1985; König et al., 1995; Hakola et al., 1998, 2001; Vuorinen et al., 2005). BVOCs emitted from the mountain birch in the subarctic may thus be of significant regional importance for the complex relationship between the ecosystem carbon flux, atmospheric chemistry and climate, but at present there is only one study available of their emissions (Steinbrecher et al., 1999).
The subarctic mountain birch forests are exposed to herbivory by caterpillars of the autumnal moth (Epirrita autumnata), and severe outbreaks occur on a regular basis with intervals of about 9-10 years. In some cases, large forest areas have suffered from mortality of most of the trees (Trägårdh, 1939; Haukioja et al., 1988). Mechanical damage of leaf tissue is known to induce an immediately enhanced production and emission of volatiles (Juuti et al., 1990). However, it has also been suggested that an immunological memory effect of herbivore attacks, potentially affecting the composition of the emitted VOC mixture, may persist for several years after the actual defoliation event (Ruuhola et al., 2007). The aim of the present study is to characterize the VOC emissions from a natural mountain birch forest. To obtain emission parameters that could be further exploited in emission inventories and emission model development, the temperature and the photosynthetic photon flux density (PPFD) were carefully recorded during the measurements. In addition to the VOC emission measurements, ecosystem scale photosynthesis was measured simultaneously at the same site. From these data we also managed to study the link between delayed responses to herbivory damage and the atmospheric VOC emissions of plants. Measurement site The measurements took place in the Stordalen Nature Reserve, located near Abisko in northern Sweden (68°20′ N, 19°03′ E, 360 m a.s.l.). The long-term annual mean temperature at the Abisko climate station (68°21′ N, 18°49′ E, 388 m a.s.l.) is −0.8 °C, with the warmest month being July (mean temperature 11.0 °C) and the coldest January (mean temperature −11.9 °C). The sheltered valley has a relatively dry microclimate with a mean annual precipitation of 304 mm. The highest precipitation occurs in July (mean rainfall 54 mm) and the lowest in April (mean rainfall 12 mm) (Alexandersson et al., 1991). Snow accounts for about half of the precipitation. The woody vegetation at the measurement site is dominated by mountain birch. The forest is limited by a mire in the north and continues in the other directions for hundreds of meters. There is a road with little traffic to the south of the measurement site. During the growing season of 2004 the area was affected by a massive outbreak of the autumnal moth. By the growing season of 2006, the trees had mostly recovered from the damage caused by the outbreak. The net CO2 exchange in 2006 was of the same order of magnitude as in the growing seasons preceding the outbreak in 2004 (Torbjörn Johansson, Lund University, Lund, Sweden, personal communication, 2009). In summer 2006 we conducted VOC emission measurements on four individual trees (numbered 1-4) and, as a complementary measurement, one of them (number 4) was measured again in summer 2007. The measurements took place during 28 June-5 August 2006 and during 16-17 July 2007. The dataset of the year 2006 consists of 40 chamber closure measurements and that of 2007 consists of 16 chamber closure measurements. During both of the campaigns the leaves of the studied trees were mature. Because all the measurements were conducted in the middle of the growing season, the seasonal variation of the emissions was not studied.
Sampling For the VOC emission measurements a branch growing about 2 m above ground level was placed in a transparent chamber made of Teflon film. The canopy is open and at that height, about half of the total tree height, the branch is sun-exposed. The enclosure installation took place at least one day before the measurements to avoid any effects of rough handling of the plant, which has been shown to cause increased emissions (e.g. Juuti et al., 1990; König et al., 1995; Hakola et al., 2001). The enclosure was appropriately ventilated until just prior to sampling initiation. The volume of the cylindrical enclosure was about 15 l. Inflowing air was pumped to the enclosure at a rate of about 5 l min−1. Ozone was removed from the inflow air using MnO2 coated copper nets. Samples from both the inflow and outflow air were collected by trapping C5-C15 hydrocarbons into cartridges filled with Tenax-TA and Carbopack-B/Carbograph 1TD adsorbents. The samples were taken using a constant flow rate of about 0.1 to 0.2 litres per minute and sampling times of 55 to 120 minutes, resulting in a 6 to 12 litre sample volume. The adsorbent samples were analyzed using an automatic thermodesorption device (Perkin-Elmer ATD-400) connected to a gas chromatograph (HP-5890) with a mass-selective detector (HP-5972). The precision of the repeated adsorbent calibration sample analysis was estimated to be about 6% for each compound. This value was used as an estimate for the repeatability of the concentration measurements. The emission rates were determined based on differences in concentration between inlet and outlet air. For further description of the enclosure system and chemical analysis, see Hakola et al. (2001, 2006). Despite the relatively high air flow through the enclosure, the temperature inside the enclosures increased during periods of strong solar radiation. The highest recorded temperature inside the enclosures was above 31 °C, the increase being as high as 7.1 °C. This should be taken into account when interpreting the regional emission values.
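The text does not spell out the emission-rate formula, but the description (inlet/outlet concentration difference at a known flow, normalized by foliage dry weight) corresponds to the standard flow-through chamber mass balance; the sketch below illustrates that balance with hypothetical numbers, not values from the campaign.

```python
# Flow-through chamber mass balance implied by the description above:
#   E = F * (C_out - C_in) / m_dw
# F: flow through the enclosure, C_out/C_in: VOC mass concentrations in the
# outflow and inflow air, m_dw: dry weight of the enclosed foliage.
# All numbers below are illustrative placeholders, not campaign data.
def emission_rate(flow_l_min, c_out_ng_l, c_in_ng_l, dry_mass_g):
    """Return the emission rate in ng g-1 dw h-1."""
    flow_l_h = flow_l_min * 60.0  # L min-1 -> L h-1
    return flow_l_h * (c_out_ng_l - c_in_ng_l) / dry_mass_g

# Example with the ~5 L min-1 enclosure flow mentioned above and
# hypothetical concentrations and foliage mass:
print(emission_rate(5.0, c_out_ng_l=12.0, c_in_ng_l=2.0, dry_mass_g=3.0))
# -> 1000.0 ng g-1 dw h-1
```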
Measured emissions Mountain birches were found to emit large amounts of mono- and sesquiterpenes. In addition, linalool emission was found to be substantial, about half of the monoterpene emission. Isoprene emission was negligible. Examples of the emissions of the different VOC groups are shown in Fig. 1 together with records of temperature, light intensity and net ecosystem CO2 exchange, obtained from an eddy covariance flux tower at the same site. From these results it is apparent that the emissions are strongly dependent on the temperature inside the chamber. The average emission spectra of the most abundant compounds during the measurements in 2006 and 2007 are shown in Fig. 2. When considering single monoterpene species, the monoterpene emission was dominated by sabinene. Other abundant monoterpenes were ocimene, trans-ocimene, terpinolene and α-pinene. The same compounds were identified by Hakola et al. (2001) from the emissions of downy birch in southern Finland. The total monoterpene emission averaged over all trees in 2006 was 1100 ng g−1 dw h−1 (2.2 pmol g−1 dw s−1), ranging from 0 to 14 000 ng g−1 dw h−1. In 2007 the average sum was almost equal, 1200 ng g−1 dw h−1 (2.5 pmol g−1 dw s−1), and varied between 0 and 4000 ng g−1 dw h−1. In 2006 sesquiterpenes were emitted in high amounts, the average total emission being 2700 ng g−1 dw h−1 (3.7 pmol g−1 dw s−1). The sesquiterpene emission varied between 0 and 31 000 ng g−1 dw h−1. The dominant sesquiterpene compound was α-farnesene, followed by β-caryophyllene. In 2007 the sesquiterpene emission was reduced to a fraction of the emission in 2006. In 2007 the average total emission was only 16 ng g−1 dw h−1 (0.022 pmol g−1 dw s−1), which is less than 1% of that in the previous year. The dominating compound was β-caryophyllene, while α-farnesene was not observed at all. The tree-to-tree variation of the emissions was high (see Table 1). Trees 1 and 4 had almost equal emissions of mono- and sesquiterpenes. Trees 2 and 3 had somewhat smaller monoterpene emissions but significantly higher sesquiterpene emissions than the other trees. Table 1. The results of the nonlinear regression analysis of the monoterpene and sesquiterpene emission rates using both the temperature dependent TEMP algorithm (Guenther et al., 1993) and the temperature and light dependent G97 algorithm (Guenther et al., 1993; Guenther, 1997). E0 (ng g−1 dw h−1) are the emission potentials at temperatures of 20 °C and 30 °C and incident PPFD of 1000 µmol photons m−2 s−1 in the G97 algorithm. β (°C−1) is the coefficient describing the strength of the temperature dependence in the TEMP algorithm. R2 is the regression statistic and the P values for fitted β are indicated using asterisks. The values in parentheses are the standard errors. N indicates the number of chamber closures in each subset. Mountain birch has several phenotypes, from small polycormic (multi-stemmed) shrubs to large monocormic (single-stemmed) trees (Vaarama and Valanne, 1973). Hybridization with dwarf birch (Betula nana L.) is common and it is one of the factors affecting the growth form. This variation may be one of the factors explaining the differences in the emissions between the trees. Also Hakola et al. (2001) point out the large variation in the emissions between individual downy birch trees. Temperature and light dependence of emission rates To study the temperature and light dependence of the VOC emission rates we performed nonlinear regression analysis using two widely applied emission algorithms. The TEMP algorithm (Guenther et al., 1993) is E = E0 exp[β(T − T0)], where E is the emission rate at temperature T, E0 the emission potential at temperature T0, and β is an empirical coefficient describing the slope of the temperature dependence. It describes emissions from storage pools inside the plants and it is usually considered as the monoterpene emission algorithm. The G97 algorithm (Guenther et al., 1993; Guenther, 1997) is a both temperature and light dependent description of the emission rate, given by E = E0 CT CL, where E is the emission rate at temperature T and PPFD L, and E0 the emission potential at temperature T0 and PPFD L0. The temperature dependence factor CT is an Arrhenius-type description of enzymatic activity, having its maximum above 35 °C.
CL is a light dependence factor, saturating at 1000 µmol m−2 s−1. This algorithm is applied to isoprene and monoterpene emissions that are not stored but emitted rather directly from production (e.g. Rinne et al., 2002; Kuhn et al., 2004). The regression analysis was performed separately for each of the four studied birches and both years. The regression with the TEMP algorithm was performed both with a fixed and a variable strength of the temperature dependence. The selection of the fixed β value was based on commonly used values for the emission rates. For monoterpenes, β = 0.09 °C−1 was used, following the proposal of Guenther et al. (1993) and confirmed by several investigators thereafter for various plant species. For sesquiterpenes, a larger range of values has been reported. The average value from the studies listed by Duhl et al. (2008) is β = 0.18 °C−1, which was adopted in this study. A summary of the resulting regression parameters is given in Table 1.
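For readers who want to reproduce this kind of fit, the sketch below implements the TEMP and G97 parameterizations and estimates E0 and β with scipy.optimize.curve_fit. The G97 constants are the standard values of Guenther et al. (1993, 1997); the temperature, PPFD and emission arrays are synthetic placeholders, not the Abisko chamber data. The hybrid pool-plus-synthesis description discussed below can be fitted with the same machinery by summing the two model functions.

```python
# Minimal sketch of the two emission algorithms and a nonlinear fit.
import numpy as np
from scipy.optimize import curve_fit

R = 8.314                        # gas constant, J mol-1 K-1
T_S, T_M = 303.0, 314.0          # K, standard and maximum-activity temperatures
C_T1, C_T2 = 95000.0, 230000.0   # J mol-1, Guenther et al. (1993)
ALPHA, C_L1 = 0.0027, 1.066      # empirical light-response constants

def temp_algorithm(T, E0, beta, T0=293.15):
    """Pool emission, E = E0 * exp(beta * (T - T0)); T and T0 in K."""
    return E0 * np.exp(beta * (T - T0))

def g97_algorithm(X, E0):
    """Synthesis emission, E = E0 * C_T * C_L; T in K, L in umol m-2 s-1."""
    T, L = X
    c_t = (np.exp(C_T1 * (T - T_S) / (R * T_S * T))
           / (1.0 + np.exp(C_T2 * (T - T_M) / (R * T_S * T))))
    c_l = ALPHA * C_L1 * L / np.sqrt(1.0 + ALPHA ** 2 * L ** 2)
    return E0 * c_t * c_l

# Hypothetical chamber closures: temperature (K), PPFD (umol m-2 s-1)
# and emission rate (ng g-1 dw h-1); placeholders only.
T = np.array([283.0, 288.0, 293.0, 298.0, 303.0])
L = np.array([200.0, 500.0, 800.0, 1200.0, 1500.0])
E = np.array([150.0, 420.0, 1100.0, 2900.0, 7500.0])

# "Variable beta" TEMP fit, analogous to the fits reported in Table 1.
popt, _ = curve_fit(temp_algorithm, T, E, p0=[1000.0, 0.09])
print("TEMP: E0 = %.0f ng g-1 dw h-1, beta = %.2f C-1" % tuple(popt))

# G97 fit of the single emission potential at standard conditions.
popt, _ = curve_fit(g97_algorithm, (T, L), E, p0=[1000.0])
print("G97:  E0 = %.0f ng g-1 dw h-1" % popt[0])
```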
In Fig. 3a we show the monoterpene emissions of birch 4 as a function of the G97 activity factor CL CT. There was a clear difference between the years in these patterns. In 2006, the emissions increased only when CL CT > 0.5, whereas in 2007 some monoterpene release took place already at lower values of the activity factor (CL CT > 0.2). For sesquiterpenes (Fig. 3b) the observed pattern was possibly reversed, with emissions becoming measurable at lower values of CL CT in 2006 compared to 2007. Both mono- and sesquiterpene emissions were slightly better explained by the solely temperature dependent algorithm (TEMP). In most cases, allowing the β coefficient to vary produced somewhat better regression statistics, especially for the monoterpenes. This method yielded correlation coefficients R2 of 0.72 for monoterpenes and 0.65 for sesquiterpenes, averaged over all four trees (Table 1). Forcing the temperature coefficients to be constants adopted from the literature decreased the correlation coefficients R2 to 0.48 and 0.63 for mono- and sesquiterpenes, respectively. Between single trees the temperature dependence coefficient β varied from 0.04 °C−1 to 0.93 °C−1 for the monoterpene emission and from 0.15 °C−1 to 0.25 °C−1 for the sesquiterpene emission. The temperature coefficients for mono- and sesquiterpene emissions, averaged over all four trees, were 0.32 °C−1 and 0.21 °C−1, respectively. Neglecting the non-significant regression results (P value > 0.05, marked n.s. in Table 1), the temperature coefficients became 0.39 °C−1 and 0.23 °C−1 for mono- and sesquiterpene emissions, respectively. For comparison, Hakola et al. (2001) reported a β value of 0.11 °C−1 for monoterpenes and 0.14-0.22 °C−1 for sesquiterpenes from their downy birch measurements. This temperature coefficient for the sesquiterpene emission rate is in the same range as our results. The monoterpene emission rate, however, seems to be significantly more sensitive to temperature in mountain birch than in downy birch. The temperature and light dependent algorithm (G97) was able to reproduce the measurement data reasonably well in most of the cases. This algorithm explicitly sets the emission to zero in total darkness. Similar behaviour was also seen in the present data during night-time for both mono- and sesquiterpenes (see Figs. 1 and 4). This is easy to understand since birches do not have resin ducts where terpenes could be stored in large amounts, and hence most of the emission must come almost directly from synthesis. This was also recently shown for silver birch (Betula pendula L.) by Ghirardo et al. (2009), and emerges as the more general pattern of monoterpene emissions from deciduous trees (Schurgers et al., 2009). To test whether the terpenes from mountain birch would have a mixed emission pattern (some from storage, some directly after production), we tried to reproduce the emissions using a linear combination of the TEMP and G97 algorithms, following the approach used by e.g. Steinbrecher et al. (1999). This exercise was conducted only for the data from 2007, when the trees were supposed to function normally, without special resistance to herbivory. The hybrid algorithm slightly improved the fit, as expected. Holzke et al. (2006) used a hybrid algorithm where they made use of the temperature dependence part of the G97 algorithm as the pool emission, instead of the TEMP algorithm. In practice, the difference between these two approaches is small when applied in a narrow temperature range. However, the use of the TEMP algorithm allows setting the temperature dependence factor β to a value obtained from the current dataset. Figure 4 compares the measured monoterpene and sesquiterpene emission rates during the 2007 campaign together with the values obtained from fitting the algorithms. The differences between the measured and predicted emissions are also displayed. Emission potentials Emission potentials were calculated for each tree using both the TEMP and G97 algorithms (see Table 1). The emission potentials were calculated at temperatures of 20 °C and 30 °C, the former temperature giving the correct order of magnitude of the emission rates at the typical maximum temperatures that really occur in the area. The values at 30 °C are shown for easy comparison to other studies, and these values are discussed below. In all cases, tree-to-tree variations in the emission potentials were high. Best fits to the data were obtained using the variable temperature coefficients in the TEMP algorithm. The resulting emission potentials for monoterpenes varied from 620 to 12 000 ng g−1 dw h−1, the average value being 5300 ng g−1 dw h−1 (10.8 pmol g−1 dw s−1). Emission potentials for sesquiterpenes varied between 260 and 16 000 ng g−1 dw h−1, the average value being 6500 ng g−1 dw h−1 (8.8 pmol g−1 dw s−1). The lowest sesquiterpene emission potential was obtained in 2007, when the sesquiterpene emission was clearly different from the emission in 2006. For comparison, Hakola et al.
(2001) reported late summer monoterpene emission potentials of downy birch ranging from 170 to 5490 ng g−1 dw h−1. For sesquiterpenes, they measured emission potentials in the range 310-6940 ng g−1 dw h−1. The emission potentials of both mono- and sesquiterpenes obtained from our measurements are somewhat higher than those measured earlier for downy birch. If we force the temperature dependence to a fixed value β = 0.09 °C−1, the resulting emission potentials strongly decrease. For example, birch number 4, measured in 2007, had a monoterpene emission potential of 10 000 ng g−1 dw h−1 (20 pmol g−1 dw s−1) when applying the temperature dependence coefficient obtained from the measurements, whereas the emission potential decreased to 4200 ng g−1 dw h−1 (8.6 pmol g−1 dw s−1) when β was set to 0.09 °C−1. For the G97 algorithm, the monoterpene emission potentials ranged from 490 to 11 000 ng g−1 dw h−1, with an average of 3900 ng g−1 dw h−1 (8.0 pmol g−1 dw s−1). For sesquiterpene emissions the emission potential varied between 98 and 14 000 ng g−1 dw h−1, averaging at 5600 ng g−1 dw h−1 (7.6 pmol g−1 dw s−1). The lowest sesquiterpene emission potentials were derived from the 2007 data, and these significantly differed from those of the 2006 data, similar to results obtained by applying the TEMP algorithm. Possible reason for high sesquiterpene emission The change in the sesquiterpene emission between the years was dramatic, although climatic conditions did not differ heavily between the years. While sampling birch number 4, the effective temperature sum (+5 °C threshold, i.e. the sum of the positive differences between diurnal mean temperatures and +5 °C) was between 150 and 320 d.d. (degree days) in 2006 and 270 d.d. in 2007. The effective temperature sum was calculated using the air temperature data from Stordalen mire, located a few hundred meters from the birch measurement site.
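The effective temperature sum defined above is a simple running total, as the following minimal sketch shows; the daily mean temperatures used here are made up, not the Stordalen record.

```python
# Effective temperature sum in degree days (d.d.) above a +5 C threshold:
# the sum of the positive differences between diurnal mean temperatures
# and +5 C, as defined in the text. The daily means below are illustrative.
def effective_temperature_sum(daily_means_c, threshold_c=5.0):
    return sum(max(0.0, t - threshold_c) for t in daily_means_c)

week = [3.2, 6.5, 9.1, 11.0, 8.4, 4.9, 7.7]  # hypothetical diurnal means, deg C
print(effective_temperature_sum(week))       # 1.5 + 4.1 + 6.0 + 3.4 + 2.7 = 17.7
```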
It is well known that stress-induced terpenoid emissions can differ from normal emissions in both magnitude and composition. A wealth of literature has demonstrated a change in the emission pattern, particularly increased emissions of the sesquiterpenes α-farnesene and β-caryophyllene, linked to various stress factors of the plants (e.g. Paré and Tumlinson, 1999; Holopainen, 2004). Stress factors include for instance high temperature, drought and mechanical or biological damage. One of the most important stress factors is insect herbivore damage. Herbivore-induced VOC emissions may help plants to defend themselves by repelling the insects, disturbing their growth and breeding, or by attracting the natural enemies of the insects. Staudt et al. (2007) studied the effects of gypsy moth (Lymantria dispar L.) feeding on the VOC emission from holm oak (Quercus ilex L.). Feeding induced the emissions of new VOC compounds, consisting of sesquiterpenes, a homoterpene and a monoterpene alcohol. Also undamaged leaves of infested trees emitted new VOCs, but with a different composition and at lower rates. We thus speculate that one possible reason for the very high sesquiterpene emission in 2006 might be related to herbivory damage through the autumnal moth, which occurred at peak outbreak rates in the area in 2004, since, in addition to the instant responses, plants can also have delayed responses. Immunological memory of mountain birch after herbivory by the autumnal moth is discussed by Ruuhola et al. (2007). They found that delayed induced resistance lasted as long as five years. The trees exposed to herbivory five years earlier maintained increased resistance against moth larvae. In addition, some changes in the chemical composition of the leaves were observed. The quercetin to kaempferol ratio was increased whereas phenolic compounds were not significantly affected. The general features of plant memory were recently reviewed by Bruce et al. (2007). They define a plant memory, or stress imprint, as a genetic or biochemical modification of a plant that occurs after stress exposure. These changes in gene expression or plant metabolism cause the plant to respond in a different way to future stress factors. Our results provide support from field measurements that changes in the VOC emissions have the potential to last for several years. If a single insect outbreak affects the emissions for about three years and outbreaks occur about once a decade, the mountain birch forest acts as a high sesquiterpene emitter during about one third of the years. As sesquiterpenes are known to be important for the formation and growth of secondary organic aerosols (e.g. Bonn and Moortgat, 2003), there may be a link between the occurrence of herbivores and aerosol particle formation events. Conclusions We measured the branch scale emissions of various VOC compounds from mountain birch, a dominant tree species of the European subarctic ecosystems. The mountain birch leaves were found to emit a large quantity of VOCs, with rates higher than those from downy birch. The data of Tarvainen et al. (2005) suggest that also in the case of Scots pine, trees growing in the north may emit more sesquiterpenes compared to trees measured in a mid-latitude growth environment. If that were to emerge as a general feature, it may be, at least partly, explicable by the short but intensive growing season in the north. We speculate that herbivory damage might be one of the reasons for the dramatic change in the sesquiterpene emissions between the years. Other possible reasons include temperature and drought stress, among several others. However, our dataset is too small to draw any firm conclusions. Herbivory damage is supposed to significantly affect the sesquiterpene emission. Ideally, this variability between the years should be taken into account in emission inventories, since even small changes in the emissions of sesquiterpenes have the potential to influence local atmospheric chemistry strongly due to the fact that sesquiterpenes are chemically very reactive. These interactions also raise questions for future climate change impacts of insect outbreaks and herbivory, and their interactions with atmospheric processes. However, today the available data are far too limited to comment on whether such changes will be substantial. While our data indicate possible substantial effects in response to insect attacks, further research is needed for more accurate predictions. From the present data it is obvious that both the monoterpene and sesquiterpene emission of mountain birch approaches zero in darkness. This suggests that the majority of the emission originates directly from synthesis rather than from storage pools. However, the exact behaviour of the emission is difficult to characterize from field data, as temperature and light are strongly correlated. Fig. 1. A time series of the meteorological variables together with the ecosystem scale CO2 flux and the emissions of different VOC groups from birch number 1. The upper panels (a) show the temperature measured inside the chamber (red dots) and in ambient air (red line). The middle panels (b) show the photosynthetic photon flux density (blue line) and the ecosystem scale CO2 flux (green line). The lower panels (c) show the sum of the emissions of monoterpenes (green dots), sesquiterpenes (red dots) and linalool (blue dots). The values on the x-axis are day-of-year in 2006.
Fig. 2. Mass based emission spectra of VOCs from birch number 4 in (a) 2006 and (b) 2007. Fig. 3. (a) Monoterpene emissions from birch number 4 in 2006 and 2007. Black dots indicate the sum of the emissions of all monoterpenes analysed. The red and green dots indicate the emissions of the two most abundant monoterpenes, α-pinene and sabinene, respectively. (b) Sesquiterpene emissions from birch number 4 in 2006 and 2007. Black dots indicate the sum of the emissions of all sesquiterpenes analysed. The red and green dots indicate the emissions of the two most abundant sesquiterpenes, α-farnesene and β-caryophyllene, respectively. Fig. 4. The measured and predicted emissions of (a) monoterpenes and (b) sesquiterpenes from birch number 4 on 16-17 July 2007. The lower panels show the differences between the measured and predicted emissions. Blue dots are the measured emissions with error bars showing the measurement uncertainty. Red dots display the emission according to the TEMP algorithm, green dots the G97 algorithm, and black dots the hybrid algorithm.
Microporous membrane-based liver tissue engineering for the reconstruction of three-dimensional functional liver tissues in vitro To meet the increasing demand for liver tissue engineering, various three-dimensional (3D) liver cell culture techniques have been developed. Nevertheless, conventional liver cell culture techniques involving the suspension of cells in extracellular matrix (ECM) components and the seeding of cells into 3D biodegradable scaffolds have an intrinsic shortcoming, low cell-scaffold ratios. We have developed a microporous membrane-based liver cell culture technique. Cell behaviors and tissue organization can be controlled by membrane geometry, and cell-dense thick tissues can be reconstructed by layering cells cultured on biodegradable microporous membranes. Applications extend from liver parenchymal cell monoculture to multi-cell type cultures for the reconstruction of 3D functional liver tissue. This review focuses on the expanding role for microporous membranes in liver tissue engineering, primarily from our research. Current Status of Therapies for Liver Diseases The liver is the largest internal organ in the body, accounting for 2% of the weight of an adult (~1.5 kg), 1 and responsible for more than 500 functions, such as the metabolism of sugars, proteins/amino acids, and lipids, detoxification of exogenous chemicals, production of bile acids, and the storage of various other essential chemicals, such as vitamins and iron. 2 Thus, its failure has potentially fatal consequences. Indeed, loss of liver function causes 40,000 deaths/year 3 and is the twelfth most frequent cause of death in the US. 4 To date, orthotopic liver transplantation is the only clinically accepted therapy for patients with end-stage liver disease or acute liver failure. Although about 10,000 patients are added to the waiting list annually, fewer than 7,000 undergo transplantations. It is estimated that the high prevalence of hepatitis C (~3%) will increase demand significantly over the next decade. 5 Additionally, orthotopic liver transplantation requires life-long immunosuppression. Thus, there is a continuing need for alternative therapies to liver transplantation. One such alternative is hepatocyte-based extracorporeal bioartificial livers (BALs). The liver is unique in its capacity to regenerate from even massive injuries, able to restore its original mass even if less than 20% of the cells remain undamaged. [6][7][8]
Thus, BALs could provide temporary support for patients who receive a partial hepatectomy due to acute liver failure or are awaiting orthotopic liver transplantation. Demetriou et al. have developed a BAL loaded with microcarrier-attached porcine hepatocytes and tested it in clinical trials. 9 However, clinical success has yet to be achieved despite research extending over more than two decades. 10 Another possible alternative to liver transplantation is hepatocyte transplantation. Because the procedure is less invasive than orthotopic liver transplantation and can be performed repeatedly, it could also be used in patients who are severely ill and unable to tolerate organ transplantation. Some studies have demonstrated its efficacy, experimentally and clinically. 11 However, poor engraftment of transplanted hepatocytes remains a major barrier to the successful expansion of hepatocyte transplantation therapy. 12,13 Tissue engineering is an attractive approach to the improvement of cell engraftment. Enhancing cell-cell contact and providing nonimmunogenic matrices before transplantation has been shown to improve cell engraftment in animal models. 14 Furthermore, tissue-engineered products, such as skin substitutes and cartilage replacement, have already helped thousands of patients 15,16 and other artificial tissues, such as bladder, cornea, bronchial tubes and blood vessels, are in clinical trials. 15,16 Thus, liver tissue engineering is considered a potentially valuable new therapeutic modality for liver disease. Liver Tissue Engineering for Reconstructing 3D Liver Tissues In a common approach to liver tissue engineering, parenchymal cells alone and/or a mixed population of parenchymal and nonparenchymal cells (NPCs) are combined with various forms of three-dimensional (3D) scaffolds and appropriate signaling molecules, such as cytokines or growth factors, that facilitate cell growth, organization, and differentiation. These processes can be classified into two categories. The first involves suspending hepatocytes in extracellular matrix (ECM) components (Fig. 1A). Hepatocytes grown on collagen-coated polystyrene beads in roller bottle cultures with NPCs were allowed to form cell clusters and were implanted in Matrigel, self-organizing into hepatic plate-like architectures three-dimensionally. 17 Fetal liver progenitor cells were co-cultured in 3D fibrin gel with endothelial cells, resulting in the formation of vascular structures by the endothelial cells and increased proliferation and function of liver progenitor cells. 18 Hepatocytes were transplanted into the subcutaneous space, where new vascular network formation was induced in advance by transplanting a polyethylene terephthalate mesh device coated with poly(vinyl alcohol) that allowed for the gradual release of basic fibroblast growth factor (bFGF), resulting in persistent survival for up to 120 d. 19 Photo-polymerization of a hepatocyte-suspended poly(ethylene glycol) (PEG)-based hydrogel has been developed for the reconstruction of 3D liver architectures. 20 Hepatocytes suspended in a pre-polymer solution were photoimmobilized locally within a 3D cell-hydrogel network, thus forming a functional 3D liver construct with complex internal architectures. Three-dimensional scaffolds composed of biodegradable materials can provide platforms for hepatocyte attachment (Fig. 1B).
Fetal liver cells seeded in poly-L-lactic acid (PLLA) 3D macroporous scaffolds formed small clusters and showed higher levels of hepatic function, comparable with those of adult hepatocytes. 21 Similarly, colonies of small hepatocytes (SHs), hepatic progenitor cells, placed on a collagen sponge with NPCs proliferated and expanded to form a hepatic organoid with highly differentiated functions. 22 Hepatocytes seeded on PLLA and/or poly(D,L-lactide-co-glycolide) (PLGA) sponges were engrafted when they were implanted at a site associated with abundant vascular networks with appropriate surgical stimulation. 23,24 Both approaches for liver tissue reconstruction thus seem efficacious, since cell behavior can be controlled using materials with various structural and functional properties. However, these earlier studies using ECM or scaffold-based designs to engineer tissues face a major drawback, poor cell density. In native liver tissue, cell density is significantly higher, compared with other tissues, such as bone and cartilage. Accordingly, hepatocytes within native liver tightly interconnect to form layered structures, termed hepatic plates. Additionally, there is only a slight gap between hepatocytes and liver sinusoids, liver-specific microvessels, facilitating rapid exchange of macromolecules between plasma and hepatocytes. Thus, cell-sparse constructs engineered with those scaffolds often do not closely resemble the native liver architecture. In contrast to earlier studies using ECM or biodegradable materials, scaffold-less cell-sheet engineering has been proposed for construction of 3D cell-dense liver tissue (Fig. 1C). For example, culture dishes, the surfaces of which were modified with a temperature-responsive polymer, have been used. Using such temperature-responsive culture surfaces, hepatocytes can be harvested as intact sheets and cell-dense thick tissues can be constructed by layering these cell sheets. 25,26 However, a highly complex fabrication process is needed to covalently graft the temperature-responsive polymer onto dish surfaces 27 and it also takes more than 30 min to harvest a cell sheet. 28 Magnetite cationic liposomes have also been used to label cells and to form multilayered sheet architectures. A magnetic field is then used to accumulate the magnetically labeled cells onto ultralow attachment culture surfaces and form multilayered sheets. 29 Cells can be harvested readily as intact cell sheets by pipetting. However, when this method was applied to hepatocytes, the sheets were not sufficiently strong for recovery. 30 Furthermore, because cells have to be harvested as an intact sheet in the two methods above, it is difficult to construct the complex 3D liver architectures that are made from smaller tissue units. In the liver, hepatocytes form hepatic plates while sinusoidal endothelial cells (SECs) form sinusoids. Additionally, biliary epithelial cells (BECs) form bile ducts, tube structures that carry bile secreted by hepatocytes. The liver is generated by repeating these functional tissue units. Thus, it is difficult to reconstruct the complex architecture of the liver in vitro using only cell-sheet engineering-based approaches.
In our approach, two-dimensional (2D) tissues, such as hepatic plate-like tissues, 31 microvascular networks and intrahepatic bile ducts 32 constructed on biodegradable microporous membranes are stacked and allowed to form cell-dense 3D tissues by degradation of the membranes in vitro or in vivo after their implantation (Fig. 1D). Our approach, stacking 2D tissues cultured on biodegradable microporous membranes to create complex 3D liver tissues, has advantages over scaffold-less cell-sheet engineering. First, PLGA was used to fabricate microporous membranes. PLGA is a biodegradable material that has been approved by the US Food and Drug Administration (FDA) for use in drug delivery, diagnostics, and other applications in clinical and basic science research. PLGA has already been used in tissue engineering as a 3D cell scaffold in various foams, 33 fibers 34 and sponges. 35 To fabricate microporous membranes, PLGA, dissolved in dioxane with various levels of moisture content, was spin-coated on a polyethylene sheet and then dried to generate micropores by the dioxane-water phase separation. 36 These membranes can be readily peeled off from the sheets with tweezers and cut into the desired shapes, which is relatively easy, compared with other approaches based on cell-sheet engineering. Second, damage to cells can be minimized because the cells are harvested with the membranes with no direct manipulation, such as enzymatic, thermal, or electric treatments. 37 Third, although cells in scaffold-less cell-sheet engineering have to be layered as intact contiguous sheets to construct 3D tissues, 2D tissues with various structures containing not only sheets but also network structures and duct structures can be stacked using our technique, enabling stepwise construction of complex 3D liver tissues from smaller 2D tissue units. Importantly, a previous study demonstrated that kidney-like tissues could be reconstructed from a combination of epithelial tissues and mesenchymal tissues, 38 suggesting that tissue-by-tissue assembly can be used to reconstruct complex tissues from cultured cells in vitro. We have also demonstrated that the stepwise assembly of hepatic plate-like tissues and microvascular networks is necessary for reconstruction of liver sinusoidal structures in vitro. 39 Using the microporous membrane-based approach we can thus overcome the problem associated with scaffold-less cell-sheet engineering approaches, the applicability of which to liver tissue engineering is limited. Functional Liver Tissue Engineering with Microporous Membranes To fabricate functional 3D liver tissues in vitro, our group developed a bottom-up approach that assembles smaller functional 2D tissue units using microporous membranes. This approach mimics much of the native liver architecture, in which functional 2D tissue units such as the hepatic plates, the sinusoids and bile ducts are the repeating structures. In this chapter, we review the progress in functional 2D tissue unit reconstruction using microporous membranes. Hepatic plates. We first explored the efficacy of the microporous membrane-based tissue assembling approach in reconstructing hepatic plates using polycarbonate (PC) microporous membranes. 40 Pairs of membranes were prepared and rat SHs were separately cultured on each. After the SH colonies had developed, one membrane was inverted on top of the other to form an SH bilayer.
In the stacked-up structures, the SHs of the upper and lower layers adhered to one another, and bile canaliculi (BC) formed between them, resulting in the formation of native hepatic plate-like tissues. The stacked-up structures were maintained for more than a month. The cells within the tissues exhibited mRNA transcription of hepatic differentiation markers such as albumin, multidrug resistance-associated protein (MRP), hepatocyte nuclear factor 4 (HNF-4), tyrosine aminotransferase (TAT) and tryptophan-2,3-dioxygenase (TO), and maintained a relatively high level of albumin secretion for more than a month. Thus, hepatic plate-like tissues with highly differentiated functions, including functional BC, can be reconstructed by stacking layers of SHs. However, the membranes remained in the reconstructed structures permanently, because the PC membrane was not biodegradable, suggesting that stacking more than two layers of SHs would be problematic. Thus, we next explored the possibility of using PLGA microporous membranes in the stacking culture method for the construction of hepatic plate-like structures, to overcome the problems described above. 36 SHs were cultured on a membrane to allow them to proliferate and form colonies, and the membrane was then stacked on top of colonies cultured in the dish, so that the membrane was sandwiched between two SH layers. More than two layers can be stacked if the membranes disappear by biodegradation after stacking of the cell layers. The membranes degraded gradually and disappeared almost completely by 14 d after stacking, resulting in the reorganization of the cells into hepatic plate-like tissues. As in the case when two SH layers were stacked to attach to one another, 40 the cells in the constructed structures formed BC. We confirmed that these hepatic plate-like structures could be maintained for at least 2 weeks after stacking of the cell layers. In addition, the cells exhibited a relatively high level of mRNA transcription of hepatic differentiation markers such as albumin, MRP2, bile salt export pump (BSEP), TAT and TO. This differentiated hepatic function was also confirmed by continuous secretion of albumin and urea into the culture medium for at least 20 d. Liver sinusoids. Unlike the microvasculature in other tissue beds, liver sinusoids have highly specialized structures, in which hepatocytes and sinusoidal lining cells, including SECs and hepatic stellate cells (HSCs), intimately associate with each other. 41 Based on these anatomical characteristics, the concept of a hepatocyte-HSC-EC complex that functions as a unit for transduction between the bloodstream and the hepatic parenchyma has been proposed. 41,42 Recent studies support the concept that HSCs serve as a bridge that mediates bidirectional metabolic interactions between sinusoids and hepatocytes, using prostanoids and/or gaseous mediators, such as nitric oxide (NO) and carbon monoxide (CO), as signaling molecules. 42,43 Thus, reconstruction of the liver sinusoidal architecture is essential in constructing functional liver tissues in vitro. We first established an SH-HSC-EC tri-culture system in which these cells form an in vivo-like physiological complex. 44 SHs and HSCs were first isolated from adult rat livers and cultured on polyethylene terephthalate (PET) microporous membranes. The SHs formed single-layered colonies on the membranes while the HSCs resided in the micropores. Then, ECs were inoculated on the opposite side of the membranes, resulting in the formation of HSC-mediated structures.
To obtain these structures, spatial control of HSC behavior by changing the pore size was important, suggesting that the membranes can be used not only as carriers but also as modulators of cellular morphogenesis. Furthermore, HSCs were confirmed to mediate SH-EC communication, in terms of EC morphogenesis. These results indicated that the SH-HSC-EC physiological complex could be achieved in the reconstruction of HSC-mediated structures. Using the above tri-culture system, the effect of direct contacts between HSCs and ECs on EC capillary formation was then determined. 39 HSC-EC contacts are increasingly recognized for their roles in EC capillary morphogenesis. 45,46 However, the hypothetical role of HSC-EC contacts in morphogenesis had remained unclear in the tri-culture. HSC-EC contacts were shown to inhibit EC capillary morphogenesis, suggesting that these contacts may be an important regulatory factor in EC capillary formation. Additionally, ECs responded to the induction of capillary morphogenesis before the formation of HSC-EC contacts, suggesting that controlling HSC behavior, both spatially and temporally, is a key engineering strategy for the reconstruction of sinusoidal tissue in vitro. Finally, we demonstrated reconstruction of HSC-incorporated sinusoidal structures. 47 In the sinusoids, HSCs surround the outer surface of EC capillary structures. To generate these structures, the heterotypic cell-cell interactions across the membranes needed to be improved. Thus, PLGA microporous membranes with higher porosity and reduced thickness were used. When the pore size and porosity of the membranes were optimized, HSCs migrated toward the EC capillary structures by passing through the membranes' pores and then surrounded them, resulting in the reorganization of sinusoidal-like structures. These structures were maintained for more than 20 d. The HSC-incorporated sinusoidal-like tissues retained higher levels of albumin secretion and of hepatocyte differentiation markers such as MRP2, BSEP, TAT and TO compared with SH-HSC organoids. Bile ducts. One problem remaining in the constructed hepatic plate-like tissues mentioned above is the accumulation of bile, which is known to be toxic to hepatocytes. 48 To reconstruct hepatic tissues with a bile drainage system, formation of bile ducts during culture is important. We demonstrated formation of bile ductular networks when rat BECs were cultured between two layers of collagen gel, with stimulation by dimethylsulfoxide (DMSO) in the culture medium. 32 These bile ductular networks were found to possess apical domain markers such as the Cl-/HCO3- anion exchanger 2 and the cystic fibrosis transmembrane regulator (CFTR), and well-developed microvilli on their luminal surfaces, and also expressed apical [aquaporin (AQP) 1, MRP2 and CFTR] and basal (AQP4 and MRP3) domain markers of BECs. Furthermore, the cells in the bile ductular networks responded to secretin stimulation and transported metabolized fluorescein from the basal side to the luminal space, demonstrating that the reconstructed networks were functionally and morphologically similar to bile ducts in vivo. However, the thick collagen gel layers prevented co-culturing of the bile ductular networks with hepatic plate-like structures in close proximity for the formation of hepatic tissues with a bile drainage system. To overcome this, we have explored the efficacy of PLGA microporous membranes as alternative cell scaffolds to collagen gel (unpublished data).
Bile ductular networks can be co-cultured with hepatic plate-like structures in close proximity if the membranes biodegrade after formation of the networks. We preliminarily confirmed formation of bile ductular networks when BEC colonies cultured on collagen gel were overlaid with microporous membranes, and their morphologies could be controlled by changing the pore size of the membranes, again suggesting that the membranes can be used not only as carriers but also as modulators of cellular morphogenesis. Furthermore, the ductular networks could be maintained for more than 90 d, even after the membranes had degraded. Conclusions We have described a novel liver tissue engineering approach using microporous membranes. Although the approach has so far been used only for the construction of 2D tissue units, we are currently working on assembling these 2D tissue units into functional 3D liver tissues in vitro. We believe that effective application of microporous membrane-based liver tissue engineering will provide new possibilities in the field of liver regenerative medicine. Disclosure of Potential Conflicts of Interest No potential conflicts of interest were disclosed.
2018-04-03T05:34:18.085Z
2012-10-01T00:00:00.000
{ "year": 2012, "sha1": "bf848970c7af268919d1d9a25f1e90d5aef02b2e", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc3568113?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "bf848970c7af268919d1d9a25f1e90d5aef02b2e", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
18961991
pes2o/s2orc
v3-fos-license
Drift-Compensated Adaptive Filtering for Improving Speech Intelligibility in Cases with Asynchronous Inputs In general, it is difficult for conventional adaptive interference cancellation schemes to improve speech intelligibility in the presence of interference whose source is obtained asynchronously with the corrupted target speech. This is because there are inevitable timing drifts between the two inputs to the system. To address this problem, a drift-compensated adaptive filtering (DCAF) scheme is proposed in this paper. It extends the conventional schemes by adopting a timing drift identification and compensation algorithm which, together with an advanced adaptive filtering algorithm, makes it possible to reduce the interference even if the magnitude of the timing drift rate is as big as one or two percent. This range is large enough to cover the timing accuracy variations of most audio recording and playing devices nowadays. Background An example of a conventional adaptive interference cancellation (a.k.a. noise cancellation, or "reference canceler filter" in [1]) system is shown in Figure 1. A broadcast signal played by a TV or radio receiver in the same room as the target speech interferes with the latter and makes it less intelligible in the digitized microphone output d(n). The goal is to reduce the interference u(n) contained in d(n) so as to improve the intelligibility of the target speech s(n). To achieve this, a reference x(n), being the original signal sent to the interfering loudspeaker, is filtered by an adaptive filter that automatically learns the electro-acoustic transfer function from the original to the microphone output and produces an output y(n) that resembles u(n). This y(n) is subtracted from d(n) to reduce u(n) so that s(n) in the output e(n) is enhanced. In other words, the signal-to-interference ratio is increased. Note that an adaptive interference cancellation system in Figure 1, or any of the others discussed in this paper, is not able to reduce ambient noise uncorrelated with x(n); it regards the noise as part of s(n). Details about conventional adaptive interference cancellation technology and adaptation algorithms in general can be found in [2]. With both d(n) and x(n) acquired synchronously (an assumption conventional schemes are based on), the system in Figure 1 may reduce the interference quite effectively. However, in some cases, it is not easy or even possible to obtain x(n) at the same time as d(n) is recorded. For example, there may be restrictions such that it is only possible to place one surveillance microphone on-site, and it is impossible to tap the interfering signal sent to the loudspeaker when the recording for d(n) is done.
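The conventional canceller of Figure 1 can be summarized in a few lines of code. Below is a minimal NLMS-based sketch, included only for illustration: the paper itself uses the faster Ratchet FAP algorithm introduced later, and all names and parameter values here are our own assumptions, not taken from the paper.

import numpy as np

def nlms_cancel(x, d, L=256, alpha=0.5, eps=1e-6):
    """Filter the reference x to mimic the interference in d and
    return the enhanced output e(n) = d(n) - y(n)."""
    w = np.zeros(L)                        # adaptive filter coefficients
    e = np.zeros(len(d))
    for n in range(L - 1, len(d)):
        xv = x[n - L + 1:n + 1][::-1]      # [x(n), ..., x(n-L+1)]
        y = np.dot(w, xv)                  # adaptive filter output y(n)
        e[n] = d[n] - y                    # interference-reduced output
        w += alpha * e[n] * xv / (np.dot(xv, xv) + eps)  # NLMS update
    return e

As the Background section notes, a scheme like this works only when d(n) and x(n) are sampled synchronously; the surveillance scenario just described is one case where synchronous acquisition of x(n) is impossible.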
It is then suggested in Section 4.6 of [1] that one obtains the original broadcast material separately, for example, from the broadcaster, and uses it as the reference input x(n). The block diagram in Figure 2 illustrates this principle. Material obtained separately may differ from the actual source of interference due to, for example, alterations or distortions during the broadcast process. As in [1], we assume in this paper that there are no such differences. In Figure 2, the broadcast material is independently played back twice: once for the interfering loudspeaker and another time when x(n) is acquired. In addition, there may be more independent playback or recording operations involved during the acquisition of d(n) (two more in the example of Figure 2). These operations are performed at different times and most likely by different devices. It is understood that each audio recording and playback device, be it a CD player, a cassette tape recorder/player, a VHS tape recorder/player, and so forth, (i) records or plays at an average speed different from that of others, because of their different timing accuracies, (ii) has an average speed that drifts over time, and (iii) may have irregularities in the recording/playback speed, called wow-and-flutter; this is true primarily of analog recording/playback devices. For example, our comparison between three devices revealed that the playback speed of a consumer portable CD player is 0.066% slower than the timing provided by the sound card digitizer in a personal computer, and a higher-end DVD surround receiver plays 0.0035% slower than the sound card. The wow-and-flutter of analog devices also varies across different recorders/players and from time to time with the same recorder/player. For example, the wow-and-flutter of an analog telephone answering system is allowed to be as large as 0.3% [3]. Table 1 of [1] indicates that the speed error of an analog recording device can be as large as 3.0% and its wow-and-flutter 1% rms. As a result of these factors, the interference components in d(n), which are supposed to be correlated with x(n), are in general not synchronous with x(n) in the system in Figure 2: there are varying timing drifts between them, due to the differences in speeds of their respective recording and playing operations and to possible timing jitters resulting from wow-and-flutter during those operations. Note that we use l and m (instead of n) as time indices for sampled signals in the on-site data acquisition part of Figure 2. This is to emphasize the fact that they are in general played back or acquired with sampling frequencies that can differ, though slightly, from those of {x(n)} and {d(n)} in the adaptive filtering unit.
The asynchronous nature of the problem, together with the facts that (i) a misalignment, due to the timing drift, of even a small fraction of a sampling interval can render a converged adaptive filter useless, and (ii) existing adaptation algorithms usually converge much more slowly than these timing variations, makes it difficult to achieve an appreciable interference reduction using just an adaptive filter in the configuration Figure 2 illustrates. In an attempt to alleviate the adverse impact of the timing variations discussed above, it is suggested in Section 4.6 of [1] that the inputs x(n) and d(n) in Figure 2 be manually aligned. In practice, one may be able to compensate for a timing drift with a constant rate (a.k.a. linear drift) by using an interpolation/decimation means to stretch or compress the time scale of {x(n)} or {d(n)} according to an estimate of the drift rate, but it is a laborious process to manually estimate such a rate. Furthermore, it would take even more effort to manually look after the more general case of a timing drift with a time-varying rate (a.k.a. nonlinear drift). This is because x(n) and d(n) would first have to be partitioned into segments small enough that the drift rate during each of them can be regarded as approximately constant. Thus, manual alignment as suggested in [1] is not an effective or efficient solution to the problem. It is then necessary to find a way of automatically identifying and compensating for timing drifts, regardless of whether their rates are constant or time-varying. In the application of echo cancellation techniques to voice-over-IP networks and in software implementations on personal computers, there can be similar problems, also caused by timing variations. Examples of a software speakerphone implemented on a personal computer are in [4,5]. The signal samples received from the far end of a voice link are delivered to the loudspeaker(s) at a rate that may be slightly different from the rate at which the microphone signal is sampled, although these two rates are nominally the same. This situation is similar to that in Figure 2. For the acoustic echo canceller to do a decent job, it is necessary to identify the difference and compensate for it. The algorithms in [4,5] focus on circumstances where the two sampling frequencies are slightly different but constant, that is, a constant rate or linear drift as mentioned above. There was extensive research in the 1980s [6,7] on a related topic: making the echo canceller for data modems immune to certain echo-path variations. These variations were caused by a frequency shift due to slightly different carrier frequencies and by timing jitters due to coarse adjustments made by a digital phase-locked loop. It is quite effective and popular to use a phase-locked loop to estimate and compensate for the frequency shift [6], and it is possible to eliminate the adverse effect of timing jitters that happen at known time instances [7]. However, these well-developed approaches cannot be readily applied to the case in Figure 2 because the timing jitters caused by wow-and-flutter are random and unpredictable. Thus, how to do interference cancellation in the configuration of Figure 2, with a significant and possibly time-varying timing drift between the two inputs and without any explicit information about the drift, has been an open issue. The goal of this research is to develop a scheme that is effective in this circumstance, with the expectation that it may also be applied to other applications such as those studied in [4,5].
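For a known, constant drift rate, the stretch/compress compensation described above reduces to ordinary rational resampling. The fragment below is our own illustration (not from the paper): it compensates a reference that runs 1% fast by resampling it by the rational factor 100/101 with SciPy's polyphase resampler.

import numpy as np
from scipy.signal import resample_poly

x = np.random.randn(101_000)   # stand-in for a reference whose clock runs 1% fast
# Resampling by up/down = 100/101 maps its samples back onto d's time base.
x_aligned = resample_poly(x, up=100, down=101)

The difficulty the DCAF addresses is precisely that, in practice, the rate is unknown and time-varying, so no fixed up/down pair suffices.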
The rest of this paper is organized as follows: the proposed scheme is detailed in Section 2, Section 3 presents some experiment results, and Section 4 is a summary. In addition, there are three appendices that provide details of certain proofs and derivations. The Proposed Scheme In overview, the proposed drift-compensated adaptive filtering (DCAF) scheme dynamically aligns the sequence {d(n)} with {x(n)} by (i) upsampling {d(n)} to obtain a new sequence {d_I(n')} with a much higher time resolution; (ii) finding the differences (errors) between {d_I(n')} and the adaptive filter's output; (iii) evaluating the errors to determine the nature of the timing drift; and (iv) downsampling {d_I(n')} accordingly to produce a sequence {d_r(n)} in which the interference components are synchronous with those in {x(n)}. The DCAF is shown in Figure 3, which replaces the adaptive filter and the summation node in the system in Figure 2. The scheme has been briefly reported at a conference [8], and more details are provided in this paper. As illustrated, there are three major components in Figure 3: (A) timing drift estimation and compensation, which is the essence of the proposed scheme and looks after the time alignment between the two inputs; (B) the Ratchet fast affine projection (FAP) adaptive filter, for fast convergence and low complexity; and (C) peak position adjustment, which is indispensable for such a time-drifting application of adaptive filtering. These three components will be discussed separately below. In this paper, we only discuss the time-domain approach, for ease of understanding the concepts. In practice, the DCAF could also be implemented in the frequency domain for improved efficiency. Timing Drift Estimation and Compensation. The term "timing drift" will henceforth refer to the aggregated net effect of the timing variations resulting from all playback and recording operations involved, such as those in Figure 2. In the DCAF scheme, the timing drift is dynamically estimated by evaluating certain time averages and then compensated for by properly resampling the primary input sequence {d(n)} to form a new sequence {d_r(n)} in which the interference components are synchronous with the reference input sequence {x(n)}. In other words, the sampling frequency for {d(n)} is dynamically adjusted so that the resultant {d_r(n)} has the same sampling frequency as that of {x(n)}, as if {d_r(n)} and {x(n)} were acquired synchronously. That being done, the adaptive filter is able to make a reliable estimate of the interference in {d_r(n)}. We now look at how the resampling is implemented, how the timing drift is estimated, and how the resampling is controlled to compensate for the timing drift. To resample {d(n)}, it is first upsampled by a factor I (I = 100 in this paper), resulting in an interpolated sequence {d_I(n')} (1), whose sampling frequency F_SI is I times that of {d(n)}. This is illustrated in Figure 4.
The upsampling is performed by first padding I − 1 zeros between each pair of adjacent samples in {d(n)} and then passing the resultant sequence through a low-pass filter. In the case used in our experiments, I = 100, and the FIR low-pass filter has 10208 coefficients, which are symmetric so that the filter has a frequency-independent group delay of (10208 − 1)/2 = 5103.5 interpolated samples. The passband ripple and stopband attenuation are 0.5 dB and 50 dB, respectively. The passband and stopband edges are located at 0.0048125 F_SI and 0.005 F_SI, respectively. Details about upsampling techniques can be found in a textbook on digital signal processing, for example, [9]. Then, {d_I(n')} is decimated by a time-varying factor D(n) ≈ I to arrive at the resampled sequence {d_r(n)}, whose sampling frequency approximately equals that of {d(n)}. This is achieved by d_r(n) = d_I(n') (2), where n' = Δ + nI + [offset(n)] (3). In (3), Δ is an integer, [·] denotes the rounding operation, and 0 ≤ offset(n) ≤ I − 1. If offset(n) has a constant value, then D(n) ≡ I; that is, {d_r(n)} and {d(n)} have the same sampling frequency but may have a constant offset in time. However, a time-varying offset(n) may result in D(n) deviating from I. The key to timing drift compensation is to dynamically adjust D(n) by modifying offset(n) in (3) so that the interference components in {d_r(n)} stay synchronous with {x(n)}. To do so, we update offset(n) adaptively using offset(n + 1) = offset(n) + offset_inc(n) (4), where the updating term offset_inc(n) stands for "offset increment." When the right-hand side of (4) goes beyond the range [0, I − 1], wraparound is performed as follows: if offset(n + 1) ≥ I, then offset(n + 1) is replaced by offset(n + 1) − I; else if offset(n + 1) < 0, then offset(n + 1) is replaced by offset(n + 1) + I (5), so that offset(n + 1) remains in the range [0, I − 1]. Based on (2)-(4), the decimation factor is D(n) = I + offset_inc(n) + δ (6), where δ is a zero-mean noise resulting from rounding; therefore, its rms value is 1/(2√3). In a steady state, for example when the timing drift rate is constant (the case considered in [4,5]), D(n) is expected to wobble around a constant defined by ⟨D(n)⟩ = I + ⟨offset_inc(n)⟩, where ⟨·⟩ is the time-averaging operator. It follows that, in that case, the ratio between the sampling frequencies of the original and the resampled sequences is ⟨D(n)⟩/I = 1 + ⟨offset_inc(n)⟩/I (7). The remaining issue is to estimate the timing drift so as to control offset_inc(n). We begin with a (2K + 1)-element (K < I/2) subsequence of (1): {d_I(n' + k)}, k = −K, ..., K (8), where K typically equals 15 in our experiments, and wraparound adjustments as per (5) are made if any offset(n) + k falls outside [0, I − 1]. Note that the element in the middle of (8) is (2). As illustrated in Figure 3, the adaptive filter's output y(n) is subtracted from (8) to produce 2K + 1 error values e_I(n' + k) = d_I(n' + k) − y(n), k = −K, ..., K (9), with the main error value in the middle, at k = 0. This enables us to examine the output error with an I-times finer time resolution, to facilitate timing drift estimation. Let us consider the expectations E{e_I²(n' + k)}, k = −K, ..., K (10). It is henceforth assumed that the adaptive filter has mostly converged and that there exists a unique minimizer k_opt. It is proven in Appendix A that the elements in (10), viewed as a function of k, attain a unique minimum (11) at k = k_opt. We then need to control offset_inc(n) in (4) over consecutive sampling intervals in order for the main (middle) error e_I(n') to remain at the minimum in (11); that is, k_opt = 0.
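A compact sketch of this resampling machinery follows. It is our own illustrative reading of (1)-(5) and (8)-(9), with assumed names and a much shorter interpolation filter than the 10208-tap design above; it is not the paper's implementation.

import numpy as np
from scipy.signal import firwin, lfilter

I, K = 100, 15

def upsample(d, numtaps=1001):
    """Zero-stuff by I, then low-pass filter to interpolate, cf. (1)."""
    stuffed = np.zeros(len(d) * I)
    stuffed[::I] = d                      # pad I-1 zeros between samples
    h = I * firwin(numtaps, 0.9 / I)      # gain I restores the signal level
    return lfilter(h, 1.0, stuffed)       # d_I(n'), at rate I * F_S

def resample_step(d_I, n, Delta, offset, offset_inc, y_n):
    """One sampling interval: offset update (4), wraparound (5),
    index (3), and the 2K+1 fine-resolution errors (9)."""
    offset += offset_inc                  # (4)
    if offset >= I:                       # (5): the wrap itself realizes a
        offset -= I                       # one-base-sample slip in d_r
    elif offset < 0:
        offset += I
    n_p = Delta + n * I + int(round(offset))   # (3)
    k = np.arange(-K, K + 1)
    e = d_I[n_p + k] - y_n                # errors (9); the main one is at k = 0
    return e, offset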
Thus, it is necessary to monitor (10) and keep track of the actual position of its minimum. Since it is impossible to find ensemble means in practice, (10) has to be approximated, for example, by time averages. What we adopt is first-order smoothing over time: ē(n, k) = β ē(n − 1, k) + (1 − β) e_I²(n' + k) (12), where β ∈ (0, 1) is close to 1. Note that the relation between the time indices n and n' in (12) is defined by (3). Next, a parabola f(n, k) that fits the elements in (12) in the least-squares sense is found. If f(n, k) is convex, as expected and as indicated by condition (13) of Appendix B, then a finite minimum of it exists, as illustrated in Figure 5, located at inc_inst(n) = −b(n)/(2a(n)) (14), where a(n) and b(n) are the fitted parabola's coefficients (Appendix B). This is a candidate for offset_inc(n). Due to the presence of the target signal s(n), the ambient noise, and uncancelable interference, (i) equation (14) may be too noisy to be used directly as offset_inc(n) in (4), and (ii) it is possible for f(n, k) to be nonconvex, indicated by (13) not being satisfied, in which case (14) is not meaningful. Thus, offset_inc(n) is found by using a smoothing operation over many sampling intervals (15), where μ is a small positive step size. Finally, the interference-reduced system output is the main error in (9); that is, e(n) = e_I(n') (16). We now address the issue of selecting the interpolation factor I. As seen, the resolution of the timing drift compensation is 1/I of a sampling interval. For the sake of reducing implementation complexity, a small value of I is beneficial. It is then necessary to find the smallest I that does not sacrifice perceptible cancellation performance. Through some manipulations, Appendix C gives a guideline (17) in terms of TR, the wanted ratio (in dB) of the level of d(n) to the level of the tolerable adjustment errors; that is, the errors should be TR dB lower in level than the primary input. Experiments suggest that TR = 30 dB, which results in I = 100, gives an adequate tradeoff between performance and complexity. Note that, although 2K + 1 errors are calculated in (9), the added complexity is quite small since there is only one adaptive filter. Another remark is that the upsampling of {d(n)} by a seemingly large factor of I = 100 is mainly conceptual. In reality, only the 2K + 1 interpolated values in (8), as opposed to all those in (1), need to be calculated and, for each of them, 99% (for I = 100) of the input samples to the 10208-coefficient FIR interpolation filter are zeros. Thus, the polyphase filtering technique [9] is adopted so that the computation load is minimized. Ratchet FAP. Although any adaptive filter could potentially be used in Figure 3, one adopting the Ratchet FAP algorithm [10] is chosen. This is because (a) a FAP can converge an order of magnitude faster than the most commonly used NLMS while being only marginally more complex, and (b) the Ratchet FAP is superior to other FAP algorithms in terms of performance and stability. In addition to adaptive interference cancellation, the Ratchet FAP can also find applications in echo cancellation, source separation [11], hearing aids, and other areas in communications and medical signal processing. The Ratchet FAP used in this application incorporates an algorithm that dynamically optimizes the regularization factor so that it is just large enough to assure stability of the implicit matrix-inversion process associated with the FAP. See [12] for further information.
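The drift-estimation core, (12)-(14), amounts to smoothing the squared errors and fitting a parabola. A minimal sketch follows; the update form used for offset_inc is one plausible reading of the smoothing (15), and all parameter values are assumptions for illustration only.

import numpy as np

K, beta, mu = 15, 0.999, 5e-6
k = np.arange(-K, K + 1)
mse = np.zeros(2 * K + 1)            # time-smoothed squared errors, cf. (12)
offset_inc = 0.0

def drift_update(e):
    """e: the 2K+1 instantaneous errors from (9)."""
    global mse, offset_inc
    mse = beta * mse + (1.0 - beta) * e ** 2        # first-order smoothing (12)
    a, b, _ = np.polyfit(k, mse, 2)                 # f(k) = a k^2 + b k + c
    if a > 0:                                       # convexity test, cf. (13)
        inc_inst = -b / (2.0 * a)                   # vertex of the parabola (14)
        offset_inc += mu * (inc_inst - offset_inc)  # slow smoothing, cf. (15)
    return offset_inc

When the fit is nonconvex (a ≤ 0), no update is made, matching the remark above that (14) is then not meaningful.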
Peak Position Adjustment. An important issue with such a time-drifting application of adaptive filtering is that the coefficients of the adaptive filter may drift over time, even after convergence. Corresponding approximately to the filter's group delay, the main part of the coefficients that needs to be considered is typically a small, contiguous set of coefficients with large magnitudes. If this part moves close to the beginning or end of the range spanned by the adaptive filter, the interference-reduction performance may significantly degrade. To circumvent this, the position of the main part of the coefficients is constantly monitored and adjustments are performed when necessary. This position is estimated, in a manner similar to how a "center of gravity" is estimated, by P_m(n) = Σ_{i=0}^{L−1} i |w_i(n)|^q / Σ_{i=0}^{L−1} |w_i(n)|^q (18). In (18), the subscript m stands for "main," and {w_0(n), w_1(n), ..., w_{L−1}(n)} are the L coefficients of the Ratchet FAP adaptive filter in Figure 3. Equation (18) with the parameter q = 1 gives the position of the center of magnitudes (center of mass), with q = 2 it gives the center of energy (moment of inertia), or the filter's group delay, and with q = ∞ it gives the index of the coefficient with the largest magnitude. In our experiments, q = 4 is used in order to take into account both the group delay and large peaks. Next, (18) is compared against a target range of values that can be determined heuristically. If the deviation is significant enough, then realignment adjustments, with a step of one sample every preset number of sampling intervals, are made until the deviation lies within the target range. The realignment adjustments require changes to (i) the read pointer for x(n) (Figure 3); (ii) the coefficients of the adaptive filter, which are shifted one sample to the left or right (depending on the need) with a zero appended to the opposite end; and (iii) the autocorrelation matrix estimate of the Ratchet FAP adaptive filter, whose sums also need to be shifted and properly appended accordingly. Further incidental implementation details are needed, but these are omitted here for brevity. A remark about the read-pointer adjustment mentioned above is that, in a real-time implementation, such adjustments may have serious consequences, as over- or underflow of the input buffers can occur. This problem is common in telecommunications (see Section 1), and there are techniques to circumvent it. However, this topic is beyond the scope of this paper; our purpose is to propose an algorithm framework, and all processing presented in Section 3 has been done offline so that the over- or underflow issue is avoided. About Adaptation Control. It is normally necessary for an adaptive system such as the DCAF to have an adaptation control to prevent the adaptive subsystems from potentially diverging when the target signal s is active. This could be done by nullifying the two step sizes, for example, μ in (15) and that of the Ratchet FAP. The detection of this condition is called "double-talk detection" in the literature on echo cancellation.
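The estimate (18) is a one-line weighted average. A small sketch, with assumed names, using q = 4 as in the paper:

import numpy as np

def main_part_position(w, q=4):
    """Center-of-gravity-style position of the filter's main part, cf. (18)."""
    mag = np.abs(w) ** q
    return np.sum(np.arange(len(w)) * mag) / np.sum(mag)

With q = 1 this is the center of magnitude, with q = 2 the center of energy, and as q grows it approaches the index of the largest coefficient, matching the limiting cases listed above.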
Contrary to this, no adaptation control is implemented in the current DCAF scheme because, in this application (see Section 1), the interference and the target can be active simultaneously most of the time. This leaves very little "single-talk" (no-target) time in which the adaptation systems could adapt quickly and reliably. Indeed, the system the DCAF tries to approximate is expected to change only slowly, and so the adaptation is allowed to take place full-time (i.e., even during double talk) but with very small step sizes. The resultant DCAF scheme is a compromise between convergence speed and immunity to the target signal. It could be a future research topic to find a way of optimally controlling the step sizes in conjunction with double-talk detection. Experiments The proposed DCAF scheme has been evaluated with real-room signals combined under simulated conditions. The real-room signals use recording and playback devices having different timing accuracies. The sampling frequencies used are (nominally) 8, 16, 44.1, and 48 kHz. A subjective evaluation to characterize the intelligibility improvement has been performed; its process and results are reported in Section 3.3. Simulated Conditions. Test cases are prepared using recorded radio broadcast signals filtered with 740 ms long room impulse responses, which were measured in a large meeting room. The timing drifts are created by properly controlled resampling and delaying of the primary or reference input. Table 1 lists several test cases, all with a 16 kHz sampling frequency, a 120-second duration, and a signal-to-interference ratio in {d(n)}, before processing, of −1.4 dB. In the DCAF scheme, the Ratchet FAP adaptive filter has L = 2000 coefficients (125 ms) and an affine projection order N = 5. The normalized step size α of the adaptive filter starts with a relatively large value of 0.050-0.100 and diminishes to 0.005-0.010 after initial convergence. In the drift compensation part, the interpolation factor is I = 100, the parameter K = 15, and the step size μ in (15) is either equal to 0 or in the approximate range of 5×10^−6 to 10^−5. When μ = 0, the drift compensation part (Section 2.1) is disabled, so that the DCAF falls back to a conventional adaptive interference cancellation scheme. Note that, in order to estimate the amount of interference reduction accurately, the energy (sum of squares of all samples over the entire test-case period) of the target signal (which is known, since simulated conditions are dealt with) is subtracted from the energies of {d_r(n)} and {e(n)} before the figures in Table 1 are calculated. Table 1 indicates that the DCAF scheme can reduce the interference by 7-11 dB. When the drift compensation part is disabled, the DCAF falls back to a conventional algorithm. In that case, it is not capable of handling these timing drifts. Consequently, little interference reduction is observed, as shown in Table 1. Consider Test Case 3 in Table 1 as an example. The rate of the timing drift between the two inputs goes linearly from 0 to 1% in 60 seconds and back to 0, again linearly, in the next 60 seconds. Figure 6 shows that the DCAF has correctly estimated that rate. In Test Case 5, another example, the rate of the timing drift between the two inputs varies according to a sinusoidal pattern. It can be seen in Figure 7 that it takes some time for the DCAF to initially catch up to the timing drift. Once the initial alignment has been achieved, the algorithm stays in synchronization.
It is clearly seen in Figures 6 and 7 that offset_inc(n) is still quite noisy despite the smoothing operations (12) and (15). This phenomenon has also been observed in the other test cases in Table 1. It is believed to be attributable to the presence of the strong target signal plus ambient noise (only 1.4 dB below the interference) and uncancelable interference, as discussed in Section 2.4. This will be verified by the next test case, in Section 3.2. Real Room with Real Recording and Playback Devices. With the primary input recorded in real rooms by real recording and playback devices having different speeds, these tests aim at verifying the performance of the DCAF in real life. Figure 8 illustrates the recording setup in an ordinary office room. The portable CD player plays the digitally stored interfering speech x(n) at a slightly lower sampling rate than that of the PC sound card used to digitize the primary input to get d(n). In this test scenario, the target signal s is the steady ambient noise, resulting mostly from equipment and ventilation fans in the room. It has a level 19 dB below that of the interference x introduced by the loudspeaker. The primary input d(n) is sampled at 8 kHz and has a duration of 900 seconds. In the DCAF, the Ratchet FAP adaptive filter has L = 1000 coefficients (125 ms) and a step size α = 0.05 throughout the entire period. Other parameters are the same as those used in Section 3.1. It is observed that the interference reduction is only 2.1 dB if μ = 0 (drift compensation disabled) and reaches 19.3 dB if μ = 5×10^−6. Figure 9 shows that, after a few seconds of initial learning, the DCAF estimates a timing drift rate of around 0.066%, and this value rises slightly to around 0.07% towards the end of the run. This rise is thought to correspond to the variation of the actual timing drift rate over the 900-second period. In this test case, the target signal plus the ambient noise and the uncancelable interference are much lower in level than was the case in Section 3.1. This explains why the estimate of offset_inc(n) is much less noisy. With other real-life signals, recorded in rooms and by devices different from those used for Figure 8, the interference reduction is consistent with the cases with simulated conditions (Section 3.1) when the magnitude of the timing drift rate is not very large, for example, no more than 0.5%. When an analog cassette audio recorder/player is used, the observed magnitude of the varying timing drift rate can be as large as 3%. It has been observed (but not reported in detail here) that, although the DCAF still converges and tracks the drift, the interference-reduction performance degrades when the timing drift rate reaches such a large magnitude. For example, the interference reduction can be only around 1 or 2 dB and is barely perceivable by human ears. It is believed that the relatively severe wow-and-flutter of the particular analog device used, not just the large magnitude of the timing drift rate, may have contributed to the performance degradation. Fortunately, wow-and-flutter is virtually nonexistent with modern digital devices. Subjective Evaluation. To assess the performance of the proposed DCAF scheme in terms of improved intelligibility, subjective tests were conducted with 25 individuals. The intelligibility of test signals is compared for three processing conditions: (a) no processing, (b) processing with the DCAF, and (c) processing conducted by an acoustic forensic expert using conventional methodologies.
The test signals consist of target male-spoken English sentences (the IEEE "Harvard sentences" [13]) with interfering speech babble. The target and interfering signals are processed through room impulse responses from different locations within the same room and then mixed to a specified signal-to-interference ratio (SIR). A time-varying timing drift is applied to the mixed signals using two drift patterns: a sinusoidal variation with a period of 60 s and a peak change in sampling rate of 0.04%, and a pseudorandom variation with peaks of about 0.025%. These timing drifts are imperceptible in normal listening but have a significant impact on conventional interference cancellation. The leading and trailing portions of the processed test signals are discarded to ensure algorithm convergence and avoid any possible end effects. To examine the variety of test conditions, each subject is presented with 100 randomized test sentences. Each test sentence is padded with interference to a fixed duration of 4.5 s. After listening to each sentence, the subject repeats back the words that were understood, and the fraction of words correct is recorded. The intelligibility is shown in Figure 10 as a percentage of words correctly understood, for the selected SIR values and the three processing conditions. Error bars indicate the standard deviation of the observed data. At all tested SIR, the proposed DCAF scheme provided very good intelligibility, even though the conventional processing provided little or no intelligibility improvement at lower SIR. Some Discussion. The DCAF algorithm can, in principle, accommodate any timing variation between the reference and primary inputs as long as it is relatively slow. Therefore, there should be a limit on the rate of acceleration or deceleration of the timing drift (i.e., the rate at which the timing drift rate varies) that the DCAF can track. Although there are no comprehensive characterization data available at this time, observations suggest that the DCAF can achieve noticeable interference reduction for acceleration rates as large as ±1% per 60 seconds at a 16 kHz sampling rate, as seen in Test Cases 3 and 4 in Table 1. In other words, the timing drift rate changes by 1% over a period of 60 × 16000 samples. A way of expressing the magnitude of this acceleration of the timing drift (in "units" of "offset in samples"/sample²) is therefore 0.01/(60 × 16000) ≈ 1.04 × 10⁻⁸. Increasing the step size μ in (15) to a value beyond that used in our experiments, which is 5×10⁻⁶, may improve the above tracking-performance index, but at the expense of reduced noise immunity of the DCAF. Summary By adopting a unique estimation and compensation mechanism, a drift-compensated adaptive filtering (DCAF) scheme is proposed. The scheme makes it possible for an adaptive interference canceller to survive time-varying timing drifts between its two inputs to a degree large enough to accommodate the timing accuracy variations of most audio recording and playing devices nowadays. By contrast, conventional schemes typically fail completely under even small timing drifts. The DCAF scheme is suitable for applications in which the reference and primary inputs may be asynchronous with each other. Example applications include certain surveillance scenarios, network echo cancellation for voice-over-IP networks, and software acoustic echo cancellation implemented on personal computers.
B. Least Squares Curve Fitting Here, we prove the validity of (13) and (14). The parabolic curve f(n, k) illustrated in Figure 5 can be defined by the parameters {a(n), b(n), c(n)}: f(n, k) = a(n) k² + b(n) k + c(n) (B.1). To find the parameters that make (B.1) approximate the 2K + 1 estimates in (12) in a least-squares sense, we minimize the nonnegative cost function J(n) = Σ_{k=−K}^{K} [f(n, k) − ē(n, k)]² (B.2) by letting its partial derivatives with respect to the three parameters {a(n), b(n), c(n)} be zeros. This leads to a system of linear equations (B.3) involving the power sums S_m = Σ_{k=−K}^{K} k^m. The antisymmetry property makes S_m = 0 for all m odd; therefore, (B.3) simplifies accordingly. Given that S_2 = K(K + 1)(2K + 1)/3, the system can be solved for a(n), b(n) and c(n). The fact that (B.7) and (B.8) (the latter being equivalent to (13)) are positive indicates that (B.1) is convex. If so, a finite minimum of (B.1) exists and is at inc_inst(n) = −b(n)/(2a(n)), which is (14). C. Choosing Interpolation Factor We now study how to choose the interpolation factor I based on how the adjustment errors resulting from it degrade the noise performance of the DCAF scheme. The resolution of the timing drift compensation is 1/I of a sampling interval, so we must choose I to be large enough that k fluctuating by ±1 in the vicinity of k = k_opt does not lead to a perceptibly significant performance degradation. This is expressed as (C.1), where σ_T² is the tolerable power of the adjustment errors. For example, if σ_T² is below a just-noticeable threshold, (C.1) assures that a ±1 error in k around k_opt is not audible. Given TR = 30 dB, this results in a choice of I = 100.
Figure 6: Actual and estimated rates of timing drift for Test Case 3.
Figure 7: Estimated rate of timing drift for Test Case 5.
Figure 9: Estimated rate of timing drift for room recording with ambient noise but no target signal.
Table 1: DCAF's performance without and with timing drift compensation (simulated conditions).
2014-10-01T00:00:00.000Z
2010-02-01T00:00:00.000
{ "year": 2010, "sha1": "0036acfc6bff781136f40cd5c56699eb43588237", "oa_license": "CCBY", "oa_url": "https://asp-eurasipjournals.springeropen.com/counter/pdf/10.1155/2010/621064", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "0036acfc6bff781136f40cd5c56699eb43588237", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
42465245
pes2o/s2orc
v3-fos-license
Fertility, Microbial Biomass and Edaphic Fauna Under Forestry and Agroforestry Systems in the Eastern Amazon Introduction In many countries the rate of deforestation is accelerating. For example, many forest areas of Bangladesh, India, the Philippines, Sri Lanka and parts of the rainforest in Brazil could disappear by the end of the century (GLOBAL CHANGE, 2010). The primary forest, especially in tropical countries such as the Philippines, Malaysia and Thailand, as well as in Brazil, began to be destroyed because the growth of agricultural expansion caused a significant decrease in natural resources. Over the past 50 years in the Philippines, there was a loss of 2.4 acres of vegetation every minute, which is attributed to two factors: the growth of agriculture and illegal logging. The model of agriculture practiced in Brazil contributes significantly to the expansion of the agricultural frontier and to increases in the production and productivity of national agriculture and livestock. However, this performance has led to a great reduction of native forest cover and, consequently, of the supply of products of forest origin, besides exposing the land to loss of fertility, erosion processes and water pollution. In the northeast of Pará state, as in other regions of the Amazon, intense agricultural activity, with emphasis on removal of the primary forest for pasture establishment, slash-and-burn agriculture, and indiscriminate, disorderly logging driven by economic activities, has been a major factor accelerating the process of soil alteration. Reducing land degradation requires the use of conservative techniques and the identification of the most profitable activities in the region, allowing economically viable and environmentally sustainable agriculture to coexist harmoniously with the environment (SOUSA et al., 2007). The challenge is to identify the correct combinations of species that establish ideal synergistic relationships, so as to ensure key ecological services such as nutrient cycling, biological control of pests and diseases, and conservation of soil and water (CARDOSO et al., 2005). In the state of Pará, reforestation with native and exotic species reaches high levels due to the great adaptability of these species to degraded soils. The results obtained, whether in monoculture or in agroforestry systems, have been effective in the recovery of deforested areas, providing excellent outcomes both for this purpose and for commercial use, allowing a decrease in pressure on the primary forest and improving the quality of life of the populations where this occurs (CORDEIRO, 1999; MONTEIRO, 2004; RUIVO et al., 2007).
Although there are numerous studies on the growth and development of native species (CARVALHO, 2004; CORDEIRO, 2007; JESUS, 2004; LORENZI, 2002), comparative studies of these species under different plantation systems, and of the nutritional, microbiological and biochemical behavior of the soil, are not commonly found in the literature; likewise, the influence on soil quality of cover under different systems involving these plant species is still poorly understood. The addition of organic matter to the soil, through the permanence of plant residues, creates an enabling environment for better plant development, enhancing microbial activity and consequently the nutritional conditions of the soil. Based on this assumption, this research was conducted in the municipality of Aurora do Pará to identify modifications in the physical, chemical and biological properties of soils in areas reforested under forestry and agroforestry cropping systems, and their impact on the edaphic fauna. Localization and characterization of the study area The study was conducted at the Tramontina Belém S/A farm, located in the municipality of Aurora do Pará (Figure 1), which belongs to the Northeast Pará Mesoregion and the Bragantina Microregion. This area suffered intense anthropogenic changes in the last 50 years due to high extractivist activity, food production and livestock, which almost completely decimated its natural vegetation. Over the years a secondary forest (locally known as capoeira) developed. Despite being considered an environmentally impacted zone, this area supplies food to the capital, mainly grains, greens and vegetables. The work was developed on a former cattle ranch acquired by a domestic-utensils industry, which was reforested for the sustainable and controlled economic exploitation of forest species. The current vegetation is divided into areas of abandoned pasture (livestock), dominated by quicuio-da-amazônia (Brachiaria humidicola) among other invasive species, agroforestry systems consisting of native species, mainly Mogno (Swietenia macrophylla King), Paricá (Schizolobium parayba var. amazonicum Huber ex Ducke) and Freijó (Cordia goeldiana Huber), a few exotic species, such as eucalyptus (Eucalyptus sp), and small areas of secondary forest (capoeira) started around 40 years ago, whose seeds have been used for reforestation with native species. The soil of the selected capoeiras was used as a standard for comparison with the reforested soils. According to Thornthwaite (1948), the climate in the studied area is classified as type Br A'a ("humid tropical"). The average annual rainfall is 2,200 mm, not equally distributed throughout the year; the period from January to June concentrates most of it (OLIVEIRA, 2009). The average temperature and relative atmospheric humidity are 26 °C and 74%, respectively (CORDEIRO, 2007; CORDEIRO et al., 2009). Studies conducted in Brazil (CORDEIRO et al., 2009, 2010) classified the soil in the Aurora do Pará area as a sandy-clay Yellow Latosol, with the occurrence of concretionary laterite levels in some areas, hydromorphic soils along streams, and plain to gently rolling relief inserted on the plateau lowered from the Amazon.
The nutritional characteristics described in these studies show that these soils have a low supply of available essential nutrients and a low content of organic matter (CORDEIRO et al., 2010). Cropping systems studied Since the 1990s, around 1,043 ha have been planted under different types of reforestation, using species such as Mogno (Swietenia macrophylla King), of great commercial value abroad, Ipê (Tabebuia heptaphyta Vellozo), Cedro (Cedrella fissilis Vellozo) and Jatobá (Hymenaea intermedia Ducke var. adenotricha (Ducke) Lee & Lang.). Since 1994, Paricá (Schizolobium parayba var. amazonicum Huber ex Ducke) has been the species of high commercial value used in reforestation, because of its applicability in the production of laminates. In 1996, Freijó (Cordia goeldiana Huber) was introduced because of its high commercial value in Europe. In 2003, Curauá (Ananas comosus var. erectifolius L.B. Smith), a bromeliad, was included in the reforestation process in the Tramontina area; in the Amazon, it is most concentrated in the municipality of Santarém, besides the regions of the Xingu, Tocantins, Maicuru, Trombetas, Paru, Acará and Guamá rivers. In Pará state, Curauá stands out in the Bragança and Santarém districts (OLIVEIRA, 2009). At planting, organic fertilization was performed with corral manure (500 g/pit) and chicken bedding (150 g/pit) for the agronomic and forest species, respectively. In the first year of the forest planting, three fertilizations were performed, at 45, 180 and 300 days, using 150 g/plant of the formula NPK 10-20-20. In the Curauá planting, 10 g/plant of the formula NPK 10-10-10 was used at the beginning and end of the rainy season, in the first two years of planting. The systems studied were: S1, monoculture of Curauá; S2, agroforestry system with Paricá and Curauá; S3, agroforestry system with Paricá, Mogno, Freijó and Curauá; and S4, monoculture of Paricá. In the cropping system with Curauá (S1), the agroforestry system with Paricá, Mogno and Freijó (S3) and the cropping system with Paricá (S4), only grass cutting was performed, with the cuttings left on the soil of these cropping systems; none of them was irrigated. In the agroforestry cropping system with Paricá/Curauá (S2), fertilization was made with corral manure (500 g/pit) and chicken bedding (150 g/pit) (CORDEIRO et al., 2009). Collection and preparation of the soil samples for physical, chemical and biological analysis In all locations, disturbed and undisturbed soil samples were collected in December 2009. Samples were collected by opening mini-trenches from which soil was extracted at the depths of 0-10, 10-20 and 20-40 cm, along transects in previously determined areas. In each study area, 3 composite soil samples were formed from 5 single samples each, stored in plastic bags and kept in cool boxes containing ice to stop or slow microbial activity. The chemical, physical and biological analyses were made by technicians in the soil laboratory of the Museu Paraense Emílio Goeldi (MPEG). Collection, preparation and identification of the soil fauna Pedofauna collection was performed using pitfall traps (Figure 2). The traps consisted of plastic containers (8 cm × 12 cm) buried in the soil to a depth of 12 cm, with the open top leveled with the soil surface, where they remained for three days (AQUINO et al., 2006).
In each plot of each treatment, at the same depth, four (04) traps were placed, and inside each of them 60 ml of a preservative solution was added: 70% alcohol and distilled water (3:1 ratio with respect to the alcohol), biodegradable detergent (3 drops) and formaldehyde (10 ml). The fall of undesirable objects was prevented with a polystyrene cover plate supported by small wooden rods (AQUINO et al., 2006). After collection, the edaphic fauna was taken to the laboratory, where the material was sieved (0.2 mm) to remove plant fragments and soil residues. The identification of the edaphic fauna was carried out to the level of Order, with the aid of a stereomicroscope and the specific literature (BORROR & DELONG, 1988; BARRETO et al., 2008). Determination of the physical and chemical characteristics of soil The granulometric composition was determined by the densimeter method (EMBRAPA, 1997), and the textural classification of the soil in each system was performed using the textural triangle (LEMOS & SANTOS, 2006). The soil density (Ds) was determined by the Kopecky-type volumetric ring method. In the characterization of the soil, the following measurements were performed: total N, by distillation in semimicro Kjeldahl (BREMNER & MULVANEY, 1982); pH, by potentiometer in a 1:2.5 soil:water ratio; organic C, by the volumetric method of oxidation with K2Cr2O7 and titration with ferrous ammonium sulphate; exchangeable Ca, Mg and Al, in KCl 1 mol L−1 extractor and measured by atomic absorption; exchangeable K and Na, in Mehlich-1 extraction solution with determination by flame photometry; available P, in Mehlich-1 extraction solution with determination by colorimetry; and H + Al, extracted with calcium acetate 0.5 mol L−1 at pH 7.0 and determined volumetrically with NaOH solution. From the values of potential acidity (H + Al), exchangeable bases and exchangeable aluminum, the total cation exchange capacity (CTC) and the effective cation exchange capacity (CTCe) were calculated. The C/N ratio of the soil and the organic carbon stock (EstC) were also calculated, the latter using the formula EstC = Corg × Ds × e/10, according to Freixo et al. (2002). Determination of carbon (CBM) and nitrogen (NBM) of the soil microbial biomass We used the fumigation-extraction method to estimate microbial biomass carbon (CBM) (VANCE et al., 1987; TATE et al., 1988). The determination of microbial biomass carbon of the fumigated and non-fumigated extracts was made by titration (dichromatometry), according to De-Polli & Guerra (1999). For the CBM calculation, the C content of the non-fumigated samples was subtracted from that of the fumigated samples, the difference being divided by the factor kc = 0.26 (FEIGL et al., 1995). The estimate of Nmic was made from Kjeldahl digestion. The correction factor (Kn) used for the calculation was 0.54 (BROOKES et al., 1985; JOERGENSEN & MUELLER, 1996). From the original values, the relations between Cmic and the Corg of the soil (Cmic/Corg), and between Nmic and the total N of the soil (Nmic/Ntotal), were calculated by the following equations: (Cmic/Corg) × 100 and (Nmic/Ntotal) × 100, respectively. These indices indicate the fractions of Corg and Ntotal that are incorporated in the microbial biomass, expressing the quality of the soil organic matter (GAMA-RODRIGUES, 1999).
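As a worked illustration of the indices just defined, the snippet below computes EstC, CBM, NBM and the microbial quotient from made-up input values; the numbers are purely hypothetical, and only the constants (kc = 0.26, Kn = 0.54, the factor 10 in EstC) come from the text.

corg = 12.0                    # organic C, g/kg (hypothetical)
ds = 1.4                       # soil density Ds, g/cm^3 (hypothetical)
layer = 10.0                   # layer thickness e, cm
est_c = corg * ds * layer / 10.0        # carbon stock EstC, Mg/ha -> 16.8

c_fum, c_nonfum = 95.0, 60.0            # extractable C, mg/kg (hypothetical)
cbm = (c_fum - c_nonfum) / 0.26         # microbial biomass C, kc = 0.26
n_fum, n_nonfum = 14.0, 8.0             # extractable N, mg/kg (hypothetical)
nbm = (n_fum - n_nonfum) / 0.54         # microbial biomass N, Kn = 0.54

q_mic = cbm / (corg * 1000.0) * 100.0   # (Cmic/Corg) x 100, Corg in mg/kg

With these inputs, CBM ≈ 135 mg/kg and qMIC ≈ 1.1%, inside the 1-4% range that Jenkinson & Ladd (1981), cited in the results below, consider normal.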
Determination of basal respiration of microbial biomass and the soil metabolic quotient The basal respiration was estimated by the amount of C-CO2 released within 10 days of incubation (JENKINSON & POWLSON, 1976). This technique allows the determination of the soil microbial activity, quantified from the evolution of the CO2 produced by the respiration of the microorganisms in samples free of roots and possible insects. The metabolic quotient (qCO2) is calculated as the ratio between the rate of basal respiration and the microbial biomass carbon (ANDERSON & DOMSCH, 1993). Statistical analysis Two-way ANOVA was used to verify differences between the cropping systems studied. When a significant difference (5%) was found, the averages of each variable were compared by the Tukey test (p < 0.05). As additional analyses, principal component analysis (PCA) and cluster analysis were performed to determine the degree of correlation between the physical, chemical and biological data, grouping the soils according to the variation of their characteristics by means of multivariate analysis. Physical and chemical properties of soil under forestry and agroforestry systems The production systems studied showed differences in soil physical properties. The soil in cropping systems S1 (monoculture of curauá), S3 (agroforestry system with paricá, mogno, freijó and curauá) and S4 (monoculture of paricá) is classified as sandy loam; only S2 (agroforestry system with paricá and curauá) presented a sandy clay loam soil. With regard to the clay content, in cropping system S2, high levels were detected at the 0-10 cm depth when compared with the other systems at this same depth, as shown in Table 2. A study conducted in the humid tropics (LAVELLE et al., 1992) showed a lateral variation in soil granulometry and, according to the researchers, this may influence the capacity of formation of stocks of exchangeable cations on the surfaces of colloids, in this case clay minerals. Thus, the results found in this study for the S2 cultivation system may be indicative of an improved capacity for formation of exchangeable cations. The lowest contents of the clay fraction occur in cropping systems S1 (monoculture of curauá), S3 (agroforestry system with paricá, mogno, freijó and curauá) and S4 (monoculture of paricá). The lowest average value was found in cropping system S4 compared with the other cropping systems studied. Freire (1997) reports that the natural fertility of the soil depends on the adsorptive capacity of clay minerals and organic colloids; accordingly, it is possible to state that in cropping system S4, despite the low clay content, the adsorption capacity and the organic colloids are in a balance that allows the maintenance of the natural soil fertility of this cropping system. The silt/clay ratio proved to be higher in cropping system S3 (Table 2); this demonstrates that the degree of weathering in this area decreases with depth, i.e., the degree of weathering of the soil is high, as occurs in Latosols. The analysis of soil density showed that, among the systems studied, the soil is densest in cultivation system S2 (agroforestry system with paricá and curauá). But, as pointed out by Santana et al. (2006), density can be an attribute for analysis of the cohesion of soil horizons; however, there is a limitation to such use, i.e., soil density suffers interference from granulometry and can present high values that would correspond to cohesive horizons, and this would affect the penetration of plant roots.
Physical and chemical properties of soil under forestry and agroforestry systems

The production systems studied showed differences in soil physical properties. The soil in cropping systems S1 (monocultivation with curauá), S3 (agroforestry system with paricá, mogno, freijó and curauá) and S4 (monocultivation with paricá) is classified as sandy loam, and only in S2 (agroforestry system with paricá and curauá) is the soil a sandy clay loam. With regard to the clay content in cropping system S2, high levels were detected at the 0-10 cm depth when compared with the other systems at the same depth, as shown in Table 2. A study conducted in the humid tropics (LAVELLE et al., 1992) showed a lateral variation in soil granulometry; according to the researchers, this may influence the capacity of formation of stocks of exchangeable cations on the surfaces of colloids, in this case clay minerals. The results found for the S2 cultivation system may therefore indicate an improved capacity for the formation of exchangeable cations. The lowest clay fraction contents occurred in cropping systems S1, S3 and S4, with the lowest average found in S4 compared with the other cropping systems studied. Freire (1997) reports that the natural fertility of the soil depends on the adsorptive capacity of clay minerals and organic colloids; thus it is possible to affirm that in cropping system S4, despite the low clay content, the adsorption capacity and organic colloids are in a balance that allows the natural soil fertility of this system to be maintained. The silt/clay ratio proved to be higher in cropping system S3 (Table 2), which indicates that the degree of weathering in this area decreases with depth, i.e., the degree of soil weathering is high, as occurs in Latosols. The analysis of soil density showed that, among the systems studied, the soil is densest in cultivation system S2 (agroforestry system with paricá and curauá). However, as pointed out by Santana et al. (2006), although density can be used to analyse the cohesion of soil horizons, there is a limitation to such use: soil density is influenced by granulometry and can present high values corresponding to cohesive horizons, which would affect plant root penetration. In this respect, it was verified that in cropping system S2 the density is more pronounced between 10 and 20 cm when compared with the other systems. In addition, based on Santana et al. (2006), we can consider this soil "cohesive", according to the results contained in Table 2. The results indicate very low acidity (pH > 4.5), and as clay minerals react with soil water, absorbing H+, this may explain the variation in acidity in cropping system S2, which presented a statistically significant effect. It was verified that in cropping systems S1 (curauá monocultivation) and S3 (agroforestry system with paricá, mogno, freijó and curauá) the pH value, on average, showed no statistically significant effect (Table 3). This can be explained by their total clay and sand contents, which are also equivalent, as shown in Table 2. In cropping system S4 (paricá monocultivation), the pH was constant at the three depths, possibly due to the small variations in sand and clay contents, which were lower than in the other systems; this may explain the decrease in pH relative to cropping system S2 and the slight increase relative to S1 and S3 (Table 2).

The Corg contents in cultivation system S2 (agroforestry system with paricá and curauá) were higher than in the other treatments evaluated (Table 3), even with scarce vegetation cover. A study conducted in Amazonian Oxisols after conversion to pasture (Silva Junior et al., 2009) showed that carbon concentrations are high in clayey soils, independent of vegetation cover. However, the most plausible explanation is provided by Cordeiro et al. (2009), who report that the area where this cropping system is located was fertilized with cattle manure (500 g/pit) and chicken litter (150 g/pit). This shows that even on degraded land, the use of organic cover helps soil fertility and improves soil quality (Monteiro, 2004). In the systems studied, we observed that the Corg content decreases with depth, following the clay content (Table 2); at the 0-10 cm depth, the Corg contents at the soil surface of the studied systems are high. Research on the relation between Corg content and soil depth (Desjardins et al., 1994; Koutika et al., 1997) showed that Corg content tends to decrease with increasing depth, a pattern also observed in our study. A study carried out on a toposequence in central Amazonia (Marques et al., 200) reported that carbon contents are high in the surface layers down to 25 cm (4.48 ± 0.08); the high Corg contents found in the cropping systems studied corroborate that assertion. The highest average carbon stock was found in cultivation system S2 (Table 2), where the highest granulometric averages, especially of clay, and the highest Ds were found. In descending order, cultivation system S1 had the second-highest average. In cultivation systems S3 and S4 the average carbon stock decreases, although their clay content is close to that of cropping system S1; their sand content averages range from 800 to 840 g/kg, and the variation in average clay content is between 90 and 96 g/kg (Table 1).
This may be one explanation for the decrease in the average carbon stocks of these treatments compared with cropping system S2.

Relation between physical and chemical attributes

The correlation (r) between the high Corg content and the clay and sand fractions in cropping system S2 showed an increasing Corg content in this system. A study on the relation between Corg and clay content (TELLES, 2002) explained that a high clay content allows the formation of macroaggregates and microaggregates that physically protect the Corg and prevent its rapid decomposition. Correlating this with what was obtained in our study, the same argument can explain the high Corg content found in the S2 culture system. As for the carbon stock (Cs), the results showed that this stock decreases with increasing depth in the cropping systems studied. The hierarchical cluster analysis (HCA) revealed an approximate relation between the physical and chemical attributes of the soils analysed; according to Barreiro & Moita Neto (1998), this suggests a correlation between the variables of the data set. The results (Figure 3) show the formation of two groups, with group A splitting into subgroups: within this group there was a correlation between pH and Ds (group A1), between Ds and silt/clay (group A2), and between silt/clay and Corg (group A3). The other group, formed by silt and clay (group B), confirms that these two variables are heterogeneous with respect to those of group A. These results show that there is variability in the functioning of the studied soils in cropping systems S1, S2, S3 and S4. S3 and S4 are more similar in terms of pH and the silt/clay ratio, while cultivation system S1 showed lower similarity with S3 and S4 for the same attributes. Thus, one can identify which variables differentiate the studied systems from one another and may or may not interfere with the edaphic fauna.

Microbiological attributes

The average content of microbial biomass carbon (CBM) (Figure 4) and the values of the microbial quotient (qMIC) (Figure 5) were higher in systems S4 and S5. In system S4 the soil was covered with coarse plant residues, in addition to spontaneous plant regeneration between the paricá planting rows; such factors may have favoured the maintenance of microorganisms in the soil and therefore increased the microbial carbon content. For the qMIC attribute, very low values were obtained, except in the S4 system, where the highest qMIC values were recorded, especially in the first soil depth. Jenkinson & Ladd (1981) considered it normal for 1-4% of total soil C to correspond to the microbial component; as the collection was done during the dry season, and water is an important element for microbial activity, it is possible that the low values are explained by this fact. Overall, the results indicate that the incorporation of organic matter is favouring the edaphic conditions in the different systems, especially S3 and S4, which received the largest additions of organic matter, approaching the soils of the capoeira and eventually tending to an equilibrium (OLIVEIRA, 2009; BERNARDES, 2011; PEREIRA Jr., 2011). Table 5 presents the principal components that show this relationship; a multivariate sketch is given below.
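As an illustration of these multivariate steps, here is a minimal Python sketch of a hierarchical clustering of soil attributes and a principal component analysis; the data matrix is a hypothetical stand-in, not the study's measurements.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
attributes = ["pH", "Ds", "silt/clay", "Corg", "silt", "clay"]
X = rng.normal(size=(12, 6))            # hypothetical: 12 plots x 6 attributes
Xs = StandardScaler().fit_transform(X)  # standardize before multivariate analysis

# HCA: cluster the attributes (columns) by correlation distance, as in Figure 3
Z = linkage(Xs.T, method="average", metric="correlation")
dendrogram(Z, labels=attributes)
plt.show()

# PCA: which attributes load on the first two principal components (cf. Table 5)
pca = PCA(n_components=2).fit(Xs)
print(pca.explained_variance_ratio_)
for name, loadings in zip(attributes, pca.components_.T):
    print(name, np.round(loadings, 2))
```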
It is possible to verify that base saturation (V), EC, aluminium saturation (m), Na and Al were the variables that showed the greatest differences between the systems. In systems S2 and S3, the values of these attributes are, in general, more similar to those found in the capoeira.

Edaphic fauna of the forestry and agroforestry systems

The faunistic analysis performed on the Tramontina Belém S/A farm recorded 2,568 specimens distributed in eighteen (18) taxa. The Diplopoda taxon is present in all treatments, but its highest concentration is in the monocultivation systems (S1 and S4). In a study of litter invertebrate communities in agroforestry systems, this order was the second most important (Barros et al., 2006). This importance is due to the mobility these organisms present in the soil, both at the surface and underground, which influences the physical nature of the soil by changing porosity, moisture and the transport of substances (Correia & Aquino, 2005). The order Acari groups organisms of vertical habitat at three levels, euedaphic, hemiedaphic and epiedaphic, the epiedaphic being more tolerant to desiccation (Lavelle & Spain, 2001); although the frequency of individuals is low, it is higher than that obtained in studies in savannas of Pará (Franklin et al., 2007). Ants have been widely used as bioindicators of various types of impact, such as recovery after mining activities, industrial pollution, agricultural practices and other land uses (Smith et al., 2009). In addition, the class Insecta, to which ants belong, is often grouped according to trophic groups and the availability of nutrients in the ecosystem (Leivas & Fischer, 2008). Ants are important in below-ground processes, altering the physical and chemical properties of the environment and affecting plants, microorganisms and other soil organisms (Folgarait, 1998). These may be the possible explanations for the faunistic results of this taxon (Hymenoptera, family Formicidae), which had the highest absolute frequency (Fi) in cropping system S1 (curauá in monocultivation) and the lowest absolute frequency in cropping system S4 (paricá in monocultivation), as well as in the other culture systems studied (Table 7). Adult and immature Coleoptera, Collembola, Diplopoda, Diptera and adult Homoptera had higher absolute frequencies in cropping system S3 (paricá + mogno + freijó + curauá) (Table 7). This may indicate nutritional or habitat variety, or an increase in the prey-predator relation; the same was not found in the monocultivation systems for these taxa. Soil macroarthropods play an important role in tropical terrestrial ecosystems, exerting a direct influence on soil formation and stability and an indirect influence on the decomposition process through strong participation in the fragmentation of necromass (ARAUJO, BANDEIRA & VASCONCELOS, 2010). Work already carried out in terra firme forest ecosystems in the state of Pará also found the presence of Hymenoptera, Coleoptera, Collembola, Homoptera, Acari and Diplopoda, grouped or not (Macambira, 2005; Jardim & Macambira, 2007; Ruivo et al., 2007), which corroborates the edaphic fauna found in Aurora do Pará. The greatest diversity of species occurred in S3 (Table 7); this can be attributed, among other factors, to the variety of nutrients, because S3 is an agroforestry system containing paricá, mogno, freijó and curauá.
In addition, there may be no natural predators of these species, or their ephemeral life cycle may lead to reproduction in greater numbers, but these factors were not analysed in this study. The lowest Shannon-Wiener diversity indices, and therefore the greater species diversity, occurred in cropping systems S1 (curauá in monocultivation) and S2 (agroforestry system paricá + curauá). This may be due to the movement of other invertebrate species through the ecotones that exist there. It is possible that this displacement occurred in search of food or a place to reproduce, or even to escape from possible predators in areas near these cultivation systems. The hypothesis should not be ruled out that curauá produces some substance that is palatable to a diversity of invertebrate species and is a nutritional option for them, because this plant is present in both cropping systems that had the lowest Shannon-Wiener diversity indices. The highest Shannon-Wiener diversity indices occurred in cropping systems S3 (paricá + mogno + freijó + curauá) and S4 (paricá in monocultivation), indicating lower species diversity, even though S3 contains different plant species, which would provide different nutritional offerings and habitats and would be expected to promote greater species diversity. A study on plant diversity and productivity and their effects on arthropod abundance (Perna et al., 2005) showed that plant productivity and structure, local abiotic conditions and physical disturbances of the habitats are factors that interact with plant diversity and also influence arthropod abundance. This can be an explanation for the low diversity found in cropping system S3 in our study. In S4, paricá is grown in monocultivation. This results in lower nutritional and habitat diversity, which in turn attracts a lower diversity of macrofauna, especially ants. We suggest the hypothesis that this plant produces some substance that is unpalatable or that acts on the reproductive cycle of the majority of the identified taxa. It is also possible that there is a greater number of predators than in the other cultivation systems studied, or that edaphic variations occurred that did not allow greater species diversity and population density, because this cropping system presented the highest number of absences (seven taxa) among the eighteen taxa identified (Table 6). Regarding population density, the highest concentration of individuals/m2 occurs in S1, where curauá is grown in monocultivation. As this culture system has no understorey to provide plant cover and shading that would mitigate the temperature at the soil surface, it is possible that the solar radiation incident directly on the ground promotes an increase in photosynthetic rate, increasing the supply of nutrients, and also raises the soil temperature, which is one of the factors contributing to the reproduction of these ants (Harada & Bandeira, 1994; Oliveira et al., 2009). However, we do not dismiss, as possible explanations for the results obtained in the two cropping systems, the hypothesis of correlated interferences, for example environmental variables, ephemeral life cycles, the presence or absence of predators, temperature, precipitation and luminosity, among other variables not analysed in this study.
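For reference, the Shannon-Wiener index used in these comparisons can be computed as in the short Python sketch below; the per-taxon counts are hypothetical.

```python
import numpy as np

def shannon_wiener(counts):
    # H' = -sum(p_i * ln p_i) over the proportions p_i of each taxon
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

# hypothetical per-taxon counts for two communities
print(shannon_wiener([120, 80, 40, 10, 5]))  # more even community -> higher H'
print(shannon_wiener([240, 10, 3, 1, 1]))    # dominated by one taxon -> lower H'
```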
Edaphic fauna and attributes of soil

The relation between the edaphic fauna and the soil attributes (pH, Corg, Ds, silt/clay ratio) is shown in Figure 6. The results showed that: (I) the immature Coleoptera taxon has high similarity to the Collembola taxon, constituting group 1; (II) the Diptera taxon is similar to the Orthoptera taxon, constituting the second group, and group 2 is similar to group 1; (III) the Acari taxon presents similarities with groups 1 and 2, constituting the third group; (IV) the adult Coleoptera taxon is similar to the Diplopoda taxon, constituting group 4; (V) groups 1, 2 and 3 are similar to group 4; (VI) the Homoptera taxon is similar to group 4, thus constituting group 5; and (VII) groups 1, 2, 3, 4 and 5 correlate with the physical attributes (Ds, silt/clay ratio) but not with the chemical attributes (pH, Corg). The Hymenoptera taxon was not correlated with the physical attributes (Ds and silt/clay ratio) or with the chemical attributes (pH and Corg), nor did it show similarity to the other taxa analysed. It is possible that other edaphic variables correlate with this taxon, but these were not objects of the present study. The order Acari had the highest absolute frequency in S2 (Table 3), perhaps because of a better adaptability to the agroforestry system and to the physical attributes shown in Figure 5; the same may be occurring with Homoptera, Collembola and immature Coleoptera. Even in this agroforestry system, however, the absolute frequencies of Coleoptera and Hymenoptera and the population density decrease, according to the data in Table 4. This may be related to the increase in soil density and the decrease in the silt/clay ratio shown in Table 2. With the modifications imposed by soil use, particularly by agriculture, the fauna and the microorganisms are affected, in different degrees of intensity, by the impacts caused by agricultural practices, which may alter the composition and diversity of soil organisms. The addition of new organic matter through the incorporation of residues, the maintenance of forest cover (using the agroforestry system, with or without burning the area) or even a diversified system (as occurs in Aurora do Pará) shows the importance of the maintenance, incorporation and slow decomposition of organic matter in the soil. So far, the studies show that the types of management adopted did not negatively influence the soil characteristics, and that the addition of diversified organic matter to the soil, together with the retention, incorporation and slow decomposition of these residues, led to the creation of an edaphic environment favourable to maintaining soil quality.
The set of soil attributes studied here, especially those related to microbial biomass and chemistry, was adequate to indicate the quality of the substrate. However, the continuation of this kind of work in the long term is necessary in order to identify differences in the biological characteristics of the soil between the different management systems, especially taking into account the local climatic variation. It is therefore necessary to intensify studies of the seasonal variation of soil attributes and of the variables listed as indicators of soil quality, and to intensify studies of particular management practices, such as tillage and the agroforestry system (SAF), as potential contributors to carbon sequestration.

Conclusions

The results show that the types of management adopted did not negatively influence the soil characteristics, and that the addition of diversified organic matter to the soil, with the retention and slow decomposition of these residues, led to the creation of an edaphic environment that maintains soil quality. Paricá (Schizolobium amazonicum Huber (Ducke)) is a viable native species for the recovery of disturbed areas, with prominence in the wood market nationally and internationally. Its rapid growth and adaptation to areas with low nutrient levels make it an excellent choice for agroforestry; it is the second most used plant species in reforestation in Pará state, destined primarily for the rolled-veneer industry.
IMPLICATIONS OF COMPUTER-AIDED LEARNING IN ELT FOR SECOND LANGUAGE LEARNERS AND TEACHERS DURING COVID-19

Purpose of the study: The primary purpose of this research study was to gain detailed insights into numerous aspects of online ELT for second language learners and teachers in Faisalabad.

Methodology: Our study was based on an online survey. Respondents were students and teachers of the English departments of different universities in Faisalabad. Data were collected and analyzed to find answers to our research questions. Characteristics such as focus, comfort, understanding, communication, and expense pinpoint the differences between online and physical ELT methods.

Main Findings: We found that students and teachers most commonly faced internet connectivity and audio-visual issues. The overall opinion of students about teachers was encouraging; however, teachers claimed that students were not as serious in online classes. Online ELT has improved the technical capabilities of respondents and made them proficient in using smartphones, online storage, word processing, and computer troubleshooting. Faculty respondents showed interest in learning new tools despite the burden faced and psychological fears. Finally, social media and solution-oriented discussion forums can prove effective and efficient in addressing future pandemic outbreaks.

Applications of this study: This study provided valuable insights that can help design an effective online educational framework for efficient and result-oriented English language learning.

Novelty/Originality of this study: Our study explored faculty and students' psychological and technological readiness at postgraduate institutes to cope with future pandemics.

INTRODUCTION

The English language plays a vital role in today's modern world, as it is involved in communication in every field of life, including education and business. Due to the ongoing pandemic, business activities are shrinking, and people working in the private sector are losing their jobs. Moreover, young graduates also face difficulties in finding employment in developing countries like Pakistan. As a result, freelancing and digital marketing are booming these days. English is a global language used in the media, on the internet, and in international communication, so the importance of English Language Teaching (ELT) cannot be ignored. People worldwide use English as a 'common language' to share their ideas, express their thoughts, and communicate with others. Although English is taught as a second or foreign language in all academic institutions throughout Pakistan, students cannot get the most out of it in professional life. English is a universal communication language used to convey textual, visual, audio, and infographic information globally. To use English effectively for educational purposes, expressions, emotions, and fluency must be given chief priority in student education (Unsworth and Mills, 2020). All institutions are making maximum efforts to improve English learners' language competence so that they can grasp the many opportunities to progress personally and professionally. English language teachers and curriculum developers are also making consistent efforts to improve English language teaching in Pakistan. During the past two decades, an immense increase in student enrollments resulted in the upgrading of old colleges and the construction of new ones in China.
For teaching English as a Foreign Language (EFL), job opportunities for English teachers have increased to deal with the rapidly growing student strength (Zhiyong et al., 2020). The currently prevailing COVID-19 situation has had a massive impact on both teachers and learners. The working dynamics of our educational institutes, teachers, and students have changed entirely: virtual classrooms have replaced physical classrooms, and computer-aided tools have taken the place of traditional white- and blackboards. How long this situation will continue is not easy to predict (Jena, 2020). Since the 1990s, computers have been used in our country for different purposes, but smartphones have revolutionized the world and provided a gadget equipped with powerful apps. From entertainment to business and business to education, there are apps to cope with all sorts of circumstances. However, it is also a fact that not everybody is equally proficient in using computers and smartphone apps (Asad et al., 2020). Computer-Assisted Language Learning (CALL) provides opportunities for teachers to deliver resources to students. Our research explores various dimensions of modern English language teaching done via computing technology these days. By examining online education during this pandemic, we came up with suggestions that can improve the English language education process for second language learners. Our recommendations can assist learners and speakers in becoming successful members of the international English-speaking community. During 2000-2010, there was no significant research contribution on topics integrating technology and ELT. The highest number of publications on such issues was 12, in 2016, which dropped to 4 in 2017. In comparison with China, less attention was paid to smartphone technology in Saudi Arabia. During the last decade, most research articles used questionnaires for data collection (Nawaila et al., 2020). Section 2 explores the significance of the English language in our modern world. How COVID-19 and Information Technology (IT) have transformed the traditional educational system in Pakistan is explored in section 3. In section 4 of this paper, we provide the research questions of this study, while section 5 explores different studies conducted worldwide. Finally, our research methodology and findings are discussed in sections 6 and 7, respectively.

SIGNIFICANCE OF ENGLISH LANGUAGE

English is a universal language, understood and used worldwide. The internet has emerged as an immense source of information: the majority of websites, blogs, and even search engines use English as their primary language. Social media websites such as Facebook, Twitter, and Instagram also use English as their basic language; although these platforms provide multi-language capability and automated translators, most users prefer the English language version. E-commerce is quickly becoming popular with the advancement of the internet and business technology, and all the popular e-commerce platforms like Amazon, AliExpress, and Daraz also use English as their primary language. People, especially students in developing and developed countries, are highly focused on freelancing due to job saturation in their respective countries. Moreover, the number of unemployed educated persons is increasing alarmingly during this pandemic, so switching to freelancing is a promising way to earn a livelihood.
Most clients on freelancing platforms like Fiverr, Upwork, and PeoplePerHour are native English speakers. The primary mechanism for dealing with a client is written communication, although some platforms support video conversation. Whatever the scenario, a freelancer should communicate clearly and precisely with the client to achieve his or her goals. Digital marketing has boomed since 2013, and it uses sophisticated technology for marketing products and services. Digital marketing supports bidirectional communication between seller and buyer: both can use various communication channels, including social media, website forums, e-mail, and chat. Digital marketing provides an opportunity to reach a global audience with the help of search engines like Google (Sathya, 2017). Students' language understanding depends on the effectiveness of the ELT methods used by course instructors: 58% of teachers and 63% of students preferred the Grammar Translation Method (GTM) over the direct method and observed that this ELT method was more effective and beneficial (Shah and Saeed, 2015). Moreover, applications of the English language are not limited to marketing or business; there are hundreds of areas that direly need command of this language. The majority of job qualifying tests, interviews, and tasks are English-oriented, and even on the job, English is used for official correspondence. Curriculum developers also prefer English for textbooks, one of the core reasons being the ease and comfort of this language. English writing skills are required in many scenarios. To polish students' writing skills, traditional teaching methods need certain refinements that enable teachers to understand students' issues properly; teacher interaction, motivation, and knowledge are the key pillars for achieving this goal (Shegay et al., 2020).

ROLE OF INFORMATION TECHNOLOGY IN ONLINE EDUCATION

COVID-19 has changed the entire dynamics of Pakistan and the rest of the world. A huge majority of people have switched to online shopping to implement social distancing in its true sense. During the first wave in 2020, telemedicine became a trend, as medical professionals were either busy dealing with COVID patients or practicing isolation principles in true letter and spirit. The usage of online consultation apps like Oladoc and Marham also increased to a great extent. The MARHAM app is quite popular in Lahore and Karachi, as the number of registered medical practitioners is 495, which is promising. Apart from its social media handles, MARHAM has a dedicated mobile app for its users that serves online video consultations. On its Facebook page, 43.23% of the issues were raised by female users, which shows that this telemedicine app is quite popular among young females; the majority of female users belong to the 18-34 age group (Ittefaq and Iqbal, 2018). COVID-19 is an extremely lethal and quick-spreading virus. Patients with certain underlying conditions like asthma, heart disease, diabetes, and immunodeficiency are at greater risk of becoming seriously ill. Telemedicine can assist such patients and provide an opportunity to be examined by specialist doctors via two-way audio/video conference. In this way, telemedicine can fulfill patients' needs in a cost-effective and timely manner that strictly obeys social distancing principles (Portnoy et al., 2020). As with other disciplines, the education sector has been severely affected by this pandemic.
In view of its dangers, all public and private sector institutions have been closed for on-premise classes. Institutes have switched to the online education model, which poses many challenges for both students and teachers. Technological gadgets and apps have taken the place of physical classrooms. To assist academic activities, platforms like Google and Microsoft are constantly opening new avenues, and third-party apps like Zoom are also available in the digital world. Moreover, these organizations keep improving their services and functionality. For advancing technology, science, and the arts, English is considered critical as the first foreign language. While multimedia and CALL technology is an emerging trend in various countries, including Pakistan, China started emphasizing CALL in 1978 to enhance students' concentration by making topics look interesting (Asrifan et al., 2020). A case study conducted at Mizoram University explored the different modes adopted by teachers and students during the COVID-19 pandemic: 45% of faculty members preferred using Zoom, Google Meet, and Skype for online education; in addition, 100% of respondents designated WhatsApp or Telegram as the most promising tool for communication, and half of the teacher respondents opted for YouTube to upload recorded lectures (Mishra et al., 2020). Hence, the importance of online tools cannot be ignored in this era. We have categorized these tools into four categories; let us briefly discuss some popular tools used in Pakistan since the first wave of this pandemic.

Generic Communication Tools

Teachers and students use these tools to communicate on various academic matters. Teachers use them to make announcements and share notices, while students discuss their problems with faculty via these groups. Several popular platforms serve this purpose.

Classroom Management Tools

Effective management of the classroom is of chief importance. Teachers share lecture handouts, recommended books, PowerPoint slides, and other course material using these tools. In addition, these tools help faculty to manage attendance records, and quiz and assignment submissions can easily be made through them. Several popular tools are available.

Virtual Classroom Tools

Face-to-face communication is no doubt the best method to share ideas, but during this pandemic the chances of spreading the virus increase in the absence of social distancing; hence, video conferencing is the best way to achieve this objective. Google Meet, Zoom, and Microsoft Teams are commonly used tools for conducting virtual classes. All of these tools offer screen sharing, a chatbox, annotations, and file sharing as add-ons. Apart from online meeting tools, video-sharing platforms like YouTube and Dailymotion are also great sources for sharing academic information, including video tutorials, lectures, and animations. YouTube can be used as an effective technology platform for teaching art students: it can help teachers find and share exciting material with students, while students can use this huge video library for learning new skills. Integrating information technology with education can benefit both art teachers and students (DeWitt et al., 2013).

Assistive Tools

During this pandemic, the services offered by various tools other than those discussed above have also been worthwhile.
Word processing (MS Word) and presentation (MS PowerPoint) software are in the limelight; these have smoothed the lecture delivery process to a great extent. Apart from these, there are certain other tools as well. Various institutes have adopted recorded-lecture schemes, where faculty provide pre-recorded lectures to students. Camtasia and Filmora are the big names here: these are high-end tools for editing and publishing lecture videos. A large proportion of faculty also use third-party mobile apps for this purpose. Students can pause, play, and replay a pre-recorded lecture at their convenience; this enables them to grasp the knowledge without any pressure or time constraint. A short case study was conducted on a small sample of business administration students at Yonsei University, Korea: 53.8% of students preferred pre-recorded lectures for this reason, while only 7.7% favored live video-conference lectures on Zoom (Islam et al., 2020). Last but not least, online storage providers like Google Drive and Dropbox are also widely used. Organizing and sharing files and troubleshooting via these tools demand skill as well as experience.

RESEARCH OBJECTIVES

The main aim of this study was to get answers and insights about various issues faced by both students and faculty, specifically in ELT. Table 1 shows the research questions considered as our study parameters. Our research questions were abstract; hence, we sub-divided the main questions into various sub-questions to get objective answers and detailed insights. The questions are provided in Annexure-A.

1. What are the key differences between online and physical ELT methods for teachers and students?
2. What are the major technical difficulties faced by faculty as well as students in the online ELT method?
3. What was the assessment of the students about teachers during online ELT?
4. What was the assessment of the teachers about students' learning outcomes?
5. How has information technology benefitted the students as second language learners?
6. How has information technology benefitted teachers during a pandemic?
7. How much is our educational system prepared for using computer-aided tools?
8. How can information technology be utilized effectively for ELT during future pandemics?

LITERATURE REVIEW

Ulla et al. (2020) explained that despite research on Mobile Assisted Language Learning (MALL) and ELT, the area involving the integration of Information and Communication Technology (ICT) in ELT remained somewhat unexplored. Their study focused on understanding the different teaching practices employed by teachers in teaching EFL with the help of internet-based applications in Thailand. Popular tools used by EFL teachers included social media groups, YouTube, and Google Forms. Quinn and Clark (2015) conducted a study exploring the main factors that support education via pre-recorded lectures: 81% of participating respondents liked this mode of education for many reasons. Most students watched pre-recorded video lectures alone and were able to develop a pattern for watching them. Convenience was one of the leading factors, because this mode freed students from the bounds of time and location. The ability to replay lessons helped students gain a deep understanding of the subject matter, and pre-recorded lectures gave students ample time to take detailed notes as well as manage lecture breaks.
There was no significant difference between the two groups in terms of correct answers; however, the number of correct answers to CQ by the LL group was significantly higher than in the PRL group. Bakar (2007) elaborated that the first smart school was established in Malaysia in 1991, which enabled the integration of Information and Communication Technology (ICT) in ELT for teachers and students. The use of ICT increased students' interest in the subject, and students became self-motivated: they were more engaged in classrooms and started to use the English language practically while doing various activities. Qureshi et al. (2012) conducted a survey to identify various technical barriers preventing the realization of online education in Pakistan: 62% of students believed that the latest hardware and software were available at the university; 53% responded that internet services on campus were fast; and according to 56% of respondents, high-speed internet was also available outside the campus. Downloading material from the internet was recognized as a quick way to access information by 54% of students. Only 350 respondents, from Iqra University Islamabad, Pakistan, participated in that study. Raza et al. (2017) conducted a study to determine the influential factors critical for implementing m-learning in Pakistan's higher education institutes. Both faculty and students were enthusiastic about m-learning, but students' readiness was more significant; however, the teachers' attitude towards m-learning adoption was equally important. Rahim et al. (2011) explored the integration of Information and Communication Technology (ICT) in the Gilgit area of Pakistan and conducted interviews in twenty-six schools. Staff in the majority of institutes used the computer only for office work, and the percentage of teachers using ICT for education was found to be on the low side. In some schools, staff were not aware of computers at all, although this was extremely rare, and internet access was not provided to faculty. Hence, the unavailability of technology was identified as the main hindrance to integrating ICT and education in mountain areas. Milal et al. (2020) suggested that training is essential for the professional skills development of English language teachers. Effective training programs enabled teachers to cope with common problems and learn new methods of ELT; the training program also improved communication and the organization of teaching material, classroom management became more effective, and students' behavior improved. Schurz and Coumel (2020) explained the key differences in ELT methods among European countries, mainly Sweden, France, and Austria: ELT in Austria was more focused on grammar compared with Sweden and France, while Swedish ELT methods included more fluency practice. Sultan and Hameed (2020) surveyed the students of the Pakistan Military Academy (PMA) to gauge motivation towards a curriculum integrated with local and global cultures. As second language learners, more than 50 percent of respondents were interested in learning about UK and US cultures, while more than 60 percent were attracted to learning about the cultures of other English-speaking countries. Very few faculty members asked students to watch cultural movies or videos. Soomro et al. (2020) noted that in our country students lack English speaking skills; faculty teaching in the mother tongue could be one of the reasons.
More than 60% of teachers believed that professional training institutes could solve this issue. Additionally, only 33 percent of teachers employed audio-visual techniques for ELT, which was not surprising because most schools in Pakistan do not have multimedia classroom facilities. Irfan et al. (2020) elaborated that the English language is taught at school level to build and polish students' listening, speaking, reading, and writing skills. In Pakistan, finding well-trained teachers is a dilemma, especially in villages and other remote areas, and the majority of university graduates are not proficient in these four basic skills; the use of audio-visual aids was the least-employed method compared with other methods. Kosten (2020) described how CALL poses serious problems, the so-called 'English problem', in countries with mono-lingual educational cultures. During the past decade, most research was conducted on bilingual and multilingual CALL, yet many CALL tools offer only mono-lingual support, specifically English; offering multilingual support could eliminate the English problem in mono-lingual countries. Enayati and Gilakjani (2020) explained that vocabulary plays a critical role in the effective teaching of EFL. Unfortunately, teachers spend little concentration and time on vocabulary learning. The use of CALL can dramatically improve students' ability to pronounce words properly: in a study conducted on EFL learners in Iran, researchers found that the use of Tell Me More (TMM) could develop and enhance learners' speaking abilities.

RESEARCH METHODOLOGY

To answer our research questions, we conducted this exploratory study in various public and private sector universities in Punjab province. Data were collected through questionnaires and analyzed statistically. Our detailed research methodology is given below.

Research and Instrument Design

We adopted a quantitative research methodology, as it is an extremely effective fact-finding technique. We collected data using a questionnaire instrument. To make this study more convenient for respondents, separate questionnaires were designed and shared with participants. Each questionnaire was divided into various sections, with detailed instructions written to assist respondents; each section collected different information, including respondent demographics. The questionnaires were designed using Google Forms, and a 5-point Likert scale was used according to the suitability of the questions.

Participants of this Study

We considered faculty and students of the departments of English of various public, private, and semi-government sector universities in Faisalabad. Teachers and students of schools and colleges were not taken into account. Forty teachers and 174 students participated in this study.

Data Collection

Questionnaires were shared through various media, including e-mail, social media, and WhatsApp groups. Teachers were also requested to forward them to students for error-free data collection. As discussed earlier, the questionnaire was self-explanatory, and for the sake of convenience an instruction manual was also provided to the respondents. Responses were received online in the form of a Google sheet.

Data Analysis

After collecting the responses, we removed anomalies. Data reliability was assessed through the Cronbach alpha method, which is popular among researchers for ensuring that data is internally consistent (Taber, 2018).
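As a rough illustration of this reliability check, here is a minimal Python sketch of Cronbach's alpha computed over Likert-scale item responses; the scores below are hypothetical, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# hypothetical 5-point Likert responses: 6 respondents x 4 items
scores = [[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4],
          [2, 3, 2, 3], [4, 4, 5, 4], [3, 4, 3, 3]]
print(round(cronbach_alpha(scores), 3))  # ~0.92 for this toy data; the study reports ~0.755
```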
Reliability percentages for the teachers' and students' datasets were 75.53% and 75.57%, respectively, which ensured the integrity and reliability of the collected responses. As all the questions were close-ended, SPSS was chosen to perform the statistical analysis. Results, along with graphical representations and interpretations, are given in the following section.

RESULTS & DISCUSSION ON FINDINGS

Before moving to the detailed insights of this study, let us give an overview of respondent demographics: Table 2 shows the precise demographics of our respondents. Table 3 lists the questions that helped us understand the differences between online and physical ELT classes in the view of both respondent categories.

Key Differences between Online and Physical ELT Methods (Table 3, teacher response counts):
- My understanding of student issues regarding lecture matter was better in online classes as compared to physical classes. (10 / 21)
- I was more comfortable covering the syllabus in physical classes. (31 / 3)
- I was more active in online classes. (15 / 17)
- Communication with students was easy. (18 / 17)
- The e-classroom environment was favorable for teaching. (17 / 16)
- Online classes caused extra expenditure for my regular work. (31 / 7)

All the sub-questions were used objectively to highlight the key differences felt by respondents during the journey of online education in COVID-19. Focus, understanding, comfort, activity, communication, favorability of the environment, and teaching expenses were the major factors we considered to establish the critical differences between the online and physical ELT modes for second language learners and teachers. All these factors, along with the opinions of respondents, are shown in Figure 1 (Characteristics of CALL in ELT). It is quite evident from Figure 1 that understanding was the biggest issue reported by teachers: only 25% of teachers considered that understanding was not a problem in online ELT. Difficulties with understanding lecture matter and general instructions were also obvious from the collected data, as only 40% of students felt that their understanding was better in online mode. Comfort was the characteristic exhibiting consensus between teachers and students, due to the ease of working from home. Only 39.08% of students and 42.50% of teachers felt that their focus was better in computer-aided ELT; this feature again shows significant harmony between the two respondent categories. According to 81.03% of students, communication with teachers was much easier in online-mode ELT, because multiple touchpoints like WhatsApp and e-mail were available. By contrast, the percentage of teachers who found communication easy was only 45%, considerably lower than that of students. When it comes to activity and the classroom environment, we found that teachers were more in favor of the physical class environment, even though they reported more comfort in working from home: 42.5% and 37.5% of teachers, respectively, claimed that the virtual classroom environment was favorable for teaching and that they were more active in e-learning. Moreover, more than 77% of students testified that they were more active in online classes, which we suspect is biased by psychological fears of the physical classroom environment. More than 56% of students also reported extra expenditure, and these figures significantly support the fact that online education has increased overall teaching and learning expenses.
Table 4 explores the various technical problems faced by teachers and students during the online education mode; these pinpointed key issues need attention. 'Learning new tools is boring' was found to be the least critical issue (only 5% of teachers and 18.39% of students), as a substantial majority of respondents, especially teachers, liked learning new tools. From the graph, we found that the proportion of teachers who faced problems was considerably larger than that of students. It was exciting to find that teachers found learning and working with computer-aided tools interesting, yet 55% of teachers responded that their pace of learning was slow, compared with 32.76% of students, which was natural and consistent with teaching experience.

Major Technical Issues Faced During Online Education

Internet connectivity and audio-visual problems were the most common issues, faced by 70% and 72.5% of respondent teachers, respectively. On the other side, 52.87% of student respondents reported internet connectivity as the most common issue faced in the virtual classroom environment. 62.5% of teachers found designing online exams technically challenging. Table 5 depicts the general assessment by students of their teachers during the online ELT process, based on various factors.

Teachers' assessment of students' learning outcomes

Students' learning outcomes were one of the major factors we considered in this study; they help us understand teachers' assessment of their students critically, compared with the physical mode of ELT. Table 6 shows the evaluation made by teachers of their students during the online ELT method (response counts):
- Students' overall knowledge has improved during online education (regardless of exam results). (26)
- The overall behavior of students was good during online classes. (9 / 24)
- Students were punctual in taking online classes. (6 / 27)
- Students were able to understand and follow your instructions easily. (14 / 18)

Benefits of IT in Online Education during the Pandemic

Information technology has benefitted the modern world in every aspect of life. We explored the different benefits of IT in the education sector as perceived by the respondents of this study: Table 7 shows the various benefits of incorporating IT during the pandemic as viewed by students, while Table 8 contains the benefits reported by faculty members. Online education during a pandemic would not be possible without proper Information Technology (IT) support and the internet; IT has changed the entire dynamics of business, research, and education. Figure 3 shows a comparison of the respondents' perceptions regarding the benefits of IT tools in ELT. The proportions of English language teachers and second language learners who reported that they had learned MS Word and MS PowerPoint in the virtual teaching environment were 92.5%, 85.63%, 82.18%, and 85%, which was extremely remarkable, as the respondents belonged to an arts discipline. Online education also improved the ability of respondents to diagnose and troubleshoot technical issues: more than 70% of respondents from both categories reported that they had become technically sound and confident enough to provide quality education online in future pandemics. Although smartphones are common, the respondents believed their proficiency in using smartphones, apps, and online storage had been enhanced. 78.74% of students and 80% of teachers were unfamiliar with online video conferencing tools before moving to online education.
How much is our educational system prepared to adopt online teaching?

The COVID-19 outbreak has challenged the educational process. Despite ongoing vaccination, it was direly needed to assess how well our educational system is psychologically prepared for online education during future outbreaks; Table 9 shows the different factors that we used for this assessment. Our research's primary aim was to judge the implications of computer-aided tools for online teaching and learning in the view of the different online classroom stakeholders; hence "Are we ready for future pandemics?" is a question that cannot be ignored. We assessed this by considering some practical and psychological aspects of human behavior. The proportion of respondents who preferred online ELT was below 50%, which is considerably low. Although only 22.5% of teachers claimed to be free from extra burden while working in a virtual environment, 75% were interested in using computer-aided tools, and 77.5% showed a willingness to learn new tools in case of future pandemic outbreaks. Psychologically, teachers (82.5%) were highly determined to work with these tools without fear, compared with students (56.32%). All these factors, depicted in Figure 4, indicate that faculty were more mentally prepared than students, which is a healthy sign.

Suggestions to make online ELT more effective during a pandemic in Pakistan

In this final section, our main purpose was to gather suggestions from faculty as well as students for tailoring the current educational environment to be more effective. These suggestions were subjective, so we included them in our questionnaires as close-ended questions for better understanding; they are highlighted in Table 10 (response counts):
- IT training programs organized at mass level for arts teachers/students will be beneficial. (115 / 11 / 25 / 11)
- Developing dedicated portals for discussing technical problems and solutions will be extremely beneficial. (118 / 17 / 26 / 9)
- Social media communities can facilitate computer-aided ELT and learning to a great extent. (114 / 19 / 27 / 9)

To cope with future pandemics effectively, certain measures are required beyond being mentally strong and working smartly with computer-aided tools for ELT; we provided respondents with some objective measures, shown in Figure 5. Both respondent categories (67.5% of teachers, 65.52% of students) strongly emphasized the utilization of social media for effective online ELT. Apart from this, 66.09% of students voted for mass IT training programs, and 67.82% favored the development of dedicated portals for problem sharing and discussions on computer-aided tools. According to 65% of faculty, uniformity of IT tools across universities could make a huge difference to the effectiveness of online ELT. As discussed earlier, respondents felt that online education was expensive: more than 60% of respondents suggested that special internet packages should be announced to lower this burden and smooth the education process. Existing literature in this area considered different dimensions individually. Students' attitudes at the postgraduate level were studied generally, highlighting technical issues faced by students in remote areas of Pakistan (Shahzad et al., 2020). We found that there was a need to study the problems faced by teachers alongside students' issues; this helped us consider the critical success factors for motivating faculty staff by providing the best solutions to their problems.
Qureshi et al. (2012) found different technical barriers to e-learning in Islamabad, a city well equipped with technical knowledge, skills, workforce, and equipment; hence, we focused our study on Faisalabad, a big city that nevertheless contains many rural areas. Moreover, our study also concentrated on the readiness of our country's educational machinery to cope with future pandemic outbreaks. Furthermore, research on online ELT during pandemics has focused on the targeted benefits of using ICT, whereas we tried to study the generic positive outcomes and the improvement in general ICT awareness reported by stakeholders of the education sector in Higher Education Institutes (HEIs). English writing skills were identified as the most critical success factor for English language learners (Shegay et al., 2020); still, it was direly needed to find effective ELT methods for pandemic outbreaks and to equip students with the best possible skills to fulfill market needs.

CONCLUSION

Due to COVID-19, switching to online mode was an entirely new thing in Pakistan. We therefore conducted this research to find answers to various questions, to gain deep knowledge of how COVID-19 has transformed the educational sector in our country, and to identify the measures the higher education sector should take to develop an improved and mature online ELT in case of future pandemic outbreaks. Our respondents found online education more comfortable, enjoyable, and positive compared with physical classes, which suggests that audio-visual multimedia and animations helped enhance students' interest. Respondents showed a keen interest in working with IT tools and a willingness to learn new tools, which was indeed a very good sign. Another positive was the seriousness of the ELT faculty despite the technical challenges faced due to a lack of technical expertise. However, students were reported to be hesitant to take an active part in academic activities; they also showed behavioral issues and a lack of improvement in overall knowledge. Online teaching also enabled respondents to learn software tools that support online classes. Respondents' ability to diagnose and resolve technical issues improved greatly, and respondents were sufficiently assured of being prepared for future pandemic situations. Nevertheless, to deal with future outbreaks, certain measures could be taken to improve current online ELT and education.

LIMITATION OF THIS STUDY AND FUTURE DIRECTIONS

The respondents of this research study belonged to the English departments of various public, private, and semi-government institutes in Faisalabad. Although the data were reliable, there is a need for a full-fledged exploratory study comprising populations from different areas, including northern urban and rural areas where the IT infrastructure and facilities differ from those of the technologically equipped and advanced cities of Pakistan. In the future, we aim to extend this research on ELT to science students in universities and affiliated colleges.

AUTHORS' CONTRIBUTION

Faiza Saeed chose ELT as her main area of research and conducted a study on emerging and existing ELT methods with college students in Faisalabad. She explored the internet and reviewed the literature to identify this research gap, as it was the need of the hour. She also identified the research questions and designed the questionnaire. Aniqa Rashid supervised the whole research task. Wajiha Saleem performed the data analysis and discussed the findings. Muhammad Sufyan Afzal worked on data collection from respondents.
2021-09-01T15:03:38.495Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "d3ab8c96da925a779975892d47173725740082e8", "oa_license": "CCBYSA", "oa_url": "https://mgesjournals.com/hssr/article/download/hssr.2021.93154/3604", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2439c54eff312abb8fbc82476a31ad3bce505b43", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
250282015
pes2o/s2orc
v3-fos-license
Knowledge, attitude and practice of community pharmacy personnel in tuberculosis patient detection: a multicentre cross-sectional study in a high-burden tuberculosis setting

Introduction
Control of tuberculosis (TB) is hampered by suboptimal case detection and subsequent delays in treatment, which is worsened by the COVID-19 pandemic. The community pharmacy is reported as the place for first-aid medication among patients with TB. We, therefore, analysed knowledge, attitude and practice (KAP) on TB patient detection (TBPD) of community pharmacy personnel, aiming to find innovative strategies to engage community pharmacies in TBPD.

Methods
A multicentre cross-sectional study was performed in four areas of Indonesia's eastern, central and western parts. Pharmacists and pharmacy technicians who worked in community pharmacies were assessed for their characteristics and KAP related to TBPD. Descriptive analysis was used to assess participant characteristics and their KAP, while multivariable regression analyses were used to analyse factors associated with the KAP on TBPD.

Results
A total of 1129 participants from 979 pharmacies, comprising pharmacists (56.6%) and pharmacy technicians (43.4%), were included. Most participants knew about TB. However, knowledge related to TB symptoms, populations at risk and medication for TB was still suboptimal. Most participants showed a positive attitude towards TBPD. They believed in their professional role (75.1%), capacity in TB screening (65.4%) and responsibility for TBPD (67.4%). Nevertheless, a lack of TBPD practice was identified in most participants. Several factors were significantly associated with performing the TBPD practice (p<0.05), such as TB training experience (p<0.001), provision of a drug consultation service (p<0.001), male gender (p<0.05), a positive attitude towards TBPD (p<0.001), short working hours (p<0.001) and central city location of the pharmacy (p<0.05).

Conclusions
Most participants had good knowledge and attitude, which did not translate into actual TBPD practice. We identified that TB educational programmes are essential in improving the KAP. A comprehensive assessment is needed to develop effective strategies to engage the community pharmacy in TBPD activities.

1. The abstract conclusion does not match the study aim, so it is suggested to rephrase it.
2. "…and referring the suspected TB patient to the health facility for further examination in the last six months…" May I know why the last 6 months, and how the respondents remembered that they referred patients to the clinics in the last six months? Further, the inclusion criteria indicate that study participants should have a minimum of 6 months' experience in the community pharmacy. Any justification? If the above statement is the reason, then it needs to be rephrased and made clearer.
3. Why did the authors use a 3-point Likert scale for knowledge but 5 points for attitude and practice? I would appreciate it if this were standardised or otherwise rationalised and strengthened with a reference.
4. Was ethical approval granted for this work? This needs to be highlighted.
5. The results show that 49.3% never had TB training, meaning that 50.7% of participants had TB training. Is there any proper training system provided to community pharmacists? If so, elaboration is required in the article. (This question was generated from Table 2, section 3, TB signs and symptoms: correct answers 29.5%. If 50.7% had training, why did only this limited number respond correctly?)
6. TB attitude; TB practice…
Subheadings should be self-explanatory; rephrasing is suggested.
7. The discussion needs more arguments/reasons instead of literature comparison.
8. The general conclusion should address the hypothesis, which is lacking. Moreover, the second sentence is not true with respect to the TB training-related information. I strongly suggest rephrasing the conclusion.
9. It is advised to re-confirm the references and remove repetitions such as refs. 21 and 29.
10. Reference 13 needs to be rechecked and should comply with the journal format.
11. Reference 14 needs to be in English and accessible for the readers.

REVIEWER: Daftary, Amrita (York University, School of Global Health and Dahdaleh Institute of Global Health Research)
REVIEW RETURNED: 13-Jan-2022

GENERAL COMMENTS: A well-written, interesting paper explicating pharmacy-based knowledge, attitudes and practices related to TB in Indonesia.
Main comments:
1. Describe the context of the four settings: are these urban or rural areas, or mixed? Perhaps clarify the meaning of a peripheral vs. central area.
2. How were the pharmacies/participants selected? The sample size calculations do not explain participant/pharmacy selection. Was it convenience sampling, or locality focused?
3. Gender seems to be a factor, with male gender associated with better practice. Still, females comprised 80% of the sample. Were males more likely to be pharmacists (vs. lower-cadre professionals)? Does this explain the finding associating males with better practices?

The high proportion of females in Table 1 was surprising. A little discussion would help non-experts understand this.
Supplementary Tables 4 and 5: Please state on the tables whether these are univariate or multivariable fits.
Supplementary Tables 4 and 5: What is the amount of missing data for these analyses, and how was it handled in the analysis?
The English is overall good but could use another go-through for grammar.

Reviewer 1: Dr. Amer Hayat Khan, Universiti Sains Malaysia
Interesting work, but a few queries will improve it further.

• Abstract conclusion is not matching the study aim, so it is suggested to rephrase.
Author response: Thank you for your suggestion. We have revised the conclusion based on the objective and results. Please kindly find it in the revised manuscript. Line 60: "Most participants had good knowledge and attitude, which did not translate into actual TBPD practice. We identified that TB educational programs are essential in improving KAP towards TBPD. A systematic and comprehensive assessment is needed to develop effective strategies to engage the community pharmacy in TBPD activities."

• …and referring the suspected TB patient to the health facility for further examination in the last six months… May I know why the last 6 months, and how they remembered that they referred the patients to the clinics in the last six months?
The six months were based upon the results of our previous studies, in which there were pharmacy personnel who provided the TBPD practice once every six months. [3,4] Hence, we tried to capture the TBPD in the prior six months. Since it is an unusual activity, we believe that the participants can still remember when they conducted TBPD activities. To follow up on your valuable comment, we have added new information to the revised manuscript: Line 177: "The TBPD practice was evaluated over the past six months. The six months were based upon the results of our previous studies [11,23] in which there were pharmacy personnel who provided the TBPD practice once every six months.
To have a clear definition and comprehensive duration assessment, we defined "very often" as the practice performed at least every week; "often" as the practice performed at least once a month; "sometimes" as the practice performed at least once in 2-4 months; "rarely" as the practice performed once in 5-6 months; and "never" as never doing the practice in the last six months."

• Further, the inclusion criteria indicate that study participants should have a minimum of 6 months' experience in the community pharmacy… Any justification? If the above statement is the reason, then it needs to be rephrased and made clearer.
Since we captured the referral activities over a 6-month duration, we included participants who had a minimum of 6 months' experience in the community pharmacy. We have added text to clarify this information. Please also consider our comment above and kindly find the new information in the revised manuscript: Line 193: "The six-month experience was defined considering that the study captured TBPD activities in the last six months."

• Why did the authors use a 3-point Likert scale for knowledge but 5 points for attitude and practice? I would appreciate it if this were standardised or otherwise rationalised and strengthened with a reference.
We used the KAP guideline for TB published by the WHO [5] and Ruel et al. [6] in developing our item responses. The item responses were selected based on the domain and construct of each item. We attempted to assess whether or not the participant understood the knowledge items. Hence, a dichotomous (true/false) item is the ideal option for assessing the knowledge items. To cover participants who really do not know the answer to an item, we provided "don't know" as an option in the knowledge items. This reduces the number of participants who guess the answer; guessing would reduce item reliability. For the attitude items, we ask for the participant's opinion, which a dichotomous item cannot capture; the attitude items should provide a wider range of responses than dichotomous responses to explore the participant's opinion. Hence, we used the 5-point Likert options suggested by Ruel et al. for the attitude items. In the practice items, rating scales were used to assess the frequency of practice. Therefore, we used "very often", "often", "sometimes", "rarely", and "never" for the practice items. To clarify this issue, please kindly find our revision in the revised manuscript: Line 153: "The questionnaire was developed based on the guideline for knowledge, attitude, and practice surveys in TB published by the WHO [5], the practice of survey research [6], the Indonesian national TB guideline [7], experts' consensus on the psychological factors for implementing evidence-based practices [8], and previous relevant studies [9][10][11]." Line 172: "The participant characteristics domain items were assessed using closed and short open questions, while the items of the knowledge domain were measured on a nominal scale ('true', 'false', and 'do not know'). Five-point Likert scales were used for the attitude items, while rating scales were used for the practice domain items. We used "strongly agree", "agree", "doubt", "disagree", and "strongly disagree" for the attitude items, and "very often", "often", "sometimes", "rarely", and "never" for the frequency-of-practice items."

• Was ethical approval granted for this work? This needs to be highlighted.
Ethical approval was granted for this study. It was highlighted in the ethics declaration on the last page of the original manuscript.
Please kindly find line 600, "Ethics declarations": "This study was approved by the ethics committee of Universitas Sumatera Utara, No. 599/KEP/USU/2021. All methods were carried out in accordance with the principles of the Declaration of Helsinki."

• The results show that 49.3% never had TB training, meaning that 50.7% of participants had TB training. Is there any proper training system provided to community pharmacists? If so, elaboration is required in the article. (This question was generated from Table 2, section 3, TB signs and symptoms: correct answers 29.5%. If 50.7% had training, why did only this limited number respond correctly?)
Thank you for your critical comment. There is no proper TB training system for pharmacy in our study setting. This was highlighted in our previous study [3]. Hence, developing an integrated training system has become a study implication for policy and practice in this study. Please kindly find the relevant information in the revised manuscript: Line 370: "Unintegrated pharmaceutical services in TB programs and a lack of public-private collaboration with community pharmacies were reported in Indonesia. [3,4] This potentially leads to the limited exposure of community pharmacy personnel to the educational program from the national TB programme." Line 445: "Second, a TB training system for improving the KAP should be developed for pharmacy personnel. The training is not only for increasing TB awareness about case detection activities but also for minimising irrational dispensing of TB drugs [12] and raising awareness of the other potential roles of community pharmacy in TB (e.g., treatment supporter, TB medication counsellor)."

• TB attitude; TB practice… Subheadings should be self-explanatory; rephrasing is suggested.
Thank you for your suggestion. We have revised the subheadings. Line 303: "The Attitude of Pharmacy Personnel toward Tuberculosis Patient Detection". Line 312: "The Practice of Pharmacy Personnel toward Tuberculosis Patient Detection".

• The discussion needs more arguments/reasons instead of literature comparison.
We have added our argumentation to the general discussion and the study implications. Please kindly find the information in the revised manuscript. Line 368: "Good knowledge was associated with participants who have a pharmacist background. This finding highlights the importance of exposing pharmacy technicians to TB knowledge since they also have a role as the frontline in pharmacy." Line 370: "Unintegrated pharmaceutical services in TB programs and a lack of public-private collaboration with community pharmacies were reported in Indonesia. [3,4] This potentially leads to the limited exposure of community pharmacy personnel to the educational program from the national TB programme." Line 382: "We finally analysed that experience in following TB training is essential for improving TB knowledge, forming a positive attitude, and performing activities on TBPD. Our study thus emphasises the importance of TB training to gain TB knowledge and a positive attitude. The knowledge and attitude can then generate action for TBPD." Line 404: "Furthermore, we assessed that the time available to perform TBPD activities is essential. Our study emphasised the need for a workload assessment for the community pharmacies to be able to conduct TBPD activities."

• The general conclusion should address the hypothesis, which is lacking. Moreover, the second sentence is not true with respect to the TB training-related information. I strongly suggest rephrasing the conclusion.
Thank you for your suggestion. We defined the conclusion based on the study objective. According to the study objective, we analysed the KAP in TB patient detection (TBPD) of the community pharmacy personnel, aiming to find innovative strategies to engage community pharmacies in TBPD activities. To follow up on your comment, please kindly find our revision in the revised manuscript. Line 456: "Our study showed that most Indonesian pharmacists and pharmacy technicians have a good knowledge and attitude related to TBPD. However, their knowledge and attitude do not align with their actual TBPD practice. We identified that a TB educational program is essential in improving KAP among pharmacy personnel for TBPD activities. A systematic and comprehensive assessment is needed to develop an effective strategy for engaging the community pharmacy in sustainable TBPD activities."

• It is advised to re-confirm the references and remove repetitions such as refs. 21 and 29.
Author response: Thank you for your compliment.

• Describe the context of the four settings: are these urban or rural areas, or mixed? Perhaps clarify the meaning of a peripheral vs. central area.
We included the provinces' capitals as the central cities and the non-capital of the province as the peripheral area. To clarify, we have added the information in the revised manuscript. Please kindly find the revision: Line 132: "A cross-sectional study was performed in four areas in Indonesia's western, central, and eastern parts. Three areas are capitals of provinces, while one area is a peripheral area outside the province capital."

• How were the pharmacies/participants selected? The sample size calculations do not explain participant/pharmacy selection. Was it convenience sampling, or locality focused?
We used a locality focus for the sample collection and collaborated with the local professional organisations of pharmacists and pharmacy technicians at the study sites. Since the database of pharmacy personnel is comprehensively managed by the two professional organisations, we collaborated with these two local professional organisations for data collection. The responsible data collectors from the professional organisations identified and distributed the questionnaire to the potential participants based on participant eligibility, the database, networking, and geographical distribution at the district level. All the collected data were managed and analysed for eligibility of the data and achievement of the sample size by the researchers at each study site. Considering your valuable comment, please kindly find the additional information in the revised manuscript, Line 203: "Considering that pharmacists and pharmacy technicians operate pharmacies, two responsible persons for data collection were appointed at each study site, i.e. a data collector for the pharmacists and one for the pharmacy technicians. We collaborated with the two local professional organisations for data collection. The responsible data collector identified and distributed the questionnaire to the potential participants based on participant eligibility, the database, networking, and geographical distribution at the district level. In light of the COVID-19 pandemic situation, we distributed the questionnaire using online and offline approaches. All the collected data were managed and analysed for eligibility of the data and achievement of the sample size by the researchers at each study site (ISP, Kh, MAB, MNK)."

• Gender seems to be a factor, with male gender associated with better practice.
Still, females comprised 80% of the sample. Were males more likely to be pharmacists (vs. lower-cadre professionals)? Does this explain the finding associating males with better practices?
Thank you for your critical comment. We have added a subgroup analysis for gender in the revised manuscript. The subgroup analysis explained that males have a more positive attitude and provide more drug consultation services than females. Please kindly find our subgroup analysis in Supplementary File 5. We have also added a new discussion in the revised manuscript. Line 391: "We identified that the proportion of females (80.6%) is higher than males (19.4%) in our study. It is in line with national data showing that females represent the majority of pharmaceutical personnel in Indonesia (80.6%) [13]. Although the proportion of females is higher than males, our study identified that males are more likely to perform TBPD practices. Our subgroup analysis explained that males have a more positive attitude towards TBPD and provide more drug consultation services than females (see Supplementary File 5). The positive attitude may drive them to provide the drug consultation service and lead them to perform TBPD activities as well. It can be explained that providing drug consultation services will give them more opportunities to meet patients directly, leading to the TBPD activities."

• Even if the term "TB cases" was used in the data collection tools, in the text of the paper this term should be substituted with "TB patients" or "people with TB". Likewise, "suspected TB patients" should be replaced with "presumptive patients" or the like.
Thank you for your suggestion. We have revised the terms accordingly.

Reviewer 3: Dr. Kedar Mehta, GMERS Medical College Gotri, Vadodara

• I congratulate the authors for selecting and publishing this novel and interesting topic. However, there are major concerns in the study.
Authors' response: Thank you for your compliment.

• The title is misleading. This is just a KAP study among pharmacists, which is not clearly mentioned in the title.
We have revised the title to "Knowledge, Attitude, and Practice of the Community Pharmacy Personnel in Tuberculosis Patient Detection: A Multicentre Cross-Sectional Study in a High-Burden Tuberculosis Setting". Please kindly find the revision in the revised manuscript.

• In the abstract, the methods section is poorly written. For example, "Descriptive and regression analyses were used for the analyses." It is not clear which analyses?
Thank you for your correction. We have revised the text accordingly. Please kindly find the revision in the revised manuscript. Line 47: "Descriptive analysis was used to assess participant characteristics and their KAP, while multivariable regression analyses were used to analyse factors associated with the KAP on TBPD."

• The sample size and sampling process are not explained clearly. How did you select 979 pharmacies? Then how did you select 1129 pharmacy personnel from these pharmacies? The sampling method is missing.
We used a locality focus for the sample collection and collaborated with the local professional organisations of pharmacists and pharmacy technicians at the study sites. Since the database of pharmacy personnel is comprehensively managed by the two professional organisations, we collaborated with these two local professional organisations for data collection.
The responsible data collectors from the professional organisations identified and distributed the questionnaire to the potential participants based on participant eligibility, the database, networking, and geographical distribution at the district level. All the collected data were managed and analysed for eligibility of the data and achievement of the sample size by the researchers at each study site. Considering your valuable comment, please kindly find the additional information in the revised manuscript: Line 203: "Considering that pharmacists and pharmacy technicians operate pharmacies, two responsible persons for data collection were appointed at each study site, i.e. a data collector for the pharmacists and one for the pharmacy technicians. We collaborated with the two local professional organisations for data collection. The responsible data collector identified and distributed the questionnaire to the potential participants based on participant eligibility, the database, networking, and geographical distribution at the district level. In light of the COVID-19 pandemic situation, we distributed the questionnaire using online and offline approaches. All the collected data were managed and analysed for eligibility of the data and achievement of the sample size by the researchers at each study site (ISP, Kh, MAB, MNK)."

• The statistical analysis is not robust.
Thank you for your comment. We have attempted to add to and revise the manuscript accordingly. Please kindly find the revised manuscript.

• In Table 2, it is mentioned "Tuberculosis (TB) is caused by virus"? What was the question asked to pharmacists about TB pathogenesis?
Thank you for your correction. Knowledge about the pathogen is fundamental to managing TB patients. Hence, the type of pathogen should be known by the pharmacy personnel. Considering your comment, we have revised the term based on our purpose: we have changed the term "pathogenesis" to "pathogen". Please kindly find our revised manuscript in Table 2 and line 287. Line 166: "We assessed TB knowledge related to activities in TB detection, i.e., TB pathogen, transmission, symptoms, risk population, diagnosis, and medication."

• Supplementary files 4 and 5 are the main outcomes of the study, so they need to be explained and discussed at length. All factors have to be identified, and their further role can be discussed.
We have added results and discussion regarding Supplementary Files 4 and 5 in the revised manuscript. This finding highlights the importance of exposing pharmacy technicians to TB knowledge since they also have a role as the frontline in pharmacy. Unintegrated pharmaceutical services in TB programs and a lack of public-private collaboration with community pharmacies were reported in Indonesia. [3,4] This potentially leads to the limited exposure of community pharmacy personnel to the educational program from the national TB programme. Line 384: "The knowledge and attitude can then generate action for TBPD. It is in line with the knowledge, attitude, and practice (KAP) theory that states that the changes in human behaviour are divided into three successive processes, i.e., knowledge acquisition, the generation of attitudes, and the formation of behaviour [14]. In the health belief model, knowledge plays a key role in generating action, and then belief and attitude drive behaviour change [15]." Line 397: "The positive attitude may drive them to provide the drug consultation service and lead them to perform TBPD activities as well.
It can be explained that providing drug consultation services will give them more opportunities to meet patients directly, leading to the TBPD activities."

• The conclusion is not based on the study findings: "Community pharmacies have potential roles in supporting TB case detection considering the current basic available knowledge and positive attitude".
Thank you for your suggestion. We defined the conclusion based on the study objective. According to the study objective, we analysed the KAP in TB patient detection (TBPD) of the community pharmacy personnel, aiming to find innovative strategies to engage community pharmacies in TBPD activities. To follow up on your comment, please kindly find our revision. Line 456: "Our study showed that most Indonesian pharmacists and pharmacy technicians have a good knowledge and attitude related to TBPD. However, their knowledge and attitude do not align with their actual TBPD practice. We identified that a TB educational program is essential in improving KAP among pharmacy personnel for TBPD activities. A systematic and comprehensive assessment is needed to develop an effective strategy for engaging the community pharmacy in sustainable TBPD activities."

Reviewer 4: Dr. Carl Lombard, South African Medical Research Council

• Page 7: It seems that the study had been completed (end of 2021) before the publication of this protocol. This has to be clarified or updated.
Thank you for your comment. Since a protocol is needed to provide better study planning and guidance for the research team, we developed the protocol at the beginning of the study. However, we did not publish it, considering the timeframe of the study. To clarify this information, please kindly find the additional information in the revised manuscript: Line 235: "An unpublished protocol was developed prior to the study to provide better study planning and guidance for the research team."

• Page 13: The precision of estimates linked to the expected number of providers should be stated, similar to the sample size outline for participants. Some clustering effect has to be considered.
Thank you for your valuable comment. We have now included the expected number of providers and its estimate in the study size and participant characteristics outline (line 256). Our multivariable analyses have analysed all the participants' characteristics against the defined study outcomes. However, we agree that a clustering analysis should be considered to sharpen the results. Hence, we have added a subgroup analysis for variables with significant values in the multivariable analysis. We performed a gender analysis to explain why the male gender is more likely to perform TBPD activities than the female gender despite the male proportion being lower than the female. The effect of the other participant characteristics has been described in the multivariable analysis and the discussion. Please kindly find our additional information in the revised manuscript, Supplementary File 5 and Line 391: "We identified that the proportion of females (80.6%) is higher than males (19.4%) in our study. It is in line with national data showing that females represent the majority of pharmaceutical personnel in Indonesia (80.6%) [13]. Although the proportion of females is higher than males, our study identified that males are more likely to perform TBPD practices. Our subgroup analysis explained that males have a more positive attitude towards TBPD and provide more drug consultation services than females (see Supplementary File 5).
The positive attitude may drive them to provide the drug consultation service and lead them to perform TBPD activities as well. It can be explained that providing drug consultation services will give them more opportunities to meet patients directly, leading to the TBPD activities."

• Page 14, Estimates: Need to state that estimates will be done with 95% confidence intervals.
Thank you for your comment. We have included the information in the methods section. Please kindly find it in the revised manuscript. Line 232: "All significance levels were set at 5%, and 95% confidence intervals (CI) were presented."

Reviewer 5: Dr. KK Dobbin, University of Georgia

• Overall this study was well done and nicely reported, and provides preliminary work for important future research on pharmacy interventions in TB in Indonesia.
Authors' response: Thank you for your compliment.

Statistical issues:
• At several points p-values are reported as p<0.00. We all know what you mean, but this is not good reporting. p<0.01 or p<0.001, etc. is better.
Thank you for the suggestion. We have revised accordingly. Please kindly find it in the revised manuscript.

• The high proportion of females in Table 1 was surprising. A little discussion would help non-experts understand this.
The high proportion of females is due to the included subjects being dominated by the female gender. It is in line with the national report that pharmacy personnel nationally are dominated by the female gender, as many as 63,699 people, or 80.6% [13]. We have added a new discussion in the revised manuscript. Line 391: "We identified that the proportion of females (80.6%) is higher than males (19.4%) in our study. It is in line with national data showing that females represent the majority of pharmaceutical personnel in Indonesia (80.6%) [13]. Although the proportion of females is higher than males, our study identified that males are more likely to perform TBPD practices. Our subgroup analysis explained that males have a more positive attitude towards TBPD and provide more drug consultation services than females (see Supplementary File 5). The positive attitude may drive them to provide the drug consultation service and lead them to perform TBPD activities as well. It can be explained that providing drug consultation services will give them more opportunities to meet patients directly, leading to the TBPD activities."

• Supplementary Tables 4 and 5: Please state on the tables whether these are univariate or multivariable fits.
We have added new information to each title: "Multivariable regression analysis…"

• Supplementary Tables 4 and 5: What is the amount of missing data for these analyses, and how was it handled in the analysis?
As described in Figure 1 (flow diagram of included participants), we do not have any missing data, since the data collection instrument and process were designed for all participants to complete all the information needed. So, there are no techniques to report for handling missing data.

• The English is overall good but could use another go-through for grammar.
Thank you for your suggestion. We have read and reviewed it carefully. An English expert also reviewed the manuscript. Please kindly find our revision in the revised manuscript.
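As an aside for readers unfamiliar with the analysis discussed throughout this correspondence, the sketch below illustrates a multivariable logistic regression of a binary "performs TBPD practice" outcome on the kinds of factors the study reports (TB training, drug consultation service, gender, attitude, pharmacy location). The variable names and the synthetic data are illustrative assumptions for exposition only, not the authors' actual dataset, covariate set, or code.

```python
# Minimal sketch of a multivariable logistic regression for a binary
# "performs TBPD practice" outcome. Synthetic, illustrative data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "tb_training": rng.integers(0, 2, n),        # 1 = has TB training
    "consult_service": rng.integers(0, 2, n),    # 1 = offers drug consultation
    "male": rng.integers(0, 2, n),               # 1 = male
    "positive_attitude": rng.integers(0, 2, n),  # 1 = positive attitude to TBPD
    "central_city": rng.integers(0, 2, n),       # 1 = pharmacy in central city
})
# Simulate an outcome loosely consistent with the reported associations.
logit_p = (-1.5 + 1.2 * df.tb_training + 1.0 * df.consult_service
           + 0.5 * df.male + 0.8 * df.positive_attitude
           + 0.4 * df.central_city)
df["tbpd_practice"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit(
    "tbpd_practice ~ tb_training + consult_service + male"
    " + positive_attitude + central_city", data=df
).fit(disp=False)
# Odds ratios with 95% confidence intervals, the usual reporting format.
or_table = np.exp(model.conf_int())
or_table["OR"] = np.exp(model.params)
print(or_table)
```

Reporting odds ratios with 95% confidence intervals, as the authors agreed to do (Line 232 of their revision), is what the last three lines compute.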
2022-07-06T06:16:58.629Z
2022-07-01T00:00:00.000
{ "year": 2022, "sha1": "3e194eec30d660c6eb2a1d28ae945f8c4e29377c", "oa_license": "CCBYNC", "oa_url": "https://bmjopen.bmj.com/content/bmjopen/12/7/e060078.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "729fa4133bcca94bb17d17cc5fd8e1c4f12a18ee", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
251661730
pes2o/s2orc
v3-fos-license
Neuromodulation devices for heart failure

Abstract

Autonomic imbalance with a sympathetic dominance is acknowledged to be a critical determinant of the pathophysiology of chronic heart failure with reduced ejection fraction (HFrEF), regardless of the etiology. Consequently, therapeutic interventions directly targeting the cardiac autonomic nervous system, generally referred to as neuromodulation strategies, have gained increasing interest and have been intensively studied at both the pre-clinical and the clinical level. This review will focus on device-based neuromodulation in the setting of HFrEF. It will first provide some general principles about electrical neuromodulation and discuss specifically the complex issue of dose-response with this therapeutic approach. The paper will thereafter summarize the rationale, the pre-clinical and the clinical data, as well as the future perspectives of the three most studied forms of device-based neuromodulation in HFrEF. These include cervical vagal nerve stimulation (cVNS), baroreflex activation therapy (BAT), and spinal cord stimulation (SCS). BAT has been approved by the Food and Drug Administration for use in patients with HFrEF, while the other two approaches are still considered investigational; VNS is currently being investigated in a large phase III study.

Introduction

In the last decades, a consistent body of pre-clinical as well as clinical evidence clearly demonstrated that sympathetic overactivation, always combined with different degrees of vagal withdrawal, plays a major role in the pathophysiology of chronic heart failure (HF) with reduced ejection fraction (HFrEF), regardless of the aetiology. 1 As a logical consequence, unravelling invasive and, even better, non-invasive markers of this autonomic imbalance, as well as therapeutic interventions aimed at reducing and potentially correcting it, have become a main goal in experimental and clinical HFrEF research. Interventions directly targeting the autonomic nervous system (ANS) are generally referred to as neuromodulation or autonomic regulation therapy (ART). ART can be performed either using pharmacological or surgical interventions that directly target the ANS, or using electrical devices aimed at modulating the autonomic balance by means of the direct delivery of electrical energy to affect neural processes (neuronal stimulation or inhibition, or a combination of both). The possibility of treating diseases through electrical neuromodulation has led to a new area of therapeutic treatment, known as electroceuticals or bio-electronic medicine 2 (Figure 1). This review will focus on the three most studied device-based ART modalities in the setting of HFrEF: cervical vagal nerve stimulation (cVNS), baroreflex activation therapy (BAT), and spinal cord stimulation (SCS).

Principles of electrical neuromodulation: the dose-response issue

The two essential components of an electrical neuromodulation system are the generator of the electrical current and the electrode (or the electrodes) that delivers the current to the target. A first important distinction is between open-loop and closed-loop systems.
In open-loop systems, the stimulation protocol is intrinsically independent of the evoked biological response and/or the biological demand, although some kind of 'adaptation' can be implemented, for instance by progressively modifying some parameters over time based on pre-acquired knowledge of the physiological responses to electrical neuronal stimulation and/or by programming a pre-defined response to an external input. In closed-loop systems, at least one biomarker is continuously monitored, and algorithms can be implemented to decide the timing (when) and the strength (how much) of the electrical stimulation, while monitoring the marker of interest. The latter are conceived to mimic, albeit with an obviously minor degree of integration, the principle of functioning of an animal or human feedback neuronal network, including biomarkers, sensors, and data-processing algorithms. Conventional biomarkers for closed-loop neuromodulation include electrical neural signals and non-neuronal biomarkers such as electrocardiography (ECG). Recently, closed-loop implantable therapeutic neuromodulation systems based on neurochemical monitoring have also been developed outside the cardiovascular arena. 3

Table 1 Parameters that can be modified in the setting of electrical neuromodulation
Electrode and current-related parameters: electrode and waveform configuration; current amplitude, frequency, and duty cycle (duration of the on/off cycles).
Stimulation modality-related parameters: right vs. left vs. bilateral stimulation; bidirectional efferent and afferent stimulation (technically easier) vs. preferential efferent or preferential afferent stimulation (technically more complex); continuous stimulation vs. respiratory and/or pulse-synchronous stimulation; with pulse-synchronous stimulation, delay from the R-wave (or other trigger) and number of pulses per cycle; open-loop vs. closed-loop stimulation; titration protocols.
Safety parameters (for closed-loop systems): limits for stimulation withdrawal (e.g. low heart rate).

The concept of 'dose' for electrical therapies is by far more complex than for pharmacological therapies, since there are more than 10 different parameters that can be modified simultaneously (also depending on the specific type of stimulation), with hundreds of possible combinations. Table 1 summarizes the most relevant. For simplicity purposes, these parameters can be divided into electrode and current-related parameters, stimulation modality-related parameters, and safety parameters (namely parameters used in closed-loop systems to actively and continuously modify the stimulation modality according to the response). Such complexity reflects the highly integrated and extremely dynamic behaviour of the therapeutic target, namely the ANS, both in physiological and in pathological conditions, which is still far from being completely unravelled. 5 A huge number of pre-clinical and clinical studies with hundreds of subjects would be required to address the issue of the most suitable stimulation protocol in different settings, with obvious ethical concerns. Computational model strategies combined with artificial intelligence techniques are expected to complement the classical translational approach based on animal models and provide an important drive in the clinical implementation of electrical neuromodulation in the near future. 6 Indeed, the final biological response to electrical neuromodulation reflects our capability to comprehend, and to modify, the outcome of advanced mathematical operations performed by complex neuronal networks.
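To make the closed-loop logic described above concrete, the sketch below shows a minimal stimulation controller that monitors heart rate, titrates the current amplitude toward a target heart-rate reduction, and withdraws stimulation below a low-heart-rate safety boundary (one of the safety parameters listed in Table 1). All numeric thresholds, the proportional titration rule, and the class interface are illustrative assumptions for exposition, not the algorithm of any actual device.

```python
# Minimal sketch of a closed-loop neuromodulation controller.
# Biomarker: heart rate (HR). Actuator: stimulation current amplitude.
# All thresholds and gains are illustrative, not device parameters.
from dataclasses import dataclass

@dataclass
class ClosedLoopStimulator:
    baseline_hr: float               # b.p.m., measured before therapy
    target_reduction: float = 0.10   # aim for a 10% HR reduction
    safety_hr: float = 55.0          # withdraw stimulation below this HR
    amplitude_ma: float = 0.5        # current amplitude (mA)
    max_amplitude_ma: float = 4.0
    gain: float = 0.05               # titration gain (mA per b.p.m. of error)

    def update(self, measured_hr: float) -> float:
        """Return the amplitude to deliver during the next on-cycle."""
        if measured_hr < self.safety_hr:
            return 0.0  # safety boundary: temporary stimulation withdrawal
        target_hr = self.baseline_hr * (1.0 - self.target_reduction)
        error = measured_hr - target_hr  # positive: not enough HR reduction
        self.amplitude_ma += self.gain * error
        self.amplitude_ma = min(max(self.amplitude_ma, 0.0),
                                self.max_amplitude_ma)
        return self.amplitude_ma

stim = ClosedLoopStimulator(baseline_hr=80.0)
for hr in (80.0, 78.0, 75.0, 72.0, 54.0):
    print(f"HR {hr:5.1f} b.p.m. -> amplitude {stim.update(hr):.2f} mA")
```

An open-loop device, by contrast, would deliver a fixed amplitude and duty cycle regardless of the measured response, with at most a pre-programmed adaptation schedule.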
These mathematical operations performed by complex neuronal networks can be simulated through the implementation of artificial representations of neuronal networks and of their interactions with neuromodulation technologies. This kind of approach has already been implemented, for instance, to unravel the interactions between SCS and the dynamics of spinal circuits for the design of the most suitable stimulation protocol to reduce chronic pain 7 and to improve motor control in people with spinal cord injury. 8 Computational modelling has also been successfully used in the field of deep brain stimulation. 9 At present, clinical use and technological innovations, such as novel waveforms, advanced stimulator capabilities, and lead designs, largely outrun our scientific understanding of the dose-response relationship of this therapeutic option, as will become clear in the next sections.

Cervical vagal nerve stimulation

The vagus nerve (VN) contains approximately 80% afferent and 20% efferent neuronal projections. The latter are pre-ganglionic fibres directed towards post-ganglionic neuronal stations embedded within several peripheral organs in addition to the heart, including upper and lower respiratory organs, gastrointestinal organs, and ovaries. The spectrum of VN fibres, classified according to diameter and conduction velocity, ranges from Aα, the largest and fastest, to unmyelinated C-fibres, the smallest and slowest. Cardiac vagal control in mammals relies on B-type (efferent) and C-type fibres (afferent and efferent). Notably, the distribution of the right and left VN fibres on postganglionic parasympathetic neurons located within epicardial fat pads is not symmetrical: the right VN has a larger influence on sinus node activity, whereas the left VN has a predominant control over atrioventricular node function. Both affect atrial and ventricular cardiomyocytes. Most pre-clinical evidence suggests an organotopic or function-specific organization of neural fibres within the VN. 10 Several factors affect neuronal fibre engagement during electrical stimulation, including distance from the stimulation electrode, local electric field strength, and fibre diameter, with A-fibres being recruited first and C-fibres last; a toy illustration of this recruitment ordering is sketched below. Accordingly, the possibility of achieving a selective VNS to limit off-target side effects while increasing the effective dose to the therapeutic target (e.g. cardiac fibres) has been extensively studied in recent years. 11 Several key paradigms have been developed, including spatial selectivity, fibre selectivity, anodal block, neural titration, and kilohertz electrical stimulation block, as well as various stimulation pulse parameters and electrode array geometries. 12 Recently, direct neuronal recordings of VN activity in humans using ultrasound-guided microneurography have been performed. 13 Historically, cVNS was first studied as an antiarrhythmic intervention. More than 100 years after the landmark observation of Einbrodt on the protective effect of VNS against the deadly effects of direct electrical current delivery to the heart, several studies in anaesthetized animals described the antiarrhythmic effect of cVNS during acute myocardial ischaemia. [14][15][16][17] The conclusive demonstration came in 1991 from a conscious canine model of sudden death during acute myocardial ischaemia; approximately 50% of the anti-fibrillatory effect of right cVNS was related to heart rate (HR) reduction, 18 suggesting the existence of other protective pathways.
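As the rough illustration promised above, the following sketch uses a deliberately simplified toy rule in which the activation threshold grows with electrode distance and falls with fibre diameter. This is an expository assumption, not a validated biophysical model; real recruitment depends on detailed membrane dynamics and field geometry.

```python
# Toy illustration of fibre recruitment order during electrical stimulation.
# Assumed rule: threshold current rises with distance from the electrode
# and falls with fibre diameter, so large A-fibres recruit first and
# small unmyelinated C-fibres last. Purely illustrative numbers.

FIBRES = {  # approximate diameters in micrometres, illustrative values
    "A (myelinated, largest)": 10.0,
    "B (pre-ganglionic efferent)": 3.0,
    "C (unmyelinated, smallest)": 1.0,
}

def threshold_ma(diameter_um: float, distance_mm: float, k: float = 0.5) -> float:
    """Toy activation threshold: proportional to distance^2 / diameter."""
    return k * distance_mm ** 2 / diameter_um

distance = 2.0  # mm from the cuff electrode, assumed
for name, diameter in FIBRES.items():
    print(f"{name:32s} threshold ~ {threshold_ma(diameter, distance):.2f} mA")
# A-fibres activate at the lowest current and C-fibres at the highest,
# matching the recruitment order described in the text.
```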
Vagal nerve stimulation exerts anti-apoptotic effects through the same protective pathways as ischaemic pre-conditioning, [19][20][21] and anti-inflammatory effects through the cholinergic anti-inflammatory pathway, a neural mechanism inhibiting pro-inflammatory cytokine release through the activation of cholinergic nicotinic receptors on macrophages and other immunocompetent cells. This mechanism was first described by Tracey at the hepatic level, 22 and then confirmed at the cardiac level, where nicotinic receptors were proved to be crucial for the HR-independent protective effect of cVNS leading to infarct size reduction in ischaemia/reperfusion rat models. 23 The first experimental data on the efficacy of chronic cVNS in HFrEF were reported in 2004. 24 Rats with a previous 14-day-old large anterior myocardial infarction (MI) leading to HFrEF were randomized to sham stimulation or active cVNS (10 s on/50 s off), at 20 Hz, with 0.2 ms pulses. A 20-30 b.p.m. HR reduction (starting value around 360 b.p.m.) was used as the target to adjust cVNS stimulation amplitude. A 6-week therapy duration significantly improved LV function, biventricular weight, norepinephrine and B-type natriuretic peptide (BNP) levels, and survival compared with sham-operated animals. Subsequently, between 2005 and 2013, the effects of right cVNS were evaluated in a canine model of chronic HFrEF induced by coronary microembolizations. In the first two studies, 25 right cVNS was delivered by a closed-loop system, namely the CardioFit system, which uses an intracardiac sensing lead to synchronize the stimulation to the cardiac cycle and to modulate VNS intensity, targeted at a 10% HR reduction during stimulation. Compared with sham operation, 3 months of cVNS had a favourable effect on LV haemodynamics, tumour necrosis factor-α and interleukin-6 levels (reduced), and nitric oxide synthase expression and Connexin 43 expression (increased), without affecting nerve structure. These favourable effects were additional to those achieved with metoprolol alone. In the third study, 26 the same group proved that the beneficial effects of cVNS were still significant when the stimulation was performed using a different, open-loop, cVNS system (Boston Scientific Corporation), with no acute impact on HR. Finally, cVNS (Cyberonics system, at 20 Hz) was also studied, in comparison with sham controls, in a canine model of 8-week high-rate ventricular pacing-induced HF, confirming previous findings. Vagal nerve stimulation intensity was adjusted before the beginning of pacing to reduce HR by ∼20%. Chronic left-sided (to avoid effects on HR) cVNS use was first reported in humans for the management of drug-refractory epilepsy, 27 obtaining Food and Drug Administration (FDA) approval in 1997, and then for resistant depression, 28 with hundreds of thousands of devices implanted all over the world and a very good safety profile. Table 2 lists the main clinical studies of cVNS (four published, one ongoing), BAT, and SCS in HFrEF, showing their main inclusion/exclusion criteria, their objectives, and their stimulation protocols. Table 3 shows patients' characteristics and 6-month results of the same studies.
The first human single-centre study of right cVNS in HFrEF showed favourable results 30 and was extended to a multi-centre Phase II study, the European Multicentre CardioFit Study, including 32 patients with advanced HFrEF [New York Heart Association (NYHA) Classes II-IV, left ventricular ejection fraction (LVEF) ≤ 35%], and using the CardioFit 5000 device 31 already tested in animals, including an intracardiac sensing lead to provide a pulse-synchronous (1-3 pulses per cardiac cycle) cVNS. The electrode design aimed to achieve a preferential stimulation of efferent fibres by means of anodal block, and to minimize the off-target recruitment of A-type fibres by means of a multi-contact cuff design. Vagal nerve stimulation was delivered at 1-3 Hz, with 10 s on/30 s off, and an HR safety boundary (leading to temporary VNS stop) that was initially set at 55 b.p.m. Vagal nerve stimulation intensity was up-titrated over five to six visits to reach a mean level of 4.1 ± 1.2 mA; a further increase was mostly limited by hoarseness and jaw pain. Notably, the acute on-phase HR lowering was modest (around 1.5 b.p.m.) but consistent across patients, with few exceptions showing as much as a 10 b.p.m. acute decrease. At 6 months, HR from resting ECG decreased from 82 ± 13 to 76 ± 13 b.p.m., while data from 24-h Holter ECG recording showed no changes in the mean HR, paired with a significant increase in heart rate variability (HRV) as assessed by pNN50. Notably, changes in time-domain HRV indices not associated with changes in mean HR are strongly suggestive of an improved cardiac vagal output, as opposed to changes in both parameters. 32,33 Accordingly, 6-month efficacy data showed a significant improvement in quality of life (QoL) scores, functional capacity, and LV volumes and function (LVEF from 22 ± 7 to 29 ± 8%), which were maintained at 1- and 2-year follow-up, with no major safety concerns. 34 The Autonomic Regulation Therapy for the Improvement of Left Ventricular Function and Heart Failure Symptoms (ANTHEM-HF) study 35 compared right (n = 29) and left (n = 31) cVNS performed using the open-loop Cyberonics system, in a multi-centre, open-label, Phase II, randomized clinical trial with no control arm, performed in India and enrolling patients in NYHA Classes II and III with LVEF ≤ 40%. None of the subjects had an implantable cardioverter defibrillator (ICD) or cardiac resynchronization therapy (CRT). The stimulation system had already been approved for drug-refractory epilepsy. Vagal nerve stimulation was delivered at 10 Hz, with a 14 s on/66 s off stimulation protocol, using a vagal electrode not designed for asymmetric stimulation and reaching a mean output current of 2.0 ± 0.6 mA at the end of the 10-week up-titration period. At 6 months, a significant (+4.5% in absolute values) increase in LVEF was observed in the entire study population, combined with a non-significant decrease of left ventricular end-systolic volume (LVESV; co-primary endpoints) and a significant improvement in QoL, the 6-min walking test (6MWT), and NYHA class; these effects were maintained at 12 months, 36 and until 42 months. 37 The benefit tended to be greater for right cVNS compared with left cVNS at 6 months, while no side differences were observed thereafter, although with the limitation of a smaller sample size.
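The on/off protocols quoted above translate into quite different effective duty cycles and pulse counts. The short calculation below works these out for the CardioFit (10 s on/30 s off at 1-3 Hz) and ANTHEM-HF (14 s on/66 s off at 10 Hz) settings reported in the text; the helper function is just illustrative arithmetic, and note that the CardioFit pulse rate is only an upper-bound approximation since its stimulation is pulse-synchronous with the cardiac cycle.

```python
# Duty cycle and pulses per on-phase for the cVNS protocols cited above.
def duty_cycle(on_s: float, off_s: float) -> float:
    """Fraction of time the stimulator is actively delivering pulses."""
    return on_s / (on_s + off_s)

protocols = {
    # name: (on seconds, off seconds, pulse frequency in Hz)
    "CardioFit (European study)": (10.0, 30.0, 3.0),  # upper end of 1-3 Hz
    "ANTHEM-HF (Cyberonics)":     (14.0, 66.0, 10.0),
}
for name, (on_s, off_s, freq_hz) in protocols.items():
    dc = duty_cycle(on_s, off_s)
    pulses = freq_hz * on_s  # pulses delivered during each on-phase
    print(f"{name}: duty cycle {dc:.1%}, ~{pulses:.0f} pulses per on-phase")
# CardioFit: 25% duty cycle; ANTHEM-HF: 17.5% duty cycle.
```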
The long-lasting protective effects of cVNS were confirmed by the analysis of markers of autonomic tone (HRV) and reflexes (HR turbulence), of cardiac electrical stability (T-wave alternans, R-wave and T-wave heterogeneity), and by assessing the burden of non-sustained ventricular tachycardia episodes. 38 Interestingly, the beneficial effects of cVNS in ANTHEM-HF were found to be independent of baseline N-terminal pro-BNP (NT-proBNP) levels. 39 The Neuronal Cardiac Therapy for Heart Failure (NECTAR-HF) study was a Phase II, multi-centre, sham-controlled study enrolling 96 patients (NYHA Classes II and III, LVEF ≤ 35%) randomized 2:1 to active cVNS or sham treatment for the first 6 months; subsequently, cVNS was turned on in all patients. 40 Most (76%) had an ICD, 9% a CRT. The stimulation system (Boston Scientific, MN, USA) provided an open-loop cVNS aiming at both central and peripheral targets, obtained through a rechargeable generator already approved for chronic pain therapy (Precision™) and an investigational helical bipolar vagal electrode relatively similar to that used in ANTHEM-HF. Stimulation was delivered at 20 Hz, at the relatively low mean amplitude of 1.4 ± 0.8 mA. Notably, the maximum tolerated current amplitude in vivo is inversely related to the pulse frequency, 41 explaining why in most of the patients in NECTAR-HF cVNS up-titration was limited by off-target effects. Left ventricular end-systolic diameter (LVESD, the primary efficacy endpoint) at 6 months was not changed, nor were LVEDD, LVESV, LVEF, peak VO2 at cardiopulmonary exercise test, and NT-proBNP levels (all additional secondary endpoints), but a significant improvement in QoL and in NYHA class was observed. These findings were substantially confirmed at 18 months (with all patients on active cVNS), except for the QoL improvement, which was no longer observed. Mean HR at 24-h Holter ECG did not change, nor did the standard deviation of the intervals between normal beats (SDNN) and the root mean square of successive normal-to-normal interval differences (RMSSD), while a slight improvement was observed in another time-domain marker of HRV, namely the standard deviation of the average normal-to-normal intervals for each 5-minute segment of a 24-hour HRV recording (SDANN). A subsequent subanalysis of the study, using tridimensional heat maps applied to 6- and 12-month 24-h Holter ECGs, was able to detect subtle VNS-evoked HR changes in only 12% of the treated patients (vs. 0% in the sham arm), 42 suggesting less efferent fibre recruitment compared with pre-clinical studies using the same device, possibly related to the lower stimulation amplitude. Yet, a positive heat-map response was not associated with any difference in conventional measures of frequency- and time-domain HR variability, further complicating the puzzle. Notably, most of the patients enrolled were able to properly guess their randomization group. The largest clinical trial of cVNS in HFrEF completed so far is the Increase of Vagal Tone in Heart Failure (INOVATE-HF) trial, a Phase III international, multi-centre, randomized trial assessing the efficacy and safety of right-sided cVNS with the CardioFit system 43 (Figure 2). A total of 707 patients (NYHA Class III, LVEF ≤40%) were enrolled, mostly in the USA, and randomized 3:2 to cVNS plus guideline-directed medical therapy (GDMT) or continuation of GDMT alone. At baseline, a slightly lower LVEF in the cVNS arm was the only difference between the two groups.
Most patients (88%) had a cardiac device, including 34% with CRT. A composite of all-cause mortality or unplanned HF hospitalization equivalent was used as the primary efficacy endpoint, while 90-day freedom from procedure- and system-related complications and the number of patients with death from any cause or complications at 12 months were the two co-primary safety endpoints. The second interim analysis led to study discontinuation for futility in December 2015, after a mean follow-up of 16 months (range: 0.1-52). Mean current amplitude was 3.9 ± 1.0 mA, with 73% of patients achieving the goal of >3.5 mA. Among the secondary endpoints, LVESV index did not change, while QoL, NYHA class, and 6MWT significantly improved with cVNS. Age, 6MWT distance at baseline, HF aetiology, diabetes, and CRT were not found to affect the primary outcome in the subgroup analysis. Yet, a post-hoc exploratory analysis of INOVATE-HF restricted to patients with no CRT, a QRS interval duration <130 ms, and a baseline ability to walk >300 m (the inclusion criteria used in CardioFit) showed a weak favourable trend towards reverse LV remodelling. Very recently, the symptomatic and functional responses to cVNS in the three completed randomized trials, namely ANTHEM-HF (overall population), INOVATE-HF, and NECTAR-HF, were compared in a post-hoc analysis. 44 SDNN, LVEF, and Minnesota Living with HF mean scores at 6 months were significantly more improved in ANTHEM-HF compared with NECTAR-HF. Patients enrolled in ANTHEM-HF also obtained a greater improvement in the 6MWT compared with those of INOVATE-HF. Finally, based on the favourable results of ANTHEM-HF, an open-label, randomized study, ANTHEM-HFrEF, is currently ongoing. 45 The study is randomizing patients with a 2:1 ratio to cVNS plus GDMT or GDMT alone, with an estimated completion date of December 2024. Stimulation is delivered using the VITARIA System (LivaNova) and according to the same stimulation principles as the ANTHEM-HF study, namely a closed-loop afferent and efferent stimulation, not pursuing acute HR changes. The rationale for this kind of stimulation has been recently explored in a conscious canine model specifically assessing the contribution of afferent vs. efferent VN activation to the acute HR responses elicited during the active phase of chronic right VNS. 46 Based on frequency-amplitude-pulse width, the authors were able to identify an operating point, defined as the neuronal fulcrum, at which the HR response was null, transitioning from positive to negative. They also proved that only when the neuronal fulcrum constraints were implemented in the setting of chronic cVNS could the circadian control of HRV be preserved. ANTHEM-HFrEF utilizes an innovative adaptive design as allowed by the new FDA breakthrough device programme: the primary outcome will be a composite of cardiovascular death or first HF hospitalization, traditionally assessed, yet the sample size determination will be performed using a Bayesian adaptive approach.

Baroreflex activation therapy

Arterial baroreceptors are stretch receptors that form a branching network in the adventitial-medial layers of the carotid sinus and the aortic arch walls. Nerve impulses from baroreceptors are tonically active; increases in blood pressure (BP) lead to an increased rate of impulse firing, increased stimulation of the nucleus tractus solitarius, and increased inhibition of the tonically active sympathetic outflow to the heart and peripheral vasculature.
Decreased mean and pulsatile BP lead to decreased nerve firing, reduced stimulation of the nucleus tractus solitarius, and reduced inhibition of sympathetic outflow, which is thus increased. These inputs from baroreceptors are continuously integrated and balanced at the central level with afferent excitatory inputs from skeletal muscle, kidney, cardiac mechanoreceptors, and chemoreceptors, which inhibit vagal outflow and enhance sympathetic output. Even in advanced HFrEF, carotid baroreflex circuits are not intrinsically malfunctioning. 47 After cardiac (and renal) damage, the autonomic balance shifts towards a sympathetic predominance due to the offset of the baroreflex control by increased afferent pathological signalling from the other receptors. The functional baroreceptor impairment can be further enhanced because of baroreceptor unloading in case of reduced cardiac output, further supporting the strong rationale for BAT in HFrEF. The BAT device components and the mechanisms of action of BAT and their effects on advanced HFrEF-associated changes in autonomic function are shown schematically in Figure 3. The best location for the BAT electrode in the carotid sinus and the efficacy of stimulation are confirmed at the time of surgery by acute stimulation showing a BP and HR drop. Chronic BAT proved to be very promising in animal models of HFrEF of different aetiologies. Zucker et al. 48 demonstrated for the first time, in a canine model of pacing-induced HFrEF, that continuous bilateral BAT (50-100 Hz, 0.5-1 ms pulses, 2.5-7.5 V, duty cycle 90%) performed using the Rheos system (CVRx, Inc., Minneapolis, MN, USA) improved survival and suppressed neurohormonal activation as assessed by plasma norepinephrine and angiotensin II levels, despite ongoing pacing for the entire study length and no differences in arterial BP, resting HR, and LV pressure. A few years later, the group of Hani Sabbah showed in a canine model of coronary microembolization-induced HFrEF (mean LVEF around 25%) that chronic bilateral BAT using the same system and parameters improved LV function and LV remodelling. It also reduced plasma norepinephrine levels, interstitial fibrosis, and cardiomyocyte hypertrophy, and normalized the expression of cardiac β1-adrenergic receptors, β-adrenergic receptor kinase, and nitric oxide synthase. 49 The first human study of chronic BAT in HFrEF was reported in 2014 as a single-centre, open-label experience, including 11 patients with advanced HF (67 ± 9 years, all in NYHA Class III, LVEF 31 ± 7%, 46% with chronic renal disease) despite optimized medical treatment, ineligible for CRT. Patients underwent unilateral BAT (right-sided in 10 patients) for 6 months using the Barostim™ neo™ system (CVRx Inc.). The decision to perform unilateral rather than bilateral BAT was largely due to safety concerns based on previous clinical experience with bilateral BAT performed using the larger stimulating electrodes of the Rheos system in the setting of arterial hypertension. 50 Also, in patients with resistant hypertension, unilateral and mostly right-sided BAT had a more profound effect on BP than bilateral or left-sided BAT. 51 In patients with HFrEF, a 30% drop in muscle sympathetic nerve activity (MSNA) was observed after only 3 months of BAT and was subsequently maintained at 6 months. Baroreflex sensitivity (BRS) also improved at 3 months, with a further increase at 6 months.
MSNA reduction and BRS increase were accompanied by a significant improvement in NYHA class, QoL scores, and 6MWT, and by a consistent LV reverse remodelling as assessed by 3D echocardiography, despite no changes in HR. These findings persisted after 21 months of follow-up and were associated with a significant reduction in hospitalizations and emergency department visits compared with the year before BAT. 52 The efficacy and safety of BAT were then evaluated in a 1:1 randomized trial including 140 patients with NYHA Class III and LVEF ≤ 35% (32% had CRT), receiving GDMT alone or GDMT plus BAT performed using the CVRx Barostim Neo System. 53 Baroreflex activation therapy significantly improved NYHA class, QoL score, 6MWT (primary efficacy endpoints), and NT-proBNP, and showed a trend toward fewer in-hospital days for HF. Notably, despite no evident changes in LVEF, BAT also significantly increased systolic BP and pulse pressure. A subsequent subanalysis of the study showed that the beneficial effects of BAT were more pronounced among patients with no CRT. 54 One proposed explanation for this phenomenon is that CRT, by improving electromechanical dyssynchrony, not only increases cardiac output but also reduces abnormal afferent sympathetic signalling from both cardiac mechanoreceptors and carotid baroreceptors, thereby reducing sympathovagal imbalance and limiting the benefits of BAT. Based on the favourable results of the previous trial, a larger randomized study including 408 patients, the Baroreflex Activation Therapy for Heart Failure (BeAT-HF) trial, was conducted, enrolling patients on GDMT for HFrEF for at least 4 weeks, with NYHA Class III or II (with a recent history of Class III), LVEF ≤ 35%, 6MWT between 150 and 400 m, and no Class I indication for CRT. 55 Patients were randomized 1:1 to receive either GDMT alone or GDMT plus unilateral BAT. The trial was designed in collaboration with the FDA breakthrough device programme and had a complex, interactive, and adaptive design. The BeAT-HF study was divided into two phases: a pre-market phase and a post-market phase. The details and status of the BeAT-HF study are presented in Figure 4. In the completed pre-market phase, the population intended for use was represented by the 264 patients fulfilling the enrolment criteria plus NT-proBNP levels below 1600 pg/mL. In this group, a significant 6-month improvement in all the components of the primary efficacy endpoint (6MWT, NT-proBNP levels, and QoL) was observed, combined with a 97% rate of freedom from major adverse neurological or cardiovascular system- or procedure-related events (primary safety endpoint). These data are summarized in Table 3 and Figure 5. No data were provided about the impact of BAT on LVEF or LV volumes. Also, when compared with the previous randomized trial of BAT, no significant changes were detected in BP or HR. The restriction to patients with lower NT-proBNP levels was based on a preliminary analysis of the first 271 subjects, enrolled without NT-proBNP level limitations, showing a lower efficacy of BAT among patients with NT-proBNP >1600 pg/mL (no significant impact either on 6MHW or on NT-proBNP level). In the intended for use population, additional benefits were observed, such as a lower need for additional drugs compared with controls (mostly ARNI), a significant improvement of the EuroQoL-5 Dimensions (EQ-5D) index, and a 51% reduction in the cardiovascular serious adverse event rate (non-HF-related events).
Based on the data of the entire BeAT-HF population, in August 2019 the FDA approved BAT for the intended use population. A subsequent subanalysis of BeAT-HF assessing potential differences in BAT response according to sex 56 showed that women (20% of the intended for use population of 264 subjects), despite a poorer baseline QoL compared with men, had similar improvements with BAT in 6-minute hall walk (6MHW), QoL, and NYHA class. Notably, women had a highly significant improvement in NT-proBNP levels (−43 vs. 7% with GDMT alone; P < 0.01), whereas only a trend towards significance was found in men; these findings suggest that women are likely to benefit from BAT at least as much as men, if not more. In addition to examining the effects of sex on the effectiveness of BAT, Figure 6 demonstrates the very consistent effects of BAT across all baseline covariates examined in the BeAT-HF study. Two cost-effectiveness analyses, one performed in Germany 58 and the other simulated based on 6-month data from …

[Figure 4: Baroreflex Activation Therapy for Heart Failure trial design. BeAT-HF was designed in collaboration with the FDA breakthrough device programme and was divided into two phases: pre-market phase and post-market phase. The details and status of the study are represented. MANCE, major adverse neurological and cardiovascular events; MLWHF, Minnesota living with heart failure; 6MHW, 6-min hall walk; PMA, premarket approval.]

[Figure 5: Primary efficacy endpoints in the BeAT-HF trial pre-market phase. There were significant improvements in quality of life score using the Minnesota living with heart failure questionnaire, exercise capacity measured using the 6-min hall walk test, New York Heart Association class, and N-terminal pro-B-type natriuretic peptide levels. MLWHF, Minnesota living with heart failure questionnaire; 6MHW, 6-min hall walk.]

[Figure 6: The effects of baroreceptor activation therapy across all baseline covariates examined in the BeAT-HF study were very consistent for all four primary endpoints: quality of life score using the Minnesota living with heart failure questionnaire, exercise capacity measured using the 6-min hall walk test, New York Heart Association class, and N-terminal pro-B-type natriuretic peptide levels. MLWHF, Minnesota living with heart failure questionnaire; 6MHW, 6-min hall walk.]

Spinal cord stimulation

One study 63 demonstrated in a rabbit model that the reduction in infarct size promoted by SCS was counteracted by α- or β-adrenergic blockade, while Olgin et al. 64 showed an increase in RR and AH intervals and a significant reduction in ventricular arrhythmias triggered by MI following SCS. These favourable effects are due to a stabilizing impact of SCS on sympathetic reflex arcs occurring at lower levels, namely within extracardiac sympathetic ganglia 65 and within the intrinsic cardiac ganglionated plexus, 66 ultimately leading to a blunted neuronal cardiac release of norepinephrine. 67 Spinal cord stimulation was the first neuromodulation strategy to be explored in humans, first in the 1960s for cancer pain relief, 68 later to treat refractory neuropathic pain syndromes 69 and refractory angina pectoris, proving to be effective and safe. 70 Notably, a reduced LV deterioration was noted during adenosine-provoked ischaemia. 71
In a canine HF model induced by anterior MI and rapid pacing, SCS delivered at the T4-T5 spinal level for 2 h three times a day significantly improved LVEF from 18 to 47% and reduced ventricular arrhythmias. 72 Similarly, in a porcine model of ischaemic HF, SCS at a higher level (T1-T2) improved LV function and decreased myocardial oxygen consumption. 73 Following these promising pre-clinical results, two small trials were performed in humans with HFrEF, showing a possible benefit of SCS. In a prospective, randomized, double-blind, crossover study, 74 nine NYHA Class III patients with LVEF ≤ 30% and an ICD (CRT-D in 6) were randomized to active or inactive SCS for 3 months, with subsequent crossover. Spinal cord stimulation was delivered using an eight-electrode epidural single lead (Octrode; St Jude Medical) at the T1-T4 level, active three times daily for 2 h, at 90% of the paraesthesia threshold (PT). Spinal cord stimulation proved to be safe, free from ICD interferences, and effective in improving symptoms; LV function and BNP levels were unchanged. Notably, most patients correctly identified their active or inactive randomization periods afterwards; this was at least partially attributed to variation of the PT over time. The Spinal Cord Stimulation for Heart Failure (SCS HEART) study enrolled, with an open design, 17 patients in NYHA Class III with LVEF 20-35% and an ICD (including 47% with CRT-D) to be implanted with dual eight-electrode thoracic SCS leads (Octrode; St Jude Medical) at the T1-T3 levels, programmed to provide SCS for 24 h/day (50 Hz, 200 μs) at 90-110% of the PT 75 ; four patients not fulfilling the study criteria served as non-treated controls. After 6 months of treatment, there were no deaths or ICD interactions, but three patients needed device reprogramming due to back or neck discomfort, two patients suffered ventricular tachyarrhythmias requiring intervention, and two were hospitalized for HF. As opposed to controls, NYHA class, QoL, peak VO2 consumption, LVEF, and LVESV significantly improved in SCS-treated patients, despite unchanged NT-proBNP levels. The largest trial on SCS in HFrEF patients is the Determining the Feasibility of Spinal Cord Neuromodulation for the Treatment of Chronic Heart Failure (DEFEAT-HF) trial, 76 which randomized, in a single-blind 3:2 fashion, 66 NYHA Class III HF patients with a mean LVEF of 29 ± 5% (76% with an ICD, none with CRT) to SCS or sham stimulation, with controls crossing over to active SCS after 6 months. An eight-electrode single lead (Medtronic Model 3777/3877) was inserted in the epidural space at the T2-T4 levels and stimulation was programmed for 12 h/day (50 Hz, 200 μs). At 6 months, LVESV index (primary endpoint), peak VO2 consumption, and NT-proBNP levels (secondary endpoints) were unchanged, as were HR, QoL, functional capacity, and ventricular arrhythmia burden. The same findings were confirmed at the 12-month extended longitudinal analysis. The discordant results of the last two trials must be interpreted considering some important differences in both the level of electrode positioning (dual eight-electrode leads at T1-T3 vs. a single eight-electrode lead at T2-T4) and the stimulation protocol (continuous stimulation vs. 12 h/day). Since the protective effects of SCS can extend for up to 1 h after SCS offset, it is likely that SCS HEART patients were more protected from cardiac stressors.
Overall considerations

Despite new devices and drugs, there is still an unmet need for additional therapeutic strategies in the management of patients with advanced HFrEF. 77 In this setting, all favourable interventions act by promoting positive ventricular reverse remodelling through several mechanisms, which always include a beneficial effect on the autonomic imbalance. The autonomic imbalance that inevitably accompanies advanced HFrEF can be directly targeted through implantable devices able to modulate cardiovascular autonomic function at different levels, with the same final aim of increasing cardiac vagal output and decreasing sympathetic output, with an effect that is additional to that already provided by beta-blockers, angiotensin-converting-enzyme inhibitors/angiotensin II receptor blockers, and mineralocorticoid receptor antagonists. These devices have been extensively studied in previous years at both the pre-clinical and clinical level, with apparently discordant findings. It is now clear that the physiological and pathological functioning of the cardiac neuraxis is extremely complex, and we are only starting to fully understand it. Electrical neuromodulation poses peculiar challenges related to the multiplicity of parameters that concur to define the therapeutic dose and to the lack of reliable means to assess proper neuronal engagement. The conduct of clinical trials is further complicated by blinding issues. Finally, our capability to properly select the patients most likely to respond to electrical neuromodulation is still very limited. For instance, BAT was more effective in patients with NT-proBNP levels below 1600 pg/mL, while cVNS efficacy was suggested to be independent of NT-proBNP levels based on the ANTHEM-HF study, and the ongoing ANTHEM-HFrEF is enrolling patients with NT-proBNP levels >800 pg/mL. At present, BAT, albeit still lacking definite survival benefit data, is the only electrical ART approved for clinical use by the FDA, while cVNS is still considered investigational. 78 A possible advantage of BAT, compared with the more complex mechanisms of cVNS and SCS, is its action on a well-defined autonomic afferent pathway which is known to be functionally depressed in HFrEF and a main contributor to cardiovascular autonomic imbalance. Afferent information is then integrated with other cardiovascular inputs at the central level to promote a positive autonomic remodelling.

Conclusion

Electrical neuromodulation has a strong pathophysiological rationale for the treatment of advanced HF with depressed left ventricular function but poses some unique challenges that were not properly addressed by the first human studies. This may help explain why the favourable effects observed in pre-clinical studies have not been confirmed in controlled clinical trials, with the only relevant exception of BAT, which is currently approved for use. A large trial of cVNS with an adaptive design and an innovative method to titrate the therapeutic dose is currently ongoing and will soon provide further insight into the effectiveness of the technique.

Funding

This paper was published as part of a supplement financially supported by CVRx, Inc.
2022-08-19T15:17:40.457Z
2022-08-17T00:00:00.000
{ "year": 2022, "sha1": "557869bccd8d3ce71ac39fd1d3e1b0ccd5dc99c3", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/eurheartjsupp/article-pdf/24/Supplement_E/E12/45470894/suac036.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "847721c6878b52998739e7a79331d614c5e730e6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
187473751
pes2o/s2orc
v3-fos-license
Simulation research of SCR optimal operation

Selective catalytic reduction (SCR) technology has a high denitration efficiency and is the main flue gas denitrification technique adopted in China. Due to the lack of effective NH3 injection methods and means, existing SCR systems suffer in actual operation from low efficiency and high ammonia escape rates, resulting in serious secondary pollution. In this paper, a three-dimensional numerical model of an SCR system is established using the computational fluid dynamics (CFD) method. The flue gas flow and NH3/NOx distribution characteristics in the SCR system are analyzed by numerical simulation. The effect of different ammonia injection parameters on the velocity and the NH3/NOx molar ratio distribution at the inlet cross section of the first catalyst layer is studied, and optimized ammonia injection parameters are provided.

Introduction

The emission of nitrogen oxides from coal combustion is one of the major sources of atmospheric pollutants. NOx can directly harm human health; it can also lead to the formation of acid rain and photochemical smog and contribute to the atmospheric greenhouse effect. One of the key technologies in the design and operation of SCR ammonia injection systems is ensuring that the mixed flue gas flow rate and the NH3/NOx concentration distribution at the inlet of the catalyst layer are as uniform as possible, thus ensuring higher denitrification efficiency and avoiding ammonia slip. Therefore, reasonable control of the NH3/NOx molar ratio and its uniformity of distribution is essential for improving the operational performance of the SCR system [1-3].

In this paper, a 660 MW supercritical unit is taken as the study object, and a three-dimensional numerical model of the SCR system is established using the CFD method. The flue gas flow and the NH3/NOx distribution characteristics are simulated, and the flow in the SCR reactor under different ammonia injection parameters is investigated, together with the effects on the uniformity and deviation of the NH3/NOx molar ratio distribution. The research results can provide guidance for the optimization of SCR system operation for similar units. The cross section of the flue at the SCR inlet is 3.2 m × 10 m, the flue cross section in the injection region is 3.2 m × 13.95 m, the distance between the two layers of the ammonia injection grid is 1.762 m, and there are 8 ammonia injection pipes on each layer. Figure 1 shows the layout of the ammonia injection pipes. The size of the upper ammonia injection pipes is φ76 mm × 2.5 mm, and the size of the lower ammonia injection pipes is φ76 mm × 1.5 mm. The monolayer catalyst size is 11.2 m × 13.95 m × 0.875 m. Three-dimensional modeling of the SCR system was performed using the pre-processing software GAMBIT. Figure 2 shows the geometric model of the flue gas denitrification SCR system.

Calculating the Mathematical Model

This paper uses FLUENT software, selecting reasonable mathematical models based on the theory of computational fluid dynamics, to carry out the optimization study of the SCR reactor.
Because of limited conditions, the flue gas conditions in the SCR system are assumed and simplified as follows [4]: (1) the inlet gas velocity, temperature, component concentrations, etc. are uniformly distributed; (2) the temperature difference between the actual system inlet and outlet is small, so the system is assumed adiabatic and isothermal; (3) the actual system has little air leakage, so air leakage is not considered; (4) the ash in the flue gas has little influence on the content of this study, so the impact of ash is not considered; (5) the flow is assumed to be steady; (6) the flue gas components and the reducing agent gas are assumed to be ideal gases; (7) the catalyst layer pressure drop is simulated with a porous medium, generating a pressure loss equivalent to the actual operating value.

The models used for the numerical simulation of the SCR process are: (1) the k-ε model with swirl correction for the turbulent flow; (2) the porous medium model for the catalyst structure; (3) the multi-component transport model for the mixing of the various species. In the solution process, the turbulent kinetic energy, the turbulent dissipation rate, and the momentum equations all adopt the first-order upwind scheme, and the pressure-velocity coupling is handled with the SIMPLEC algorithm.

Standard Deviation Criteria

One of the goals of the flow field optimization of the SCR system is to realize a uniform airflow distribution within the system [5]. At present, the most commonly used criterion is the American RMS standard, i.e., the relative root-mean-square method:

C_\nu = \frac{1}{\varphi}\sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(\nu_i-\varphi\right)^2}\times 100\%,

where C_\nu is the deviation coefficient of the velocity distribution uniformity, n is the number of measurement points on the velocity measurement section, \nu_i is the airflow velocity at measurement point i (m/s), and \varphi is the average airflow velocity of the measured cross section (m/s), given by

\varphi = \frac{1}{n}\sum_{i=1}^{n}\nu_i.

A uniform distribution of the NH3/NOx molar ratio is another objective of the flow field optimization of the SCR system. The unevenness of the ammonia injection concentration is indicated by the coefficient C_\rho:

C_\rho = \frac{1}{\xi}\sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(\rho_i-\xi\right)^2}\times 100\%,

where C_\rho is the deviation coefficient of the NH3/NOx molar ratio distribution uniformity, n is the number of measurement points on the section, \rho_i is the NH3/NOx molar ratio at measurement point i, and \xi is the average NH3/NOx molar ratio of the measured cross section,

\xi = \frac{1}{n}\sum_{i=1}^{n}\rho_i.

The mixing of NH3 and NOx is considered satisfactory if the deviation coefficient of the flue gas velocity distribution satisfies C_\nu < 15% and the deviation coefficient of the NH3/NOx molar ratio distribution satisfies C_\rho < 5%.
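As a concrete illustration of these criteria, the following short Python sketch (not part of the original paper; the measurement-point values are hypothetical) evaluates the relative RMS deviation coefficient, with the (n-1) sample variance used in the formulas above, for both the velocity and the NH3/NOx molar ratio on a cross section:

```python
import numpy as np

def deviation_coefficient(values):
    """Relative root-mean-square deviation (in %) over the measurement points
    of a cross section; used both for C_nu (velocity) and C_rho (NH3/NOx)."""
    values = np.asarray(values, dtype=float)
    mean = values.mean()
    rms = np.sqrt(np.sum((values - mean) ** 2) / (values.size - 1))
    return 100.0 * rms / mean

# Hypothetical point values sampled on the first catalyst inlet section.
velocity = [4.8, 5.1, 5.3, 4.6, 5.0, 5.4, 4.9, 5.2]          # m/s
nh3_nox = [0.82, 0.85, 0.84, 0.86, 0.83, 0.85, 0.84, 0.82]   # molar ratio

print(f"C_nu  = {deviation_coefficient(velocity):.2f}%  (target < 15%)")
print(f"C_rho = {deviation_coefficient(nh3_nox):.2f}%  (target < 5%)")
```

In practice the point values would be extracted from the CFD solution on the first catalyst inlet plane, and the two coefficients compared against the 15% and 5% thresholds.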
Simulation Results and Analysis

The internal structure of the SCR system in this paper (deflectors, rectifier grille, etc.) is well designed and fixed, and will not be adjusted. The injection rate parameters of the ammonia injection pipes also have a great influence on the uniformity of the NH3/NOx molar ratio at the inlet cross section of the first catalyst layer in the SCR reactor. Therefore, this paper mainly optimizes the ammonia injection operating parameters of the SCR system. Three programs of ammonia injection parameters were studied. In program 1, the NH3-air mixture is injected at the same rate from the upper and lower ammonia nozzles; in program 2, the mixture rate of the upper ammonia pipes is slightly lower than that of the lower layer, and the rate is the same for all nozzles within each layer; program 3 is based on program 2, with the rates of the individual ammonia nozzles in each layer further adjusted and optimized. The velocity parameters of the flue gas inlets and the ammonia injection inlets for the three programs are shown in Table 1. Under BMCR conditions, the flue gas inlet temperature is 657 K, the flue gas velocity is set at 20 m/s, and the ammonia injection inlet temperature is 293 K. Figure 3 shows the velocity profile on the central section of the SCR denitration system. It can be seen that the baffles have a great influence on the velocity field of the flue gas, especially the six guide grids at the corners, which promote the mixing of flue gas and NH3 by increasing the disturbance of the flue gas. The presence of the 10 curved baffles above the SCR reactor and the rectifier changes the flow direction of the mixed flue gas, so that the velocity of the flue gas entering the first catalyst layer remains as uniform as possible in the Z direction. Figure 4 shows the velocity at the inlet cross section of the first catalyst layer in the SCR reactor. From the calculation results, the velocity distribution uniformity deviation coefficients are C_\nu1 = 15.21% for program 1, C_\nu2 = 15.23% for program 2, and C_\nu3 = 15.24% for program 3. The uniformity of the velocity at the first catalyst layer inlet cross section thus changes little, mainly because the NH3-air mixture makes up a relatively small fraction of the gas in the SCR reactor: the ammonia injection rates are changed while the total amount of mixed flue gas does not change.

[Figure 4: Velocity at the inlet cross section of the first catalyst layer in the SCR reactor for programs 1-3 (unit: m/s).]

Figure 5 and Figure 6 show the NOx molar concentration and the NH3/NOx molar ratio at the first catalyst inlet cross section in the SCR reactor, respectively. Comparing program 1 and program 2, when the ammonia injection rate of the lower layer is slightly higher than that of the upper layer, the uniformity of the NH3/NOx molar ratio at the first catalyst inlet cross section improves significantly, while the improvement in the velocity uniformity at that cross section is not significant. This is mainly because the relative share of the ammonia gas is small, and the ammonia injection rates are changed while the total amount of injected ammonia is unchanged, so the velocity at the first catalyst layer cross section changes little. However, because the amount of ammonia injected by the lower layer increases while that of the upper layer decreases, and the lower-layer ammonia has a long mixing time with the flue gas, NH3 and NOx mix more uniformly, so the uniformity of the NH3/NOx molar ratio at the first catalyst inlet cross section is significantly improved.
[Figure 5: NOx molar concentration at the first catalyst inlet cross section in the SCR reactor (kmol/m³).]

[Figure 6: NH3/NOx molar ratio at the first catalyst inlet cross section in the SCR reactor.]

Program 3 is an optimization made on the basis of programs 1 and 2: with the ammonia injection rate of the lower layer kept slightly greater than that of the upper layer, the injection rate of each individual nozzle in each layer is adjusted. By calculation, the uniformity deviation coefficients of the NH3/NOx molar ratio are C_\rho1 = 5.42% for program 1, C_\rho2 = 3.95% for program 2, and C_\rho3 = 1.75% for program 3. The results show that the uniformity of the NH3/NOx molar ratio distribution at the entrance of the first catalyst layer in the SCR reactor is greatly improved by reasonably adjusting the parameters of the individual ammonia nozzles.

Analysis of simulation results of the mixed flue gas concentration field

In summary, this paper provides three ammonia injection programs. The numerical simulation method is used to analyze and compare the flow field and the mixed flue gas concentration field of the three programs. The results show that, by reasonably optimizing the velocity parameters of the ammonia injection pipes, the amount of ammonia injected in different regions can be controlled in a targeted manner to achieve uniform mixing of the NH3-air mixture and the flue gas, ultimately ensuring that the SCR system has a high denitration efficiency and a low NH3 escape rate.

Conclusion

This paper uses the computational fluid dynamics (CFD) method to establish a three-dimensional numerical model of the SCR system. A numerical simulation of the flue gas flow and the NH3/NOx distribution characteristics in the SCR system is conducted, and the SCR reactor is studied under different ammonia injection parameters, examining their effect on the uniformity of the inlet velocity field and of the NH3/NOx molar ratio distribution at the first catalyst layer entrance. By reasonably optimizing the velocity parameters of the ammonia injection pipes, the amount of ammonia injected in different regions can be controlled in a targeted manner, uniform mixing of the NH3-air mixture and the flue gas can be achieved, and the SCR system can ultimately be ensured to have a high denitration efficiency and a low NH3 escape rate.
2019-06-13T13:19:09.799Z
2018-10-11T00:00:00.000
{ "year": 2018, "sha1": "3fb1a7f3aaa2e30fe392e407923622d47e92f4a0", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/186/5/012030/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "1a4f802826c5c13bc0e02ea5fb13ee1e4805b123", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
57375865
pes2o/s2orc
v3-fos-license
Significance of Vesicle-Associated Membrane Protein 8 Expression in Predicting Survival in Breast Cancer

Purpose: Vesicle-associated membrane protein 8 (VAMP8) is a soluble N-ethylmaleimide-sensitive factor receptor protein that participates in autophagy by directly regulating autophagosome membrane fusion and has been reported to be involved in tumor progression. Nevertheless, the expression and prognostic value of VAMP8 in breast cancer (BC) remain unknown. This study aimed to evaluate the clinical significance and biological function of VAMP8 in BC. Methods: A total of 112 BC samples and 30 normal mammary gland samples were collected. The expression of VAMP8 was assessed in both BC tissues and normal mammary gland tissues via a two-step immunohistochemical detection method. Results: The expression of VAMP8 in BC tissues was significantly higher than that in normal breast tissues. Furthermore, increased VAMP8 expression was significantly correlated with tumor size (p=0.007), lymph node metastasis (p=0.024), and recurrence (p=0.001). Patients with high VAMP8 expression had significantly lower cumulative recurrence-free survival and overall survival (p<0.001 for both) than patients with low VAMP8 expression. In multivariate logistic regression and Cox regression analyses, lymph node metastasis and VAMP8 expression were independent prognostic factors for BC. Conclusion: VAMP8 is significantly upregulated in human BC tissues and can thus be a practical and potentially effective surrogate marker for survival in BC patients.

INTRODUCTION

Breast cancer (BC) is the leading cause of cancer mortality in women, with an incidence of 660,000 new cases per year in China. Moreover, the number of BC deaths and cases has substantially increased in the past few years. Despite advancements in prevention, diagnosis, surgical techniques, and adjuvant therapy, the overall prognosis and survival of patients with BC have not satisfactorily improved [1]. Currently, Tumor-Node-Metastasis stage, molecular type, histological grade, pathological type, and lymph node metastasis are prominent prognostic risk factors in patients with BC [2]. However, these factors alone cannot explain the differences in patient prognosis. Therefore, additional novel markers that are involved in patient prognosis need to be identified. The soluble N-ethylmaleimide-sensitive factor activating protein receptor (SNARE) family is a superfamily of small proteins containing more than 35 members in mammals, with varying sizes and complex structures [3]. Being essential components of cellular activities, SNARE proteins are involved in the progression of various tumors [4,5]. Vesicle-associated membrane protein 8 (VAMP8) was first identified as an endosomal SNARE that participates in diverse biological functions including endosomal fusion, exocytosis of glucose transporter type 4 and insulin, regulation of exocytosis in secretory cells, sequential granule-to-granule fusion, and autophagy. Additionally, autophagy has been previously reported to partially participate in tumorigenesis and progression. Autophagy may exert distinct effects on different tumor cell types and during different stages of tumor progression, even within the same tumor.
Although studies on autophagy in BC patients have led to encouraging results, the potential role of autophagy regulation in BC progression remains unclear. Until now, the majority of studies have examined how anticancer strategies modulate autophagy in patients with BC [6]. In addition, the level of VAMP8 has been shown to regulate malignant transformation [7]. Therefore, whether VAMP8, a main factor in autophagy regulation, is involved in BC progression urgently needs to be investigated. In this study, we statistically evaluated the correlation of VAMP8 expression and pathological features with clinical prognosis in BC patients to determine its use as a prognostic factor in BC patients.

Patients

Between January 2008 and December 2009, all BC patients treated with modified radical mastectomy at the Division of Breast Surgery affiliated with China Medical University were considered for this retrospective study, excluding patients who were lost to follow-up or had died. Patients who had undergone other surgical procedures were also excluded. A total of 112 eligible patients were finally included in the study. In addition, breast tissue samples from 30 non-cancer patients were obtained as controls. All patients were female, with a median age of 52 years (range, 29-82 years), without prior chemotherapy, radiation therapy, or other related treatment. None of the patients had tumors in any other organ. Follow-up occurred for at least 8 years or until patient death. This study was approved by the First Affiliated Hospital Ethics Committee of China Medical University (IRB approval number: AF-SOP-07-1.1-01), following the current regulations of the Chinese government as well as the Declaration of Helsinki. Informed consent was provided by the patients.

Immunohistochemistry

Rabbit anti-human VAMP8 polyclonal antibody (1:300) was purchased from Abcam (Cambridge, UK), and the Ultrasensitive™ S-P kit (streptavidin-biotin complex) was purchased from Zhongshan Jinqiao Biotechnology Co. (Beijing, China). Dewaxing was performed using conventional methods, and ethylenediaminetetraacetic acid was used to carry out antigen retrieval. All procedures were carried out according to the specifications of the kit. Each section was randomly examined in five high-magnification fields (×200) to calculate a VAMP8 expression score, which took into account both staining intensity and the percentage of positive cells. The staining intensity was given one of the following four scores: no color was scored as 0, pale brown as 1, claybank as 2, and deep brown as 3. The percentage of positive cells was also scored, as follows: <5% positive cells was scored as 0, 5%-25% as 1, 26%-50% as 2, 51%-75% as 3, and >75% as 4. The total expression score was calculated by multiplying the staining intensity and percentage of positive cells scores. Cases without VAMP8 expression (-) had a score of 0 or 1, while cases with VAMP8 expression (+) had a score of ≥2 [7]. Based on the scores, patients were classified into the following two groups: low VAMP8 expression (<4) and high VAMP8 expression (>4). Each slide was independently examined and scored by two pathologists in a blinded fashion.
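The scoring rule above is mechanical enough to express in code. The following Python sketch is illustrative only (the function names are ours, and the handling of a total score exactly equal to 4 follows one possible reading of the low/high cutoffs quoted above):

```python
def vamp8_score(intensity, positive_fraction):
    """Total IHC score = staining intensity score (0-3) multiplied by
    positive-cell percentage score (0-4), as described in the text."""
    if not 0 <= intensity <= 3:
        raise ValueError("intensity score must be 0-3")
    if positive_fraction < 0.05:
        pct_score = 0
    elif positive_fraction <= 0.25:
        pct_score = 1
    elif positive_fraction <= 0.50:
        pct_score = 2
    elif positive_fraction <= 0.75:
        pct_score = 3
    else:
        pct_score = 4
    return intensity * pct_score

def classify(score):
    """Expression status and low/high group; a score of exactly 4 is placed
    in the low group here, since the quoted cutoffs (<4 vs. >4) leave it
    unassigned."""
    status = "positive" if score >= 2 else "negative"
    group = "high" if score > 4 else "low"
    return status, group

print(vamp8_score(2, 0.60), classify(vamp8_score(2, 0.60)))  # 6, positive/high
```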
Statistical analysis

Data analysis and graph plotting were performed with SPSS version 17.0 (SPSS Inc., Chicago, USA). Categorical variables were expressed as absolute and relative frequencies and compared by the chi-square test, and descriptive variables were compared between groups using nonparametric tests. A logistic regression model was used to estimate recurrence in multivariate analyses. Survival analyses were performed using the log-rank and Cox regression methods. Two-sided tests were used in all analyses. The significance level was set at p<0.05.
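The survival part of this workflow can be reproduced with open-source tools. The sketch below uses hypothetical per-patient data (the paper itself used SPSS 17.0) and shows Kaplan-Meier estimation with a log-rank comparison between VAMP8 groups, followed by a multivariate Cox proportional hazards fit, using the Python lifelines library:

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient table; column names are illustrative only.
df = pd.DataFrame({
    "months": [12, 40, 55, 23, 60, 35, 48, 18],
    "event": [1, 0, 0, 1, 0, 1, 0, 1],        # 1 = recurrence/death observed
    "vamp8_high": [1, 0, 1, 1, 0, 1, 0, 0],
    "ln_metastasis": [1, 0, 0, 1, 1, 0, 0, 1],
})

high, low = df[df.vamp8_high == 1], df[df.vamp8_high == 0]

# Kaplan-Meier curves by VAMP8 group, compared with the log-rank test.
KaplanMeierFitter().fit(high["months"], high["event"], label="VAMP8 high")
KaplanMeierFitter().fit(low["months"], low["event"], label="VAMP8 low")
result = logrank_test(high["months"], low["months"],
                      event_observed_A=high["event"],
                      event_observed_B=low["event"])
print(f"log-rank p = {result.p_value:.3f}")

# Multivariate Cox proportional hazards model (covariates: remaining columns).
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()
```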
Expression of VAMP8 in breast cancer tissues

Immunohistochemical staining was used to observe the expression levels of VAMP8 in BC tissues and normal breast tissues, as well as its subcellular location. As shown in Figure 1, VAMP8 was mainly located at the cell membrane, but not in the nucleus or cytoplasm. In addition, a statistically significant difference in the expression of VAMP8 was found between BC tissues and normal tissues; 99 (88.4%) BC tissues had VAMP8 expression, while only 14 (46.7%) normal tissues showed any VAMP8 expression (p<0.001) (Table 1).

Association between the expression of VAMP8 and clinical parameters in patients with breast cancer

The associations of clinical parameters with VAMP8 expression are shown in Table 2. The expression of VAMP8 was significantly associated with tumor size (p=0.007), lymph node metastasis (p=0.024), and recurrence (p=0.001); however, there was no significant association between VAMP8 expression and age, estrogen receptor expression, progesterone receptor expression, pathological type, or histological classification.

Risk factors for recurrence

The associations of clinical parameters with BC recurrence are summarized in Table 3. Tumor size and the presence of lymph node metastasis or level II lymph node metastasis were significantly associated with BC recurrence (p<0.001, p<0.001, and p<0.001, respectively). Subsequently, to better understand the role of VAMP8 expression in tumor recurrence, we performed logistic regression analysis on all parameters that were found to be correlated with tumor recurrence. As shown in Table 4, with regard to tumor recurrence, tumor size (≤2 cm vs. >2 cm), lymph node metastasis (negative vs. positive), and VAMP8 protein expression (low vs. high) were identified as statistically significant independent factors (p=0.023, p=0.001, and p=0.036, respectively).

Prognostic value of VAMP8 for survival

To further explore the role of VAMP8 in BC patient survival, we performed univariate Cox regression analysis after Kaplan-Meier analysis. As shown in Table 5, there were significant differences in recurrence-free survival (RFS) and overall survival (OS) between patients with high and low expression of VAMP8. Moreover, BC patients were stratified into groups based on VAMP8 expression, and differences in prognosis between the groups were examined. Higher VAMP8 expression indicated earlier tumor recurrence (p<0.001) and lower OS (p<0.001) (Figure 2). Large tumor size (>5 cm) and the presence of lymph node metastasis or level II lymph node metastasis were also associated with lower cumulative RFS and OS, consistent with the results of previous studies (Table 5). In the Kaplan-Meier analysis, four clinical characteristics were considered to be potential risk factors for BC patient survival, namely, VAMP8 expression, tumor size, lymph node metastasis, and level II lymph node metastasis. Hence, we subjected these parameters to multivariate Cox proportional hazards regression analysis. As summarized in Table 6, high VAMP8 expression and the presence of lymph node metastasis were significantly associated with lower OS and RFS and were thus independent adverse prognostic factors in patients with BC.

DISCUSSION

BC is the second-highest cause of cancer-related mortality [8], and more than 90% of deaths in BC patients are caused by metastasis or recurrence [9-11]. One of the mechanisms by which BC development could be controlled is autophagy, which acts both as a tumor-suppressive process in early stages of progression and as a protumorigenic process critical for tumor maintenance and therapeutic resistance during later stages [12,13]. Despite the substantial advancement in our understanding of the role of autophagy in BC [6], the process remains incompletely understood and warrants further clarification. This study revealed the utility of VAMP8 expression in predicting recurrence and progression in BC patients after modified radical mastectomy, suggesting that a better understanding of autophagic mechanisms may help us establish better systems for predicting prognosis in patients with BC. Autophagy has been shown to impact recurrence and survival in multiple ways. Chemotherapy resistance is a serious problem that puzzles researchers and threatens patient survival. Autophagy can enhance chemotherapy resistance through various pathways such as heat shock factor 1/autophagy-related protein 7 [14,15] and reactive oxygen species/extracellular signal-regulated kinase (ROS/ERK) [16,17], which reduces the effect of chemotherapy and increases the recurrence rate, thereby affecting OS. Autophagy can also reduce anoikis [18], a form of cell death that results from the loss of cell contact with the extracellular matrix or neighboring cells. Cancer cells that acquire resistance to anoikis are more likely to survive after detachment from the primary tumor and disseminate throughout the body [18]. Accordingly, autophagy has been shown to increase the metastatic ability of tumor cells, thereby affecting disease prognosis [12]. One study reported that autophagy can protect tumor cells from apoptosis or necrosis by providing the cells with nutrients under conditions of cellular stress [19]. This interdependence of autophagy and apoptosis also affects prognosis [20].
Therefore, we hypothesized that VAMP8, as a member of the SNARE family, plays an important role by regulating autophagy and apoptosis in BC and is thus involved in progression. It is well known that the fusion of autophagosomes with lysosomes is key for the degradation of autophagosomes and their contents during autophagy [21]. However, current studies examining autophagy are mostly aimed at better understanding early regulatory events, and few studies have focused on the late phases of autophagy. Multiple members of the SNARE family, including VAMP7, VAMP8, and vesicle transport through interaction with t-SNAREs homolog 1B, can affect the fusion of autophagosomes with lysosomes [22]. VAMP8 has been reported to have an indispensable role in many cellular processes [23-25]. Chen et al. [7] found that VAMP8 could activate autophagy and was responsible for drug resistance via autophagy, and that VAMP8 depletion led to a marked increase in the number of cells in the G0/G1 phase. It is well known that ERK2 promotes the migration of tumor cells by inhibiting the expression of ras-related protein Rab-17 (RAB17) and liprin-β2, and that the tumor suppressor gene RAB17 plays a role in antitumor invasion by interacting with VAMP8 [26,27]. VAMP8 is overexpressed in BC, but its expression is higher in ductal carcinoma in situ than in invasive ductal carcinoma [27]. Interestingly, we found that higher VAMP8 expression was associated with a relatively higher recurrence risk and lower RFS and OS. Thus, VAMP8 may play a dual role in BC, but its mechanism of action remains unclear. Here, we examined VAMP8 expression using immunohistochemistry. High expression of VAMP8 was markedly correlated with tumor size, lymph node metastasis, and recurrence, which has not been previously reported. It has been reported that the expression of VAMP8 is significantly associated with increased OS and a reduced risk of disease progression and that expression of VAMP8 is an independent factor for favorable pathology and survival outcomes [27]. However, in this study, we obtained conflicting results, and a previous study also found that VAMP8 expression was correlated with unfavorable prognostic indicators, reduced OS and RFS, and high rates of tumor recurrence in glioblastoma multiforme [7]. Notably, biases involving selection criteria, tissue preservation, determination of the cutoff value for VAMP8 expression, racial differences, and regional differences, among others, may lead to errors in analyses; however, our results also indicate that VAMP8 might play a dual role in BC and thus warrants in-depth study.
2019-01-22T22:25:22.116Z
2018-12-01T00:00:00.000
{ "year": 2018, "sha1": "aabb67c62c81774c7a72dc918c69480d185ca0af", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.4048/jbc.2018.21.e57", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "aabb67c62c81774c7a72dc918c69480d185ca0af", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119327852
pes2o/s2orc
v3-fos-license
Correlations and Clustering in Dilute Matter

Nuclear systems are treated within a quantum statistical approach. Correlations and cluster formation are relevant for the properties of warm dense matter, but the description is challenging and different approximations are discussed. The equation of state, the composition, Bose condensation of bound fermions, and the disappearance of bound states at increasing density because of Pauli blocking are of relevance for different applications in astrophysics, heavy ion collisions, and nuclear structure.

At increasing density, a transition to a condensed phase may occur. Different effects are responsible for the dissolution of bound states: In Coulomb systems (electrons, ions, and atoms in a plasma; electrons, holes, and excitons in excited semiconductors), screening of the long-range Coulomb interaction leads to the Mott transition where an atomic insulator goes over to a metallic conductor; see Refs. [1,2]. Another effect which leads to the destruction of bound states is the Pauli principle. If the constituents of the bound states are fermions, the phase space which is available to form a bound state is reduced at increasing density (Pauli blocking) so that the formation of bound states is suppressed; see Fig. 1. This phenomenon will be discussed in detail below. It is responsible for the transition from a gas of nucleons and nuclei to nuclear matter (nuclear liquid), as it appears at saturation density n_sat ≈ 0.16 fm^{-3}. Also on the level of the quark substructure, at increasing density a deconfinement transition from the hadronic phase to a quark-gluon plasma is expected, where the hadrons are dissolved.

Starting from a microscopic description of matter, for instance a Hamiltonian with an effective N-N interaction, a systematic treatment is given by the quantum statistical approach where correlation functions are evaluated for the equilibrium distribution. For infinite matter, we have the temperature T and the chemical potentials \mu_c of the different components c. These thermodynamic variables, which define the often used grand canonical ensemble, are related to the average of the internal energy U and the average particle numbers N_c = \Omega n_c, where \Omega denotes a normalization volume (volume of the system) and n_c the density of species c. The relations between these variables, such as N_c/\Omega = n_c(T, \mu_c), are denoted as equations of state and will be considered in detail in this work. Different equations of state are possible. In particular, thermodynamic potentials such as the free energy F as a function of T, n_c have the property that all other thermodynamic relations can be obtained by taking derivatives with respect to the corresponding thermodynamic quantities. In the case considered here, i.e. given \mu_c(T, n_c), the free energy is found by integration as

F(T, N, \Omega) = \Omega \int_0^n \mu(T, n')\, dn'

(one component). In addition to the thermodynamic properties, it also describes phase transitions, which occur when the stability condition \partial\mu/\partial n|_T \geq 0 is violated; see, e.g., Ref. [3] for the case of different components. A survey of the phase diagram of symmetric nuclear matter is given in Fig. 1.

[Figure 1: Phase diagram of symmetric nuclear matter [4]. Baryon density n_B = \rho with saturation density n_sat = \rho_0. The Mott line indicates the region where the formation of bound states is suppressed because of Pauli blocking.]
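As a numerical illustration of the free-energy integration above, the short Python sketch below evaluates F/\Omega = \int_0^n \mu(T, n') dn' by quadrature for a toy one-component isotherm \mu(T, n); the functional form of \mu is purely illustrative and not a realistic nuclear EoS.

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal rule (avoids NumPy-version differences)."""
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

def free_energy_density(mu_of_n, n, points=2000):
    """F/Omega = integral_0^n mu(T, n') dn' for a one-component isotherm."""
    ns = np.linspace(1e-12, n, points)   # avoid the log singularity at n' = 0
    return trapezoid(mu_of_n(ns), ns)

T = 10.0                                 # MeV
def mu_toy(n):
    """Toy isotherm: ideal-gas-like term plus a mean-field-like correction."""
    return T * np.log(n / 0.01) - 300.0 * n + 1000.0 * n**2

print(f"F/Omega = {free_energy_density(mu_toy, 0.05):.4f} MeV fm^-3")
```

Once F is available on a grid of (T, n), all other thermodynamic quantities (pressure, entropy, stability against phase separation) follow by differentiation, which is the consistency property emphasized above.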
B. Quantum Statistical Calculation of Cluster Abundances in Asymmetric Hot Dense Matter

Let us start with a brief historical review [5]. The thermodynamic properties of hot nuclear matter are of interest in connection with the theory of heavy-ion collisions as well as with astrophysical and cosmological problems. Of course, the behavior of dense matter under the conditions of star evolution, the expanding universe, or deep inelastic relativistic ion collisions should be described by a kinetic theory which gives the detailed nuclear processes during the time evolution of hot dense matter in non-equilibrium (see, for instance, Ref. [6]). However, it is reasonable to compare the result of these non-equilibrium processes with results for thermal equilibrium, in the sense of an estimate of the most probable state the system is likely to attain [7]. Although we are fully aware of the problem of using equilibrium results, we suggest that a correctly formulated theory of thermal equilibrium leads to facts, such as the distribution of cluster abundances or possible phase instabilities, which are also of interest for the non-equilibrium behavior of hot dense matter. For instance, a thermal model [8,9] and the concept of a freeze-out baryon density n_B and a freeze-out temperature T [10] were successfully employed in the theory of heavy ion collisions to determine the production of composite fragments.

In the present work, nuclear matter in thermodynamic equilibrium is investigated, confined in the volume \Omega at temperature T, and consisting of N_n neutrons (total neutron density n_n^tot = N_n/\Omega) and N_p protons (total proton density n_p^tot = N_p/\Omega). In the thermodynamic limit, the state is given by the parameter set {T, n_n^tot, n_p^tot}; the dependence on \Omega is trivial. The subsaturation region will be considered, where the baryon density n_B = n_n^tot + n_p^tot \leq n_sat (with the saturation density n_sat \approx 0.16 fm^{-3}), the temperature T \leq 20 MeV, and the proton fraction Y_p = n_p^tot/n_B lies between 0 and 1. This region of warm dense matter is of interest not only for nuclear structure calculations and heavy ion collisions explored in laboratory experiments [11,12], but also in astrophysical applications, see Ref. [13]. For instance, core-collapse supernovae at the post-bounce stage evolve in this region of the phase space [14,15], see Fig. 2, and different processes such as neutrino emission and absorption, which strongly depend on the composition of warm dense matter, influence the mechanism of core-collapse supernovae.

[Figure 2: For core-collapse supernovae the region on the phase diagram is slightly reduced. From Ref. [14].]

Let us denote by an ideal nuclide gas an approximation where nuclear matter is considered as an ideal mixture of free particles and clusters which move relatively freely, except for occasional nuclear reactions. At equilibrium, the abundances of clusters are determined by the temperature, the chemical potentials of the nucleons, and the internal energies of the clusters according to the law of mass action [8-10]. No phase transition is obtained within this simple approximation, and at given temperature the fraction of nucleons bound in clusters increases with increasing density. Recently, this simple chemical picture, also called nuclear statistical equilibrium (NSE) [16], has been applied to the nuclear matter EoS in the low-density limit. Fragmentation as observed in HIC has been treated within a microcanonical approach which takes the interaction into account in the form of a restriction of the available space because of non-overlapping clusters; for references see [17].
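A minimal numerical sketch of such an ideal nuclide gas (NSE) follows, assuming Maxwell-Boltzmann statistics in the nondegenerate low-density limit and neglecting all in-medium corrections, excited states, and clusters heavier than the alpha particle; the tabulated degeneracies and binding energies for n, p, d, t, 3He, and alpha are standard values, and the chemical potentials are obtained from the law of mass action by root finding.

```python
import numpy as np
from scipy.optimize import fsolve

HBARC, MN = 197.327, 939.0          # hbar*c (MeV fm), nucleon mass (MeV)
# (A, Z, degeneracy g, binding energy B in MeV): n, p, d, t, 3He, alpha
SPECIES = [(1, 0, 2, 0.0), (1, 1, 2, 0.0), (2, 1, 3, 2.225),
           (3, 1, 2, 8.482), (3, 2, 2, 7.718), (4, 2, 1, 28.296)]

def cluster_density(A, Z, g, B, T, mu_n, mu_p):
    """Law of mass action in the nondegenerate (Maxwell-Boltzmann) limit."""
    prefac = g * (A * MN * T / (2.0 * np.pi * HBARC**2)) ** 1.5   # fm^-3
    return prefac * np.exp((Z * mu_p + (A - Z) * mu_n + B) / T)

def solve_nse(n_B, Y_p, T):
    """Chemical potentials (MeV, without rest mass) reproducing n_n, n_p."""
    def residual(mu):
        dens = [(A, Z, cluster_density(A, Z, g, B, T, mu[0], mu[1]))
                for (A, Z, g, B) in SPECIES]
        n_n = sum((A - Z) * d for (A, Z, d) in dens)
        n_p = sum(Z * d for (A, Z, d) in dens)
        return [n_n - (1.0 - Y_p) * n_B, n_p - Y_p * n_B]
    return fsolve(residual, x0=[-25.0, -25.0])

T, n_B, Y_p = 5.0, 1e-4, 0.5        # MeV, fm^-3, dimensionless
mu_n, mu_p = solve_nse(n_B, Y_p, T)
for (A, Z, g, B) in SPECIES:
    n_AZ = cluster_density(A, Z, g, B, T, mu_n, mu_p)
    print(f"A={A} Z={Z}: n = {n_AZ:.3e} fm^-3")
```

At the chosen low density the composition is dominated by free nucleons with small cluster admixtures; raising the density or lowering the temperature shifts the balance toward clusters, which is exactly the regime where the in-medium corrections discussed next become indispensable.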
A simple chemical equilibrium of free nuclei is not applicable up to saturation density because medium modifications by self-energy shifts and Pauli blocking become relevant. Alternatively, standard versions [18,19] of the nuclear matter equation of state (EoS), considering single nucleons in mean-field (Skyrme or RMF) approximation as well as a representative heavy nucleus and alpha particles, are available for astrophysical simulations such as supernova collapses. They have been improved, see Refs. [16,20-39], elaborating concepts such as the heuristic excluded-volume approach or in-medium nuclear cluster energies within the extended Thomas-Fermi approach. These concepts may be applied to heavier clusters but are not satisfactory for describing light clusters, which require a more fundamental quantum statistical approach. A generalization of the RMF approach (gRMF) which includes the light nuclei (A \leq 4) as additional degrees of freedom, where the quasiparticle properties are derived from the quantum statistical approach described below, has been proposed in [23]; see also [40].

A rigorous quantum statistical (QS) approach to the thermodynamic properties of hot nuclear matter is formulated within the framework of thermodynamic Green functions [41-43]. The total numbers of neutrons N_n and protons N_p are introduced as conserved quantities; weak interaction processes leading to \beta equilibrium are not considered. Starting from the grand-canonical ensemble defined by the temperature T and the chemical potentials \mu_n, \mu_p of neutrons and protons, respectively, the chemical potentials \mu_n, \mu_p are fixed by the relations

N_\tau^{tot}/\Omega = n_\tau^{tot}(T, \mu_n, \mu_p), \quad \tau = n, p, \qquad (1)

which are equations of state (EoS) that relate the set of thermodynamic quantities {T, \mu_n, \mu_p} to {T, n_n^tot, n_p^tot}. The QS approach considers correlation functions and their Fourier transforms, such as the spectral function S_\tau(1, \omega; T, \mu_n, \mu_p). The single-nucleon quantum state |1\rangle can be chosen as 1 = {p_1, \sigma_1, \tau_1}, which denotes wave number, spin, and isospin, respectively. A rigorous expression for the nuclear matter EoS is found provided that the spectral function is known,

n_\tau^{tot}(T, \mu_n, \mu_p) = \frac{1}{\Omega} \sum_{p_1, \sigma_1} \int \frac{d\omega}{2\pi} f_\tau(\omega)\, S_\tau(1, \omega; T, \mu_n, \mu_p), \qquad (2)

with the Fermi distribution function f_\tau(\omega) = [\exp((\omega - \mu_\tau)/T) + 1]^{-1} (\Omega is the system volume, \tau = {n, p}; we take k_B = 1). The spectral function S_\tau(1, \omega; T, \mu_n, \mu_p) is related to the self-energy \Sigma(1, z), for which a systematic approach is possible using diagram techniques, see [44,45]:

S_\tau(1, \omega) = \frac{2\, \mathrm{Im}\, \Sigma(1, \omega - i0)}{[\omega - E(1) - \mathrm{Re}\, \Sigma(1, \omega)]^2 + [\mathrm{Im}\, \Sigma(1, \omega - i0)]^2}, \qquad (3)

with E(1) = \hbar^2 p_1^2/2m_1. The EoS (2) relates the total nucleon numbers N_\tau^{tot} or the particle densities n_\tau^{tot} to the chemical potentials \mu_\tau of neutrons/protons, so that one can switch from the densities to the chemical potentials characterizing thermodynamic equilibrium of warm dense matter. If this EoS is known in some approximation, all other thermodynamic quantities are obtained consistently after calculating a thermodynamic potential as shown in Sec. I A. In the following sections, different approximations for \Sigma(1, \omega) are discussed.

C. Cluster decomposition of the equation of state and quasiparticle concept

Within a Hamiltonian approach to the many-particle system, the self-energy \Sigma(1, z) may be represented by a series of diagrams which are constructed from the free nucleon propagator G_0(1, z), with G_0^{-1}(1, z) = z - E(1), and the nucleon-nucleon interaction potential V(12, 1'2'). In order to obtain approximations for the equation of state (2) of nuclear matter we can proceed in a number of different ways [41,42].
The spectral function S_\tau(1, \omega; T, \mu_n, \mu_p) and the corresponding two-point correlation functions (density matrix) are quantities well defined in the grand canonical ensemble characterized by {T, \mu_n, \mu_p}. The self-energy \Sigma(1, z; T, \mu_n, \mu_p) depends, in addition to the single-nucleon quantum state |1\rangle, on the complex frequency z. It is calculated at the Matsubara frequencies; the analytical continuation to the z plane must be performed. Within a perturbative approach it can be represented by Feynman diagrams. A cluster decomposition of the self-energy with respect to different few-body channels (c) is possible [41-43,46], characterized, for instance, by the nucleon number A, as well as spin and isospin variables. The cluster contributions to the self-energy are derived from an in-medium A-particle Schrödinger equation which describes the propagation of the A-particle cluster (the Fourier transform of the A-particle correlation function gives the corresponding spectral function). The Green function approach provides us with consistent approximations for these few-nucleon propagators. In particular, we introduce the quasiparticle concept to describe the propagation of few-nucleon clusters (including A = 1) in warm dense matter if the A-particle spectral function S_A(\omega) shows a peak structure at the energy E_{A,\nu_c}(P; T, \mu_n, \mu_p). The dispersion relation E^0_{A,\nu_c}(P) of the free nucleon cluster is modified at finite densities. The Green function approach describes the propagation of a single nucleon by a Dyson equation governed by the self-energy, E_\tau(p; T, \mu_n, \mu_p) = E^0_\tau(p) + \Delta E^{SE}_\tau(p; T, \mu_n, \mu_p), as well as the few-particle states which are obtained from a Bethe-Salpeter equation containing the effective interaction kernel. Both quantities, the effective interaction kernel and the single-particle self-energy, should be approximated consistently. Approximations which take cluster formation into account have been worked out [43,47], where within the cluster mean-field (CMF) approximation correlations in the surrounding medium are taken into account. For the A-nucleon cluster, the in-medium Schrödinger equation, derived from the Green function approach after the effective occupation numbers n(i; T, \mu_n, \mu_p) are introduced and exchange terms are neglected, reads

\left[\sum_{i=1}^{A} E_{\tau_i}(p_i; T, \mu_n, \mu_p) - E_{A,\nu}(P; T, \mu_n, \mu_p)\right] \psi_{A\nu P}(1 \ldots A) + \sum_{1' \ldots A'} \sum_{i<j} [1 - n(i) - n(j)]\, V(ij, i'j') \prod_{k \neq i,j} \delta_{kk'}\, \psi_{A\nu P}(1' \ldots A') = 0, \qquad (4)

and f_{A,Z}(\omega; T, \mu_n, \mu_p) = [\exp((\omega - Z\mu_p - (A-Z)\mu_n)/T) - (-1)^A]^{-1} is the Bose or Fermi distribution function for even or odd A, respectively, which depends on {T, \mu_n, \mu_p}. The in-medium Schrödinger equation (4) contains the effects of the medium in the single-nucleon quasiparticle shift (nonrelativistic case), as well as in the Pauli blocking terms given by the occupation numbers n(1; T, \mu_n, \mu_p) in the phase space of single-nucleon states |1\rangle \equiv |p_1, \sigma_1, \tau_1\rangle. Thus, two effects have to be considered, the quasiparticle energy shift and the Pauli blocking. As an example, consider the two-nucleon (A = 2) in-medium Schrödinger equation (4),

\left[E_{\tau_1}(p_1; T, \mu_n, \mu_p) + E_{\tau_2}(p_2; T, \mu_n, \mu_p) - E_{2,\nu}(P; T, \mu_n, \mu_p)\right] \psi_{2\nu P}(12) + \sum_{1'2'} \left[1 - f_{\tau_1}(E_{\tau_1}(p_1)) - f_{\tau_2}(E_{\tau_2}(p_2))\right] V(12, 1'2')\, \psi_{2\nu P}(1'2') = 0, \qquad (8)

where the Fermi distribution function is taken for the Pauli blocking term. As shown in Fig. 3, the phase space available to form a bound state is reduced owing to the Pauli principle, so that the interaction is effectively weakened and the binding energy is reduced. It should be mentioned that the same equation (8) describes Bose quantum condensation when E_{2,\nu}(P; T, \mu_n, \mu_p) = \mu_n + \mu_p, as well as the cross-over from BEC to BCS [48-52].
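To illustrate the size of the Pauli blocking factor entering Eq. (8), the short Python sketch below evaluates 1 - f(p_1) - f(p_2) for a pair at rest in the medium (p_1 = -p_2 = p), using free single-nucleon kinetic energies and an illustrative choice of temperature and chemical potential; quasiparticle self-energy shifts are neglected.

```python
import numpy as np

MN, HBARC = 939.0, 197.327      # nucleon mass (MeV), hbar*c (MeV fm)
T, mu = 5.0, -10.0              # temperature and chemical potential (MeV)

def fermi(p):
    """Fermi occupation of a free single-nucleon state with momentum p
    (fm^-1); self-energy shifts are neglected in this sketch."""
    E = (HBARC * p) ** 2 / (2.0 * MN)      # kinetic energy in MeV
    return 1.0 / (np.exp((E - mu) / T) + 1.0)

# Pauli blocking factor 1 - f(p1) - f(p2) for a pair at rest, p1 = -p2 = p.
for p in np.linspace(0.0, 2.0, 9):
    print(f"p = {p:.2f} fm^-1   1 - f - f = {1.0 - 2.0 * fermi(p):+.3f}")
```

The factor is smallest at low relative momenta, where the bound-state wave function has its largest weight; increasing the density (i.e., raising the chemical potential) drives it further down and eventually dissolves the bound state, which is the Mott effect discussed below.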
Using the cluster decomposition of the self-energy, which takes into account, in particular, cluster formation, one obtains

n_n^tot(T, µ_n, µ_p) = (1/Ω) Σ_{A,ν,P} (A − Z) f_{A,Z}[E_{A,ν}(P; T, µ_n, µ_p)],
n_p^tot(T, µ_n, µ_p) = (1/Ω) Σ_{A,ν,P} Z f_{A,Z}[E_{A,ν}(P; T, µ_n, µ_p)],    (9)

where P denotes the c.o.m. momentum of the cluster (or, for A = 1, the momentum of the nucleon). The internal quantum state ν contains the proton number Z and neutron number N = A − Z of the cluster. The integral over ω is performed within the quasiparticle approach; the P-dependent quasiparticle energies E_{A,ν}(P; T, µ_n, µ_p) depend on the medium characterized by {T, µ_n, µ_p}. These in-medium modifications will be detailed in the following Sec. I D.

D. Different approximations

As is well known [53], the self-energy occurring in Eq. (3) may be represented by an infinite series of irreducible diagrams. We outline the physical ideas behind the construction of the approximations for the self-energy used in different approaches. Numerical estimates of these contributions are given in Sec. II. In nuclear systems, we are mainly concerned with the strong interaction. Consequences of the N−N interaction are the formation of correlations and bound clusters, but also self-energy shifts and the Pauli exclusion principle. A free nucleon feels a mean Hartree field due to the surrounding nuclear matter consisting of free nucleons and clusters, which is modified by the Pauli exclusion principle ("Fermi hole"), so that the Hartree-Fock single-particle energy shift

∆^HF(1) = Σ_2 [V(12, 12) − V(12, 21)] f_{1,τ2}(2)    (10)

is obtained. Similarly, a bound state (cluster) of A nucleons with total momentum P and internal quantum state ν (including the proton number Z) feels a self-energy shift ∆E^SE_{AνP} due to the surrounding matter. In addition, the binding energy is lowered by ∆E^Pauli_{AνP} as a consequence of the Pauli exclusion principle, because phase space is occupied by the surrounding correlated matter and is not available to form a bound state. The lowering of the binding energy leads to the destruction of bound states in dense matter.

Notice that the other forces (weak, Coulomb, gravitation) are also relevant for describing matter in the astrophysics of compact objects, not considered here. β equilibrium is not achieved in HIC because the time scales are short. Coulomb interaction contributions are of importance especially for heavy clusters because of their long-range character. In the region of thermodynamic instability of nuclear matter, the Coulomb interaction is essential for the formation of pasta structures. The QS treatment of the Coulomb interaction is well elaborated, see Ref. [2]. Coming back to the strong interaction, we consider Eq. (9) together with (4) as the basic result for the EoS. They are treated at different levels of sophistication, as will be explained from a very general point of view, see Tab. II. This allows us to compare different approximations presently used to describe properties of dense matter. The consistent treatment of different effects is clearly demonstrated.

(1) The simplest approximation is obtained if cluster formation and mean-field effects, i.e. any effects of interaction, are neglected (zero self-energy). We obtain the ideal Fermi gas consisting of protons, neutrons, (electrons, neutrinos, . . . ) well known from textbooks. In Eq. (9), we have only A = 1 and the free fermion energies E⁰_τ(p). This simple approximation can be improved in two directions:

(1, medium) On the one hand, by including mean-field effects when going to higher densities. We obtain a quasiparticle quantum liquid.
Expanding Σ(1, z) in a power series with respect to V(12, 1′2′), in lowest order we obtain the Hartree-Fock approximation (10). Investigations of hot nuclear matter within the frame of such a Hartree-Fock approximation have been performed, see, for instance, Refs. [54-56]. A region of thermodynamic instability was found.

(2) On the other hand, expanding Σ(1, z) with respect to the nucleon density, i.e. for n_c Λ_c³/2 ≲ 1 with Λ_c = (2πℏ²/m_c T)^{1/2} being the thermal wavelength, in the lowest order the Beth-Uhlenbeck formula is obtained for the second virial coefficient. For hot dense matter, this approximation has been discussed, e.g., in Refs. [10,57]. The Beth-Uhlenbeck formula takes into account the formation of two-particle clusters. However, no in-medium corrections for these clusters are obtained. To obtain the ideal nuclide gas approximation (NSE), all bound states should be taken into account on the same footing. This may be done using a cluster decomposition for Σ(1, z) based on the t-matrices of the isolated A-particle system [41,58]. This approximation is the basis of the ordinary thermal model [8-10, 59, 60] widely used in describing the occurrence of clusters in hot dense matter.

(2, medium) In the high-density region, the ideal nuclide gas approximation is no longer applicable because of density corrections due to the interaction of the clusters with the surrounding matter. A systematic approach to the in-medium corrections for the free-particle and bound-state energies as well as the wave functions can be given within the framework of many-body theory. A self-consistent ladder Hartree-Fock approximation [58] has been applied to the quantum statistical calculation of the abundances of deuterons, tritons, and alpha particles [61] and to the description of a nuclear matter phase transition [62,63]. As discussed in these papers, the in-medium corrections to the energy and wave functions of the clusters lead to an interesting effect: at high densities, bound states are destroyed because of the Pauli quenching. Beyond a certain density (the Mott density n_A^Mott(T)) the abundances of the corresponding clusters decrease, and in the high-density limit all bound states are dissolved, so that a degenerate Fermi liquid of quasiparticles remains.

(3) In Eq. (9), the summation refers to the mass number A, the internal quantum number ν, and the c.o.m. momentum P. The internal quantum number ν includes, in addition to the proton number Z, also excited bound states with medium-dependent energies E_{A,ν}(P; T, µ_n, µ_p) as well as continuum states. In the two-particle case, these continuum contributions are expressed by the scattering phase shifts as functions of the energy of relative motion. Only after including the continuum contributions is the correct expression for a virial expansion obtained. The corresponding relation is known as the Beth-Uhlenbeck equation.

(3, medium) To pass to higher densities, in addition to medium-modified bound-state quasiparticle energies also the medium-modified scattering phase shifts have to be calculated. For A = 2, the generalized Beth-Uhlenbeck formula [48] is obtained. As a peculiarity, when introducing the quasiparticle description one has to subtract contributions of the continuum scattering phase shifts to avoid double counting.

(4) This concept of including scattering-state contributions is generalized in the cluster Beth-Uhlenbeck approach, where arbitrary mass numbers A are considered [72].
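For orientation, the ordinary Beth-Uhlenbeck second virial coefficient mentioned under item (3) has, up to normalization conventions, the familiar structure (a sketch, not the paper's Eq. (30)):

\[ b_2(T) \;\propto\; \sum_{\nu\,(\mathrm{bound})} g_\nu\, e^{-E_\nu/T} \;+\; \frac{1}{\pi}\int_0^\infty dE\; e^{-E/T}\, \frac{d}{dE}\,\delta_2^{\rm tot}(E), \]

i.e., a sum over bound states plus a continuum integral over the energy derivative of the total scattering phase shift.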
Only estimates for the continuum correlations are known at present [3], and the in-medium modifications of the corresponding cluster-virial coefficients are relevant for the composition and the EoS.

(4, medium) A consistent description of the medium effects should also take into account correlations in the medium. Extending the Hartree-Fock approximation to arbitrary clusters, the CMF approximation has been derived [3,43]. With respect to the Pauli blocking (4), the phase-space occupation numbers n(1; T, µ_n, µ_p) are not given by the free-particle fermion distribution f_{τ1}(p_1) but by the effective occupation numbers (5), which also account for the phase-space occupation owing to correlations and to the formation of clusters. An estimate is given in Ref. [3]. For a more systematic approach, the self-consistent determination of in-medium correlations and cluster formation has to be treated, which remains an open problem. For instance, a unified description of α matter and nuclear matter is not known at present.

II. MODEL CALCULATIONS FOR IN-MEDIUM CORRECTIONS OF CLUSTER ENERGY VALUES

Model calculations are performed to describe correlations and cluster formation in nuclear systems. We have to go beyond single-particle descriptions, which have proven to be very successful not only in nuclear structure calculations, where the shell model has been worked out, but also for the thermodynamic properties of nuclear matter, where mean-field approaches are very popular. Nuclear systems at low density and low excitation energy are dominated by correlations and cluster formation, if the kinetic energy characterized by the temperature or the Fermi energy is small compared with the potential energy. Starting from the interaction potential V(12, 1′2′), which contains the long-range Coulomb interaction as well as the short-range nucleon-nucleon interaction, ab initio calculations of correlations, in particular of the structure of bound states (AνP), are rather involved. In addition, in contrast to the fundamental Coulomb interaction, which is known, the N−N interaction is introduced in an empirical way, fitted to measured properties of nuclear systems. Consequently, QS calculations for nuclear systems should incorporate measured data as much as possible, for instance the empirical binding energies of isolated nuclei [64]. We focus on the contributions of the N−N interaction to the structure of nuclear systems. The Coulomb interaction is treated using approaches known from plasma physics [1,2]; see Sec. II F. It has to be taken into account for nuclear structure and nuclear reactions. In particular, in the context of the EoS and the composition of nuclear matter, it becomes relevant for heavy nuclei and pasta-like structures.

A. Single-quasiparticle approximation

Within the QS approach, the influence of the medium is given by the self-energy Σ(1, z). It fixes the spectral function, Eq. (3), which then allows one to calculate the EoS (9). For the self-energy, a systematic approach is possible using diagram techniques, see [44,45]. The ideal Fermi gas is a trivial approximation of the EoS, Eq. (9), neglecting any self-energy contribution in Eq. (3), so that only A = 1 with the free dispersion relation E(1) = ℏ²p_1²/2m_1 remains. We discuss the approximation (1, medium) of Tab. II, considering single-nucleon quasiparticle states in dense nuclear matter. In lowest order with respect to the interaction, neglecting correlations in the surrounding matter, the Hartree-Fock (HF) result (10) is obtained.
The approximation Σ^HF(1, z) = ∆^HF(1) is real and does not depend on the frequency z. This is consistent with the Kramers-Kronig relation: any frequency dependence of Σ(1, z) would produce an imaginary contribution Im Σ(1, z). Because Im Σ^HF(1, z) = 0, the spectral function (3) reduces to a sharp δ function in ω, S(1, ω) = 2πδ[ω − E(1) − ∆^HF(1)]. In HF approximation, we thus have a sharp quasiparticle with the shifted energy E^qu,HF(1) = E(1) + ∆^HF(1). This modification of the single-particle energy E(1) is denoted as mean field. In higher orders with respect to the interaction, contributions are expected for Im Σ(1, z), describing collisions in the system. The δ-like spectral function obtained from the quasiparticle pole ω = E^qu(1), with the self-consistent solution of

E^qu(1) = E(1) + Re Σ(1, z)|_{z = E^qu(1)},    (12)

is broadened by Im Σ(1, z), which describes the finite lifetime of the quasiparticle. The Hartree-Fock approximation has been improved by taking into account two-particle correlations within the Brueckner approach, see [48,65]. From the self-energy the spectral function (3) is calculated. The position of the peak, i.e. the self-consistent solution of Eq. (12), gives the quasiparticle energy E_τ(p; T, µ_n, µ_p). However, the peak structure is not always clearly seen in the spectral function, for instance at low densities, where bound states appear.

Near the saturation density, phenomenological values for different properties such as the saturation density, binding energy, and compressibility are known, which are not correctly reproduced within the Brueckner theory using a simple form of the N−N interaction. Instead of microscopic calculations of the quasiparticle energies based on a semi-empirical N−N interaction, we can also directly parametrize the quasiparticle energies using the known properties of nuclear matter. A simple parametrization is given by Skyrme; see [66,67]. The Hartree-Fock shift (10) is estimated using, for instance, a zero-range effective interaction V(n_B)δ(r_1 − r_2), so that ∆E^SE_τ(1) is independent of momentum and temperature. A more advanced parametrization is the relativistic mean-field (RMF) theory, where the quasiparticle energies are parametrized by a scalar potential S(T, n_B, Y_p) and a vector potential V_τ(T, n_B, Y_p), adapted to empirical data for nuclear systems. In the limit p → 0, the quasiparticle dispersion relation leads to the effective mass approximation with the shift ∆E^SE_τ. Explicit expressions for S(T, n_B, Y_p) and V_τ(T, n_B, Y_p) in the form of Padé approximations, which are suitable for numerical applications, are found in [3,23,47]. As a mean-field approach, in the low-density limit a linear dependence S, V_τ ∝ n_B is assumed. This may be a good choice for the shift of the quasiparticle peak as solution of Eq. (12).

The EoS (9) obtained taking only A = 1 in quasiparticle approximation already gives a reasonable description near the saturation density. In addition, a region of thermodynamic instability ∂µ/∂n_B < 0 is obtained below a critical temperature (symmetric matter) T_crit^mf ≈ 13.72 MeV [23], indicating the occurrence of a phase transition. No cluster formation is described in the single-quasiparticle approximation. Therefore, the RMF approximation is not sufficient in the low-density, low-temperature region where bound states occur. The concept of parametrizing the single-nucleon quasiparticle energies is related to the density functional theory used in condensed matter theory. A problem is the exact treatment of correlations.
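A common RMF parametrization of the single-nucleon quasiparticle dispersion, consistent with the scalar and vector potentials introduced above, reads (conventions vary between implementations; this is a sketch, not necessarily the paper's exact Eq. (13)):

\[ E_\tau(p;T,n_B,Y_p) \;=\; \sqrt{\big[m_\tau c^2 - S(T,n_B,Y_p)\big]^2 + \hbar^2 p^2 c^2} \;+\; V_\tau(T,n_B,Y_p) \;-\; m_\tau c^2 , \]

with the effective mass m*_τ c² = m_τ c² − S(T, n_B, Y_p) appearing in the p → 0 expansion.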
Part of the interaction is already implemented in the parametrization of the energy density functional. If the correlations are considered separately, one has to avoid double counting. We come back to this issue below.

B. Cluster-quasiparticle approximation

To obtain the formation of clusters and correlations, the term Im Σ in the spectral function (3) has to be analyzed. A cluster decomposition of the self-energy gives the contribution of the A-nucleon correlations, in particular the bound states. Neglecting all medium effects in the few-nucleon wave equation (4), the ideal nuclide gas approximation (NSE) for the EoS (9) is obtained. Instead of solving the isolated A-nucleon Schrödinger equation with an appropriate N−N interaction to obtain the energy eigenvalues E^(0)_{A,ν}(P), we can directly use the empirical values for the masses of nuclei as given, e.g., by Ref. [64].

To describe medium effects consistently with the single-quasiparticle approximation, the few-nucleon wave equation (4) has to be solved. Within a perturbative approach, the cluster-quasiparticle shift has two contributions (A > 1): in addition to the single-quasiparticle energy shift ⟨ψ_{AνP}| Σ_i ∆E_{τi}(p_i) |ψ_{AνP}⟩, the Pauli blocking term −⟨ψ_{AνP}| Σ_{i≠j} n(i) V(ij, i′j′) |ψ_{AνP}⟩ must be considered to obtain the approximation (2, medium) of Tab. II. Within a simple estimate for the medium-modified cluster-quasiparticle energies, the cluster self-energy shift is easily calculated in the rigid-shift approximation, neglecting the effective mass corrections. To improve it, the effective mass approximation (14) can be applied using empirical values for m*_τ, see [3].

In the Pauli quenching term, the interaction potential V(12, 1′2′) may be approximately eliminated if the cluster wave function is represented by the antisymmetrized product of single-particle wave functions φ_n(i) [5,58]. In the case A = 2 this can be done rigorously. After separating the c.o.m. motion P, the relative motion (relative momentum p) of the free deuteron obeys a Schrödinger equation with the bound-state energy E_d^(0) [71]. In the low-density limit, where f_τ(p) = (n_τ^tot Λ³/2) exp(−ℏ²p²/2mT) with the thermal wavelength Λ = (2πℏ²/mT)^{1/2}, the Pauli shift can be given in closed form, with X_d² = ℏ²P²/8mT and ε_d = ℏ²k_d²/2m = 2.02 MeV; a sketch of its structure is given below. In addition to the linear dependence on the nucleon density n_B, a consequence of the perturbative approach, the strong variation of the Pauli blocking shift with P and T is remarkable, in contrast with the single-nucleon mean-field shifts. For a rigorous result, the empirical form factor |ψ^(0)_d(p)|² has to be used instead of a Gaussian form factor fitted to the rms radius. A more sophisticated evaluation of ∆E^Pauli_{d,P} is found in Refs. [3,70], where also Padé approximations for the dependence on {P, T, n_B, Y_p} are given.

For the evaluation of the Pauli quenching term for the other light elements A ≤ 4, i.e. t, h, α, assumptions for the interaction potential are necessary. Even for the product ansatz, the binding energy E^(0)_{AνP} is not given by the sum of the single-nucleon state energies E_n, so that shell-model calculations have to be used. Estimates based on the harmonic oscillator model are given in [5,58]. A more sophisticated calculation, in which the c.o.m. motion is eliminated, has been performed using a separable potential in Refs. [3,70]. Padé approximations for ∆E^Pauli_{A0P}(P, T, n_B, Y_p) are given there which are suited for the evaluation of the EoS (9) and the corresponding thermodynamic potentials.
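The closed-form low-density result alluded to above is not reproduced in the extracted text; its qualitative structure, suggested by the quantities X_d² and ε_d quoted there, can be sketched as

\[ \Delta E^{\rm Pauli}_{d}(P;T,n_B) \;\sim\; n_B\, c_d(T)\, e^{-X_d^2}, \qquad X_d^2 = \frac{\hbar^2 P^2}{8 m T}, \]

where c_d(T) is a temperature-dependent strength fixed by the Gaussian form factor; the precise prefactor and the full {P, T, n_B, Y_p} dependence are given in Refs. [3,70].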
A simpler parametrization of the quasiparticle shifts of the light elements has been considered in Ref. [23], where also various thermodynamic functions have been presented. The quasiparticle shifts ∆E_{A0P} of light clusters, depending on {P, T, n_n, n_p}, have been investigated recently [3,70,71]. In-medium bound-state energies are calculated for the light clusters (d, t, h, α), and fit formulae are presented there.

Clusters with 5 ≤ A ≤ 11 (light metals) are weakly bound (note that ⁸Be is unbound) and, like the deuteron, experience a strong influence of the medium; they are therefore very sensitive to medium effects. A strong depletion because of the Pauli blocking is expected. Estimates of ∆E^Pauli_{A0P} for 4 < A ≤ 16 are given in Refs. [5,43,58,72].

Different approaches are used to calculate the effect of medium modification for large nuclei. The heavy clusters A ≥ 12 are usually considered as droplets of a dense phase with n_B ≈ n_sat. As a semiempirical treatment of medium effects, in particular Pauli blocking, the excluded-volume model [25,73] may be introduced. As an alternative, local density approaches such as the Thomas-Fermi model can be applied to calculate the modification of the cluster in a dense medium [18,19,26,30,33,35,36]; these provide us with a microscopic treatment of large nuclei in a dense medium. We give here an approach [5,43,58] which is closely related to the treatment of light clusters. The total energy shift of a cluster (AνP) has a simple structure if, within a homogeneous Fermi gas model, the wave functions φ_α(i) are taken as momentum eigenfunctions. At not too high temperatures, a strong compensation between the self-energy and the Pauli quenching term results. Thus large clusters remain nearly unshifted. Applying the homogeneous Fermi gas model to a cluster with densities n_τ^A(r) for the protons and neutrons, respectively, we obtain the total shift as a functional of the cluster density profile. An analysis of experimental data was performed to fit a symmetrical density distribution function, presumably of Fermi type, n_B^A(r) ∝ [1 + exp((r − R)/b)]^{-1} (see, e.g., Ref. [74]); appropriate values for A > 16 are R = 1.05 A^{1/3} fm, b = 0.57 fm. Furthermore we take n_p^A(r) = n_B^A(r) Z/A.

C. Mott points and Mott momenta

Of special interest are the binding energies B^bind_{A,ν}(P; T, n_B, Y_p) = E^cont_{A,ν}(P) − E_{A,ν}(P; T, n_B, Y_p), with E^cont_{A,ν}(P) = N E_n(P/A) + Z E_p(P/A), which indicate the energy difference between the bound state and the continuum of free (scattering) states at the same total momentum P. This binding energy determines the yield of the different nuclei according to Eq. (9), where the summation over P is restricted to the region where bound states exist, i.e. B^bind_{A,ν}(P; T, n_B, Y_p) ≥ 0. We denote the density n^Mott_{A,ν}(T, Y_p), at which the binding energy of a cluster {A, ν} with c.o.m. momentum P = 0 vanishes, as the Mott density [41-43,46], see Fig. 4. For baryon densities n_B > n^Mott_{A,ν}(T, Y_p) we introduce the Mott momentum P^Mott_{A,ν}(T, n_B, Y_p), at which the bound state disappears, defined by B^bind_{A,ν}(P^Mott_{A,ν}; T, n_B, Y_p) = 0 (27). At n_B > n^Mott_{A,ν}(T, Y_p), the summation over the momentum to calculate the bound-state contribution to the composition (9) is restricted to the region |P| > |P^Mott_{A,ν}(T, n_B, Y_p)|. The condition (27) may be replaced by further restrictions in P-space if further decay modes are considered. Especially the decay into α quasiparticles is of interest.
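As a numerical illustration of the Mott condition (27), the following sketch finds the Mott momentum for a toy binding energy in which the Pauli blocking shift falls off with the Gaussian factor exp(−X²) of Sec. II B; the values of B₀ and the blocking strength are purely illustrative, not fitted numbers from the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Toy Mott condition: the Pauli blocking shift decreases with c.o.m.
# momentum P (modeled with exp(-X^2), X^2 = hbar^2 P^2 / 8mT), so a
# cluster that is unbound at P = 0 regains binding above P_Mott.
hbarc, m = 197.327, 939.0                     # MeV fm, nucleon mass in MeV

def B_bind(P, T, delta_pauli_0, B0):
    """Model binding energy (MeV) at c.o.m. momentum P (fm^-1)."""
    X2 = (hbarc * P)**2 / (8.0 * m * T)
    return B0 - delta_pauli_0 * np.exp(-X2)

T, B0, dP0 = 5.0, 2.2, 4.0                    # MeV; dP0 > B0: unbound at P = 0
P_mott = brentq(B_bind, 1e-6, 5.0, args=(T, dP0, B0))
print(f"P_Mott ~ {P_mott:.2f} fm^-1")
```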
This decay into α quasiparticles takes place, e.g., in the region 6 ≤ A ≤ 11, where also the stability of the clusters with respect to the decay into other fragments such as deuterons and tritons/³He must be checked in order to determine P^Mott_{Aν}. The Mott point, where the binding energy vanishes, is determined by the Pauli blocking term, E⁰_{A,ν} + ∆E^Pauli_c(P; T, n^Mott_{A,ν}, Y_p) = 0. The self-energies ∆E^SE_{A,ν} for the bound state and the continuum states compensate if the momentum dependence is neglected. Crossing the Mott point by increasing the baryon density, part of the correlations survives as continuum correlations, so that the properties change smoothly. Therefore, the inclusion of correlations in the continuum is of relevance.

D. Excited states, continuum correlations and virial expansions

The chemical equilibrium in Eq. (9) contains the sum over all components. This includes not only the A-nucleon clusters in the ground state but also the ν-summation over all excited cluster states. This can be replaced by an integral after introducing the density of states [75],

D_A(E) = (√π/12) exp[2(aE)^{1/2}] / (a^{1/4} E^{5/4}),    (28)

with a = A/15 MeV⁻¹ for the homogeneous Fermi gas model and arbitrary values of Z. Then the abundance of the clusters with mass number A is obtained by integrating this density of states, weighted with the distribution function, over the excitation energy, Eq. (29). In this formula, it was assumed that Coulomb corrections as well as self-energy and Pauli blocking corrections to the cluster energy do not strongly depend on the excitation state. A lower bound E^A_min + E_F/A (E_F: nuclear matter Fermi energy) was introduced to take into account that below this energy the density of states (28) is not applicable, and the discrete structure of the excitation spectrum of the clusters should be considered. This lower limit can be flexible, depending on the number of states which are explicitly taken into account. The upper bound E^A_max is introduced into (29) to take into account that excited clusters may become unstable with respect to the decay into smaller fragments.

The summation over all excited states ν is not restricted to the bound states but includes also scattering states. Only by taking into account the contribution of scattering states is the correct low-density limit of the EoS and related thermodynamic quantities obtained. Expanding the EoS in powers of n_B, the lowest order gives the result for ideal quantum gases; the next order is the second virial coefficient, Eq. (30), with Λ = (2πℏ²/mT)^{1/2} being the baryon thermal wavelength (the neutron and proton masses are approximated by m_τ ≈ m = 939.17 MeV/c²), and δ^tot_{2,T_I=0}(E) = Σ_{S,L,J} (2J + 1) δ_{^{2S+1}L_J}(E) the isospin-singlet (T_I = 0) scattering phase shifts with angular momentum L as functions of the energy E of relative motion. A similar expression can also be derived for the isospin-triplet channel (e.g. two neutrons), where, however, no bound state occurs. The relation (30) gives an exact relation for the second virial coefficient in the low-density limit where in-medium effects are absent. For data see [77], where detailed numbers are given. Note that the second virial coefficient is expressed in terms of measured data, the binding energies and scattering phase shifts δ_2(E).

These second virial coefficients cannot directly be used within a quasiparticle approach. Because part of the interaction is already taken into account when introducing the quasiparticle energy, one has to subtract this contribution from the second virial coefficient to avoid double counting, see [47,48].
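A minimal numerical sketch of the excited-state summation via the level density (28), assuming the Bethe form reconstructed above; the integration bounds and the ground-state degeneracy are illustrative placeholders for the T-, n_B-, and Y_p-dependent limits E^A_min and E^A_max discussed in the text.

```python
import numpy as np
from scipy.integrate import quad

# Sketch of the intrinsic partition sum over excited states of an A-nucleon
# cluster, replaced by an integral over the Bethe level density (28):
# z_int(T) ~ g0 + Int_{E_min}^{E_max} D_A(E) exp(-E/T) dE.
def bethe_D(E, A):
    a = A / 15.0                          # level-density parameter, MeV^-1
    return (np.sqrt(np.pi) / 12.0) * np.exp(2.0 * np.sqrt(a * E)) \
           / (a**0.25 * E**1.25)

def z_intrinsic(A, T, E_min=2.0, E_max=20.0, g0=1.0):
    """g0: ground-state degeneracy; E_min, E_max in MeV (illustrative)."""
    val, _ = quad(lambda E: bethe_D(E, A) * np.exp(-E / T), E_min, E_max)
    return g0 + val

for A in (12, 56):
    print(f"A = {A}: z_int(T = 5 MeV) ~ {z_intrinsic(A, 5.0):.2f}")
```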
Instead of Eq. (30), we obtain for the d channel the generalized expression (31). Comparing (31) with the ordinary Beth-Uhlenbeck formula (30), there are two differences: (i) after integration by parts, the derivative of the scattering phase shift is replaced by the phase shift itself, and (ii) according to the Levinson theorem, for each bound state the contribution −1 appears. The EoS (9) is not free of ambiguities with respect to the subdivision into bound-state contributions and continuum contributions, compare (31) and (30). The continuum correlations in the second virial coefficient are reduced if the quasiparticle picture is introduced. The remaining part of the continuum contribution in Eq. (30) is absorbed in the quasiparticle shift. This has been discussed in detail in [40,47,48]. At higher densities, we can introduce the quasiparticle picture also for the A = 2 channel, so that the deuteron energy E⁰_d is replaced by the in-medium (quasiparticle) deuteron energy, and the phase shifts δ_c(E) contain also the medium modifications, see [48]. The approximation based on the solution of the in-medium two-particle problem (8), leading to the generalized Beth-Uhlenbeck formula, is denoted as (3, medium) in Tab. II.

E. Cluster virial approach and correlated medium

A more advanced approach [(4, medium) in Tab. II] to the nuclear matter EoS would include clusters with arbitrary A. A cluster Beth-Uhlenbeck approach has recently been discussed [47] to include also higher-order (A > 2) correlations [3]. For the A-nucleon cluster, the in-medium Schrödinger equation (4) is derived, depending on the occupation numbers n(1; T, µ_n, µ_p) of the single-nucleon states |1⟩. It is obvious that the nucleons found in clusters contribute to the mean field leading to the self-energy, but also occupy phase space and contribute to the Pauli blocking. The cluster mean-field (CMF) approximation [3,43,47] also considers the few-body t-matrices in the self-energy and in the kernel of the Bethe-Salpeter equation. The CMF approximation leads to expressions similar to (10), but with the free-nucleon Fermi distribution f_{1,τ1}(1) replaced by the effective occupation number (5). Because the self-consistent determination of n(1; T, µ_n, µ_p) for given {T, µ_n, µ_p} is very cumbersome, as an approximation the Fermi distribution with new parameters {T_eff, µ^eff_n, µ^eff_p} (effective temperature and chemical potentials) has been introduced [3]. The effective chemical potentials µ^eff_τ realize the normalization to the given baryon densities n_τ^tot. A simple relation,

T_eff ≈ 5.5 MeV + 0.5 T + 60 n_B MeV fm³,    (33)

was given in Ref. [3] as an approximation for the region 5 MeV < T < 15 MeV and densities n_B < n_sat/2 of the parameter space. More detailed investigations are necessary to derive a more general expression for the effective temperature as a function of T, n_B, Y_p, including, for instance, α matter, where the medium consists of α nuclei. The present simple fit formula (33) may be considered as a first step in this direction.

F. Coulomb correlations and Debye-Thomas-Fermi screening

In Refs. [5,46,72] the Debye-Thomas-Fermi theory known from plasma physics was adapted to calculate the Coulomb corrections to the cluster energies E⁰_{AνP}(T, n_B, Y_p) and the nucleon density variations (pair distribution function) within the screening cloud around a cluster.
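In the classical, nondegenerate limit, the screening parameter of this Debye-Thomas-Fermi treatment presumably reduces to the usual Debye expressions (a sketch of the conventional formulas, not necessarily the exact form used in the paper):

\[ \kappa^2 \;=\; \frac{4\pi e^2}{T}\sum_{c} Z_c^2\, n_c, \qquad \Delta E_Z^{\rm Debye} \;=\; -\,\frac{Z^2 e^2 \kappa}{2}, \]

where the sum runs over all charged components c of the matter, and the shift applies to a cluster of charge Z embedded in the screening cloud.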
A more sophisticated approach to these Coulomb corrections can be formulated by considering the cluster self-energy due to the Coulomb part of the interaction in the dynamically screened potential approximation, see, for instance, Refs. [1,2]. The screened potential V_D(r) is obtained from the Poisson equation, and the screened density follows from the self-consistent solution of the linearized Poisson-Boltzmann equation. A typical parameter is the screening length 1/κ (cf. the sketch above). Two effects are obtained from the Coulomb interaction: (i) The density of the surrounding charged particles is reduced in the vicinity of the cluster. The total proton density n*_p at the surface of a cluster is smaller than the average density n_p^tot because of the Coulomb repulsion. This effect will diminish the in-medium corrections due to the short-range nucleon-nucleon interaction, especially for large clusters. (ii) In addition to the Coulomb energy of the isolated nucleus, i.e. at n_B = 0, a self-energy shift due to the finite charged nucleon density n_p^tot is obtained. For globally charge-neutral matter, in particular in high-density astrophysical objects, the Coulomb term in the Bethe-Weizsäcker formula is reduced. The Coulomb field of the charged nucleus extends over the entire space for n_B = 0, but is confined to the compensating screening cloud at finite matter density. This means a reduction of the electric field energy. To give an estimate, the simple Debye theory gives the shift −Z²e²κ/2. This effect makes the large clusters more stable. With increasing density, the valley of stable nuclei is shifted towards the symmetry line Z = N. Larger clusters can be formed because the destabilizing influence of the Coulomb term is reduced. Expressions to calculate both effects (i) and (ii) are given in the literature [5,46,72].

Note that a simplified description of the effects of Coulomb correlations is given by the Wigner-Seitz cell method as used, for instance, in Ref. [55]. In the spirit of the Wigner-Seitz model, all protons are removed from a sphere with the radius R^WS_A = (3Z/4πn_p^tot)^{1/3}, so that n^WS_p(r) = n*_p = 0 for r < R^WS, and n^WS_p(r) = n_p^tot elsewhere. Instead of the Debye shift, which reads −Z²e²κ/2 in the classical limit (uncorrelated medium), we then have a Wigner-Seitz expression involving R_A = 1.25 A^{1/3} fm = (3A/4πn_sat)^{1/3}. Light clusters A ≤ 4 are not significantly influenced by the Coulomb interaction. In contrast, the Coulomb interaction is of fundamental interest for the stability of large clusters, and it is very important in the phase transition region, determining the pasta-like structures. The simple Wigner-Seitz model can give only a rough estimate of the effects of the Coulomb interaction. It has to be replaced by more accurate calculations which treat the Coulomb correlations within a QS approach. The solution of the Poisson-Boltzmann equation with a consistent account of normalization gives an adequate treatment of Coulomb effects.

III. EXPERIMENTAL EVIDENCE AND RELEVANCE

We discuss several properties which can be observed. We do not give an exhaustive description of these properties, but intend only to point out the relevance of clustering in nuclear systems and the necessity to describe medium effects. The experimental verification of clustering in nuclear systems is not simple. First, in laboratory experiments we do not have infinite nuclear matter, but always finite systems.
Second, experiments like heavy-ion collisions (HIC) are non-equilibrium processes and are only approximately described by equilibrium properties, for instance within a freeze-out approach. Although a consistent quantum statistical description of cluster formation in HIC may only be achieved by future transport codes, local thermodynamic equilibrium is a prerequisite for such a nonequilibrium approach, not only as a test case for the quality of the equilibrium-limit solution, but also for the formulation of a kinetic theory beyond the Boltzmann single-nucleon description.

The cluster yields and the respective energy spectra have been discussed intensely to derive the thermodynamic properties of nuclear matter at finite temperature. Recently, investigations have been published [11] which clearly rule out the NSE but demonstrate the relevance of medium corrections. Within a QS approach including quasiparticle shifts and correlations in the continuum [3], it was possible to reproduce the data for the chemical constants of the light elements d, t, h, α obtained from the cluster yields, see Fig. 5. Simpler models for medium corrections, such as the excluded-volume concept [25], can be adapted to reproduce the data [37]. So-called Mott points have also been discussed when comparing the theory of cluster formation in dense matter with experiments [78]. Note that for n_B > n^Mott_{A,ν} the abundance of the cluster {A, ν} does not vanish: contributions from high momenta P > P^Mott_{A,ν}(T, n_B, Y_p) and from the continuum remain.

B. EoS and gas-liquid nuclear matter phase transition

The evaluation of the EoS (9) for symmetric matter (Y_p = 0.5) is shown in Fig. 6 for different T. Further thermodynamic potentials, such as the free energy F(T, N_n, N_p, Ω), are obtained by integration; see Sec. I A. We will not present results for all thermodynamic quantities, see Ref. [23], but discuss only the influence of clustering. RMF is applicable at high densities, where clusters disappear. The ideal Fermi gas, valid at very low densities, is improved by the NSE, which becomes invalid around n_B = 0.001 fm⁻³. An interesting point is the stability of the EoS for homogeneous matter with respect to phase transitions. For instance, if ∂µ/∂n_B|_T < 0, separation of phases with different densities will occur (a numerical illustration of this criterion is sketched below). We would like to point out that the parameter values of T and n_B considered here lie within that region of the temperature-density plane of matter where a nuclear matter phase transition is possible [41, 42, 54-56, 62, 79]. Of interest is the reduction of the region of phase instability obtained from RMF, if correlations and cluster formation are taken into account; see Fig. 6. The critical temperature T^mf_cr = 13.72 MeV, see Sec. II A, is reduced to T^QS_cr = 12.42 MeV [3,41,43], see also [28,32]. The spinodal instability has been considered [80,81], and a limiting temperature for large clusters has been discussed as a limit for thermalization owing to spinodal vaporization. This may be improved by considering the intrinsic partition function, in particular the continuum contributions. For homogeneous matter, phase separation is suppressed because of the long-range Coulomb interaction. Structures (droplets, wires, sheets, etc.) are formed separating high-density regions from low-density regions. Such so-called pasta phases give lower values for the free energy.
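The stability criterion quoted above can be illustrated with a toy chemical potential (ideal Boltzmann gas plus a linear mean-field shift); the mean-field constant c below is purely illustrative and not a fitted RMF parameter.

```python
import numpy as np

# Sketch of the instability criterion dmu/dn_B |_T < 0 for a toy model:
# mu(n) = T ln(n * Lambda^3 / 4) - c * n (degeneracy 4 for nucleons).
hbarc, m = 197.327, 939.0                       # MeV fm, MeV

def mu(n, T, c=300.0):                          # c in MeV fm^3, illustrative
    lam3 = (2.0 * np.pi * hbarc**2 / (m * T))**1.5
    return T * np.log(n * lam3 / 4.0) - c * n

T = 10.0
n = np.linspace(1e-4, 0.05, 400)                # baryon densities, fm^-3
dmu_dn = np.gradient(mu(n, T), n)
unstable = n[dmu_dn < 0]
if unstable.size:
    print(f"T = {T} MeV: unstable for n_B in "
          f"[{unstable.min():.4f}, {unstable.max():.4f}] fm^-3")
else:
    print(f"T = {T} MeV: no spinodal instability in this toy model")
```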
So-called nuclear pasta phases, which may have complex structures, are discussed in order to derive the EoS also within the region of thermodynamic instability, see [26,30,32,33]. Note that cluster formation in hot dense matter has also been treated as phase separation and droplet formation near the phase transition region. In Ref. [54], the dense phase is assumed to be distributed as droplets (identical nuclei) immersed in the low-density phase, which consists of free nucleons and ideal α particles. Within a finite-temperature Thomas-Fermi approach, abundances of clusters and probabilities of fluctuations of the droplet size were considered [55,82]. A mass formula approach which takes into account excited nuclei and Coulomb effects was given in Ref. [60]. These approaches, however, do not seem to be adequate to describe the abundance of small clusters, for which a more rigorous quantum statistical approach is necessary. Confusion arises when a nucleus, which is a bound quantum state, is treated as a droplet of a dense phase, which is a classical object.

C. Symmetry energy

Solving the EoS (9), after calculating the free energy as discussed above, further thermodynamic quantities are obtained, such as the internal energy per nucleon U(T, n_B, Y_p). The symmetry energy U_sym(T, n_B) describes the dependence of the internal energy on the asymmetry parameter Y_p and is related to the difference of the internal energy of neutron matter and symmetric matter, see [83,84] (the conventional quadratic parametrization is sketched below). The symmetry energy is sensitive to cluster formation, see Fig. 7. Whereas quasiparticle approaches such as Skyrme Hartree-Fock and relativistic mean-field (RMF) models predict in the low-density limit U_sym(T, n_B) ∝ n_B [83], the QS calculations [3,84] show strong deviations at low densities, compatible with the NSE. Cluster formation is strongly T dependent, and the low-density, low-temperature limit is dominated by the binding energy per nucleon in nuclei, which is ≈ 8 MeV. Such a finite value of U_sym(T, n_B) in the low-density region, in contradiction to the mean-field approaches, has been confirmed experimentally [11,85,86]. At low density the symmetry energy changes mainly because additional binding is gained in symmetric matter due to the formation of clusters and pasta structures [87].

D. Low temperatures and quantum condensates

A special feature of correlations and bound-state formation are quantum condensates. According to Eqs. (6) and (9), the Bose distribution function exhibits a singularity when the energy eigenvalue E_{Aν}(P; T, µ_n, µ_p) coincides with the cluster chemical potential µ_A = (A − Z)µ_n + Zµ_p (Thouless criterion). The case A = 2 is well investigated. Whereas in the low-density limit, depending on the asymmetry, Bose-Einstein condensation (BEC) of deuterons is expected to occur, at high densities Bardeen-Cooper-Schrieffer (BCS) pairing of continuum states happens. The dissolution of bound states is connected with the crossover from BEC to BCS [50]. These effects are described by solving the two-nucleon wave equation (8). However, in symmetric matter the formation of a BEC of deuterons interferes with the BEC of α particles (quartetting), which are strongly bound (7.1 MeV/A for α in contrast to 1.1 MeV/A for d), so that, at finite temperature, with increasing chemical potential (increasing density) the BEC of α particles occurs prior to the BEC of deuterons [88].
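Returning to the symmetry energy of Sec. III C above: the definition used there is consistent with the conventional quadratic parametrization in the asymmetry (a sketch of the standard convention, not a formula quoted from the paper):

\[ U(T,n_B,Y_p) \;\approx\; U(T,n_B,\tfrac12) \;+\; U_{\rm sym}(T,n_B)\,(1-2Y_p)^2 , \]

so that U_sym is essentially the difference between the internal energy of neutron matter (Y_p = 0) and that of symmetric matter (Y_p = 1/2).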
The BEC of α particles disappears abruptly when the bound states are dissolved owing to the Pauli blocking, which follows from the solution of the in-medium wave equation (4) for the special case A = 4, E_α = µ_4. Quantum condensates are not rigorously incorporated in present standard EoS. Whereas pairing is rather well described, the transition from α matter to nuclear matter at low temperatures has not been clearly described until now. Experimentally, signatures of pairing (rigorously defined for infinite matter) are seen in the even-odd staggering of the binding energies of nuclei. Signatures of quartetting have been identified for the Hoyle state of ¹²C [91,92]; see the following Sec. III E.

E. Nuclear structure

Results derived for homogeneous nuclear matter cannot be directly applied to finite nuclei. Only within a density functional approach does the local density approximation use these results. Neglecting any correlations, the mean-field approximation [(1, medium) of Tab. II] has been applied very successfully as the shell model of nuclei. The many-nucleon wave function is approximated by the antisymmetrized product of single-nucleon states, which are not momentum eigenstates as in homogeneous matter but have to be determined self-consistently. Inclusion of correlations is possible, for instance, by superposition of shell-model states, which needs some effort to realize a cluster structure. Other approaches implement the formation of clusters from the beginning, for instance the resonating group method and related approaches; for a review see [90]. Numerical simulations such as fermionic/antisymmetrized molecular dynamics (FMD/AMD) are at present restricted to small numbers (A ≤ 12) of nucleons.

There is clear evidence for clustering in nuclei if the density is low. We give some examples for α clustering in nuclei. Symmetric 4n nuclei (⁸Be, ¹²C, ¹⁶O, etc.) in dilute gas states near the nα threshold have been investigated, both experimentally and theoretically. A famous example is the Hoyle state, an excited state of ¹²C, see Refs. [90,91]. Antisymmetrization of the total wave function (Pauli blocking) is responsible for the fact that, in contrast to the low-density Hoyle state (large rms radius ≈ 4.29 fm), where α-like clusters are well established, in the dense ground state of ¹²C (rms radius ≈ 2.65 fm) the uncorrelated product state (Slater determinant) is a good approximation. Clustering is also visible in low-lying breathing excitations of nuclei as well as in nuclear reactions. Another signature of quartetting may be the Wigner contribution [93] to the binding energy of nuclei near Z = N. A long-standing issue of correlations in nuclei is the α decay of heavy nuclei, where a preformation of the α particle is assumed, see Refs. [94-96] and further references given there. α-like correlations appear in the low-density tails at the surface of heavy nuclei which are α emitters. For example, in ²¹²Po a quartet {n↑, n↓, p↑, p↓} moves on top of the double-magic ²⁰⁸Pb core nucleus. Outside a critical radius r_cr = 7.44 fm the nucleon density of the core nucleus is small, n_B < 0.03 fm⁻³, so that an α-like bound state can be formed, whereas inside the core nucleus the quartet is described by uncorrelated single-nucleon states. Improving on the local density approach, a rigorous description of the quartet state moving on top of a core nucleus has to be worked out in the future.

F. Astrophysics

An important application of the nuclear matter EoS is the structure and evolution of compact objects.
Simulations of supernova explosions have recently been performed to compare with observed signals. The thermodynamic parameters of the core-collapse scenario, Fig. 2, are found in the warm dense matter regime. Cluster formation is relevant to reproduce a consistent EoS [20]. In particular, the physics of core-collapse supernovae enters the parameter region where cluster formation with A ≤ 4 in the subsaturation region occurs [14]. The presence of clusters modifies the thermodynamic properties and affects, for instance, the neutrino transport [21,26,27,97]. Whereas previous approaches [18,19] considered only α-particle formation, recently also other light elements have been taken into account, within a quantum statistical model [20] or using the excluded-volume concept [25]. A detailed knowledge of the supernova explosion, including the neutrino transport, is necessary to answer different questions such as the emission of matter by explosions or the cooling rates of proto-neutron stars, which are influenced by cluster formation and the occurrence of quantum phases. The formation of heavy elements is an essential, unsolved problem in astrophysics. It needs a hot, neutron-rich environment. At present, the site where the r-process occurs, for instance supernova explosions or neutron star mergers, remains undetermined [98,99].

Another topic is the structure of neutron stars. Relevant parameter values {T, n_B, Y_p} appear in the crust, where cluster formation occurs. Heavy nuclei are immersed in an environment consisting mainly of neutrons and electrons. With increasing density, pasta phases are formed. The crust/core transition is presently under discussion [33], in particular the existence of a first-order phase transition. Inside the core, clusters (nuclei) are dissolved because of Pauli blocking.

IV. RESULTS AND DISCUSSION

Based on a quasiparticle concept, the present work aims at deriving the EoS for warm dense matter in the subsaturation region, incorporating the known low-density virial expansions as well as mean-field theories near saturation density. Different ingredients have been used:

(i) A cluster-virial expansion describes not only the different bound-state contributions to the EoS, like the NSE, but also takes into account the contribution of the continuum. For A = 2, according to the Beth-Uhlenbeck formula, the contribution of the continuum is given by the two-nucleon scattering phase shifts. Introducing single-nucleon quasiparticle energies, double counting of the mean-field terms has to be avoided.

(ii) Medium-modified bound-state energies and scattering phase shifts are used. They result from the solution of a few-nucleon wave equation which contains mean-field single-nucleon energy shifts as well as Pauli blocking terms. Both should be calculated taking into account self-consistently correlations and bound-state formation in the medium.

(iii) In homogeneous (stellar) matter, screening of the Coulomb interaction contributes to the medium modification of quasiparticle energies, in particular for large clusters, and to pasta-like structures in the phase transition region.

Whereas for charged-particle systems the Coulomb interaction is exactly known and a systematic quantum statistical treatment is well elaborated, see, e.g., [2], a fundamental N−N interaction is not at our disposal. Nevertheless, the quasiparticle properties are well defined from correlation functions which can, in principle, be measured.
A semiempirical approach has been used to calculate the quasiparticle properties after introducing an effective N−N interaction adjusted to known properties of nuclear systems. In the zero-density limit, we can avoid the solution of the A-nucleon wave equation (4) by using the empirical energies E⁰_{AνP} [64] to evaluate the EoS (9), resulting in the NSE. The second virial coefficient is determined by the measured scattering phase shifts. Similarly, empirical data for properties near the saturation density are used to parametrize the single-nucleon quasiparticle energy shifts E_τ(p; T, n_B, Y_p) by a Skyrme force or RMF expressions. The same is also possible for the quasiparticle energy shift E_{AνP}(T, n_B, Y_p) of the light clusters d, t, h, α, where empirical values for the rms radius or more details about the wave function can be used [70]. The contribution of continuum correlations as well as of a correlated medium is discussed in [3] for the light elements A ≤ 4. A microscopic approach to the quasiparticle energy shifts, solving the Brueckner equations for the single-nucleon case or the in-medium Schrödinger equation (4) for the A-nucleon case, would be of interest for future work, but demands an expression for the N−N interaction. At present, only the case A = 2 has been treated this way [48,49].

Evidence for clustering at low densities and for medium modifications is obtained from nuclear structure investigations. Fewer investigations have been performed for the light metals 5 ≤ A ≤ 11. Because their binding energies are weak (⁸Be is unbound) and strongly influenced by the medium, see [72], their abundances are strongly reduced in comparison with the NSE; see results of HIC experiments [100] as well as calculations for stellar matter [101,102] and for cosmic rays [103,104]. Compared with the light elements, the internal structure of heavy elements A ≥ 12 (including excited states) is not drastically influenced by medium effects. The interaction with the surrounding nucleons is determined by Pauli blocking, which is reflected by the concept of excluded volume. For heavy nuclei, an upper limit for the account of excited states has been introduced to obtain convergent results at higher temperatures. In addition to the strong interaction, also the Coulomb interaction has to be treated, see Sec. II F. In particular, it is of relevance for large clusters and for pasta structures in the region of phase instability. Future investigations are necessary to include heavy elements as well as pasta structure formation, especially in the region which is characterized by the thermodynamic instability of symmetric nuclear matter. Another challenging issue for a general EoS is the account of quantum condensates (pairing, quartetting) at low temperatures.

V. OUTLOOK

For the interpretation of HIC results, thermodynamic relations such as the EoS describing infinite systems in thermodynamic equilibrium are not directly connected with the measured cluster yields and their energy spectra, because the nuclear system is inhomogeneous in space and, because of strong non-equilibrium, inhomogeneous in time. An adequate description should consider kinetic equations for the distribution functions (Wigner functions) of all clusters, which have as equilibrium solutions not the ideal Fermi gas but an appropriate approximation of the EoS, see Tab. II. Thus, the EoS containing quasiparticle clusters (medium-modified nuclei) may be considered as a prerequisite for the formulation of a transport code for the nonequilibrium evolution.
This is described by the extended von Neumann equation for the statistical operator ρ(t) = lim_{ε→0} ρ_ε(t) [105]. The relevant statistical operator ρ_rel(t) is obtained from the maximum of the entropy reproducing the local, time-dependent composition with parameter values T(r, t), µ_n(r, t), µ_p(r, t), but contains in addition the cluster distribution functions f^Wigner_{Aν}(p, r, t) as relevant observables [106,107]. Even if we can define a freeze-out state (temperature and chemical potentials) which determines the main features of the composition, further reaction and decay processes will occur before the cluster yields observed in the detectors are established. In this context, not only the decay of excited and unbound (e.g. ⁸Be) nuclei is of interest, but also what happens with the continuum correlations which are present at high densities. For instance, in the n−n channel, where no bound state arises, all continuum correlations contribute to the neutron distribution function. In contrast, in the n−p channel part of the continuum correlations contributes to the deuteron distribution, whereas the remaining part is found in the distribution of free neutrons and protons; for a discussion see Ref. [3]. Future work is necessary to devise a transport theory for HIC which is compatible with the thermodynamic properties and the EoS described in this work as equilibrium solution [108-110].

In this context it is also of interest to find optimum parameter sets {T, n_B, Y_p} for the reproduction of observed abundances of clusters. This has been done for HIC, where the yields of light clusters have been used to infer the thermodynamic parameter values [11,111]. In contrast to a simple chemical equilibrium, such as the Albergo thermometer or densitometer, which is connected with the ideal mixture of different components (NSE), density effects are of relevance. The freeze-out parameters represent a state during the evolution where local thermodynamic equilibrium ceases to be realized, even approximately. The further evolution is characterized by the different cluster distribution functions. It is determined by collisions, reactions, and decay processes.

This description can also be applied to astrophysical abundances of elements. Only the gross properties of the elemental distribution are described by a freeze-out approach. Details are related to further reactions during cooling and expansion, forming local (with respect to the N−Z plane) deviations. Based on the cluster distribution functions f^Wigner_{Aν}(p, r, t) as relevant observables, a reaction network can describe this stage of the evolution of the nuclear system. As an example, we can compare solar element abundances X_A = n_A/n_B [112] with calculations of the abundances for hot dense matter. Reasonable agreement with the gross behavior of the solar abundances was obtained with the parameter values temperature k_B T = 5 MeV and nucleon density n_B = 0.016 fm⁻³ [5,72,101,102]. Other parameter values, for instance k_B T = 5 MeV, nucleon density n_B = 0.0156 fm⁻³, as well as k_B T = 5.5 MeV, nucleon density n_B = 0.0168 fm⁻³, can be associated with the chemical composition of different stars [103]. Cosmic ray abundances are parametrized with a higher temperature k_B T = 5.88 MeV and nucleon density n_B = 0.018 fm⁻³ [104]. The asymmetry variable Y_p = 0.5 was not optimized. Medium modifications are of relevance for these parameter values.
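For illustration, a minimal NSE-type estimate of such abundances can be coded in a few lines; the cluster list, degeneracies, and binding energies below are illustrative inputs, symmetric matter (µ_n = µ_p) and Boltzmann statistics are assumed, and the medium effects that are the subject of this work are deliberately omitted.

```python
import numpy as np

# Minimal NSE sketch of abundances X_A = n_A / n_B for symmetric matter,
# Boltzmann statistics, no medium effects. g_A and B_A are illustrative.
hbarc, m = 197.327, 939.0                  # MeV fm, nucleon mass in MeV

def lam3(mass, T):
    """Thermal wavelength cubed (fm^3) for a particle of the given mass."""
    return (2.0 * np.pi * hbarc**2 / (mass * T))**1.5

def nse_abundances(T, n_B, clusters):
    """clusters: dict name -> (A, g_A, B_A in MeV). Finds the fugacity
    z = exp(mu/T) reproducing the baryon density n_B by bisection."""
    def n_of(z, A, g, B):
        return g / lam3(A * m, T) * z**A * np.exp(B / T)
    def total(z):
        return sum(A * n_of(z, A, g, B) for (A, g, B) in clusters.values())
    lo, hi = 1e-12, 1.0
    for _ in range(200):
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if total(mid) < n_B else (lo, mid)
    z = np.sqrt(lo * hi)
    return {k: n_of(z, A, g, B) / n_B for k, (A, g, B) in clusters.items()}

clusters = {"N": (1, 4, 0.0), "d": (2, 3, 2.225), "alpha": (4, 1, 28.3)}
print(nse_abundances(T=5.0, n_B=0.016, clusters=clusters))
```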
In addition to the deviations from the NSE for the light elements, we find a strong depletion due to Pauli blocking for the weakly bound light metals 5 ≤ A ≤ 11. Note that the problem of the origin of the elements is not completely solved until now. In particular, the site where the heavy elements are produced has not yet been identified. In this connection it is of interest to ask for parameter sets {T, n_B, Y_p} which optimally reproduce the observed abundances.
Antiglycation Activities and Common Mechanisms Mediating Vasculoprotective Effect of Quercetin and Chrysin in Metabolic Syndrome

Multiple risk factors combine to increase the risk of vascular dysfunction in patients suffering from metabolic syndrome (MetS). The current study investigates the extent to which quercetin (Q) and chrysin (CH) protect against vascular dysfunction in MetS rats. MetS was induced by feeding rats a high-salt diet (3%) and fructose-enriched water (10%) for 12 weeks. Thoracic aortae were isolated from MetS rats and from control rats, with the latter being injured by methylglyoxal (MG). Aortae were incubated with CH and Q, and vascular reactivity was evaluated through the analysis of aortic contraction and relaxation in response to PE and ACh, respectively. The formation of advanced glycation end products (AGEs) and the free radical scavenging activity towards 1,1-diphenyl-2-picrylhydrazyl (DPPH) were also evaluated following the introduction of CH and Q. The increased vasoconstriction and impaired vasodilation in MetS aortae were significantly ameliorated by Q and CH. Similarly, they ameliorated the glycation-associated exaggerated vasoconstriction and impaired vasodilation produced by MG in control aortae. In addition, both Q and CH were effective in reducing the formation of AGEs and in inhibiting glycosylation in response to MG or fructose treatment. Finally, Q successfully scavenged DPPH free radicals, while CH showed significant vasodilation of precontracted aorta that was inhibited by L-NAME. In conclusion, Q and CH provide protection against vascular dysfunction in MetS by interfering with AGE formation and AGE-associated vascular deterioration, with CH being largely dependent on NO-mediated mechanisms of vasodilation.

Introduction

The condition "metabolic syndrome (MetS)" describes a group of conditions including central obesity, dyslipidaemia, hyperglycaemia, and hypertension. The presence of a combination of these conditions, rather than only one, is required in order to draw a conclusive diagnosis of MetS [1]. CVD and type 2 diabetes mellitus (T2DM) develop more commonly in individuals who have any of these interrelated metabolic risk factors [2]. The effect of the syndrome upon the population is significant, and as the incidence of MetS is rising, it is important to devise well-defined diagnostic criteria and treatment guidelines. It is estimated that 37% of adults in the Kingdom of Saudi Arabia have MetS. The prevalence of the syndrome is lower in rural areas than in urban populations, reflecting lifestyle differences in the levels of activity between the two populations [3]. In particular, the rising obesity trend, especially central obesity, is considered a key factor in the development of MetS [4]. Central obesity is associated with glucose intolerance, in which the body is less able to use glucose [5]. In MetS, an insulin-resistant state develops, ultimately leading to hyperglycaemia [6]. The adverse effect of hyperglycaemia upon the cardiovascular system stems from the frequently accompanying impaired fibrinolytic pathways and hypercoagulation [7]. Moreover, in MetS, persistent hyperglycaemia promotes glycation, the nonenzymatic reaction between monosaccharide sugars and proteins. The glycation of proteins is further heightened by increased quantities of the reactive sugar derivative methylglyoxal (MG), which is also elevated secondary to hyperglycaemia. This reaction is considered irreversible, producing compounds known as AGEs [8].
AGEs cause blood vessels to become rigid, which, when combined with other pathological manifestations of diabetes, leads to persistent microvascular complications [9,10]. Furthermore, as an adjunct to the production of AGEs, the glycoxidation products dityrosine and N′-formylkynurenine are formed. These are useful markers that can be quantified to determine the degree of oxidative protein damage [11,12]. Hyperglycaemia is not the only MetS condition that is damaging to the vasculature. Persistent hypertension and hyperlipidaemia also contribute to increased inflammation and oxidative stress and to reduced production of the vasodilator nitric oxide (NO) [13,14]. The adverse effects on vessel function manifest themselves as an attenuation of vasodilation and an increase in vasoconstriction [15].

A number of synthetic pharmaceuticals are used to attenuate vascular dysfunction, but their side effects are undesirable; this highlights the need to identify other effective compounds that are therapeutically safe and do not carry the same adverse side effects. To this end, researchers have investigated a number of naturally occurring compounds, many of which are flavonoids [16]. Two such examples are the flavonol quercetin (Q) and the flavone chrysin (CH). Consistent with other flavonoids, both compounds bear the distinctive tricyclic polyphenolic structure [17,18]. Q is a ubiquitous compound, found in many fruits and vegetables including apples, peppers, and onions. On the other hand, sources of CH are less common but include chamomile, honey, and passionflower (Passiflora caerulea) [18,19]. Q is reported to offer diverse health benefits, such as anti-inflammation, antioxidation, and the ability to stimulate endothelial production of NO [20-22]. Such actions are extremely valuable in the prevention and even treatment of grave conditions such as cancer and the wide array of disorders that make up cardiovascular disease. The established beneficial effects of Q offer a strong basis on which the current research can build. Based on such findings, Q is a potential compound that could be used to reduce MetS-initiated vascular damage. Similar to Q, CH has also been described as offering antioxidant and anti-inflammatory benefits, which, as mentioned before, are key actions involved in the prevention of many diseases. Such benefits of both Q and CH have been strongly established, with the published results supporting a significant desirable effect rather than weak conclusions based on speculation. It must be said, however, that research indicates that CH has less pronounced effects when compared to Q [23,24]. Therefore, this study aims to explore the extent to which CH and Q offer vascular protection to the MetS aorta and the potential mechanisms behind any effects observed.

Experimental Animals. Male Wistar rats aged 7 weeks and weighing between 180 and 200 g were obtained from King Fahd Medical Research Center (King Abdulaziz University, Jeddah, Saudi Arabia). Groups of four rats were kept in polypropylene animal housing; adequate ventilation, purified water, and standard rodent diet pellets were constantly available. Approval for the study's experimental protocol was granted by the Research Ethical Committee, Faculty of Pharmacy, King Abdulaziz University, Jeddah, Saudi Arabia (approval number 1071439). The Saudi Arabia Research Bioethics and Regulations were observed throughout the study.
To induce MetS, 30 rats were given fructose-enriched water (10%) and a high-salt diet (3%) for 12 weeks [25], while unadulterated food and water were given to control rats. By the end of this period, MetS rats weighed more than 250 g. Study Protocol. The rats were randomly allocated to either the control (C) or MetS group. After 12 weeks, the rats were decapitated by rodent guillotine and their descending thoracic aortae harvested. Termination of the rats by guillotine was necessary in order to avoid blood clotting inside the aorta, which would affect the current experiments. Measurements of MetS Indices. Systolic blood pressure (SBP) was measured in control rats and in rats fed the high-fructose and high-salt diet at the end of the study by the tail cuff method, as described in detail in previous work of our laboratories [26]. Briefly, the measurement is preceded by an equilibrating 5-10 min period for the rats in the warming chamber, followed by 10 repetitions of the automated inflation-deflation cycles. Serum insulin level was determined using an enzyme-linked immunosorbent assay kit using anti-rat insulin antibodies (Millipore, Billerica, Massachusetts). Effect of Q and CH on MetS Vascular Dysfunction. To determine vascular reactivity, previously established techniques were used to isolate the aorta [26,27]. Excised aortae from MetS animals were defatted, sectioned into 3 mm rings, and then suspended in an automated organ bath (Panlab, Barcelona, Spain). In addition to containing a single section of aorta, each channel contained 25 ml of Krebs Henseleit buffer (118 mM NaCl, 4.8 mM KCl, 2.5 mM CaCl2, 1.2 mM MgSO4, 1.2 mM KH2PO4, 25 mM NaHCO3, and 11.1 mM glucose). The bath was incubated at 37°C and continuously aerated with a 95% O2 and 5% CO2 gas mixture. Every 30 min, the buffer solution was replaced. To measure aortic tension, an isometric force transducer (ADInstruments, Bella Vista, Australia) was used. Data were collected and recorded using the PowerLab Data Interface Module; this was connected to a PC operating LabChart software v8 (ADInstruments). To achieve the equilibrium state, aortic tension was adjusted to 1500 ± 50 mg for 20 min. The primary initiator of aortic contraction was PE (10−5 M), and relaxation was stimulated by ACh (10−5 M). Once resting-state tension was achieved (1500 ± 50 mg), Q and CH at concentrations of 10 and 30 μM were added to the channels and then incubated at 37°C for 60 min. The vehicle (0.1% DMSO) was added to the control channels. To study the contractile response of the aorta following incubation, PE (10−8 to 10−5 M) was cumulatively added. The same cumulative addition process was applied using ACh (10−8 to 10−5 M) to evaluate the relaxation response, but in submaximal PE-precontracted vessels. To establish contraction, the tension increase was measured in mg, and, conversely, the percentage of the PE-induced contraction was used to establish relaxation. Effect of Q and CH on MG-Induced Vascular Dysfunction. For the MG experiment, the same procedure was followed, except that the aorta rings were isolated from control animals and suspended. Then aortic contraction and relaxation were initially stimulated by PE and ACh, respectively, before being incubated with MG (100 μM) in the absence or presence of Q or CH (10 and 30 μM) at resting tension (1500 ± 50 mg) for 60 min. Then, as previously described, cumulative concentrations of PE and ACh (10−8 to 10−5 M) were added. Effect of Q and CH on AGE Production. A 96-well black plate was used to evaluate the effect of Q and CH on the formation of AGEs [28].
Bovine serum albumin (BSA), 10 mg/ml in phosphate-buffered saline (PBS), was added first to the wells, followed by the addition of Q and CH (10-100 μM). AG (1 mM) was used as a glycation inhibitor and was added to the wells containing the BSA and compounds. To this mixture, MG (50 mM) was added directly after preparation and left for incubation in the dark at 37°C for 1 h. The same was done with fructose (50 mM), except that the incubation time was 2 weeks and sodium azide 0.02% was added to all of the wells. A row of wells was kept as control, where PBS was added instead of MG or fructose. To quantify the production of AGEs, the fluorescence intensity was measured at λex = 325 nm and λem = 440 nm using a Monochromator SpectraMax® M3 plate reader (Molecular Devices, Sunnyvale, CA, USA). To measure the quantities of dityrosine, N′-formyl kynurenine, and kynurenine, fluorescence intensity was measured at λex = 330, 325, and 365 nm, and λem = 415, 434, and 480 nm, respectively [29]. Antioxidant Activity of Q and CH. The compounds' antioxidant potential, or ability to scavenge reactive oxygen species (ROS), was assessed (with modification) as described in [30]. In a 96-well clear plate, Q and CH (10-100 μM) in methanol were added, with the blank kept as only methanol. DPPH solution (240 μM) in methanol/tris (1:1 v/v), prepared immediately prior to use, was then added to the wells. The same concentrations of Q and CH were added to only methanol/tris (1:1 v/v) for control. Using the Monochromator SpectraMax® M3 plate reader (Molecular Devices, Sunnyvale, CA, USA), the absorbance was measured every minute for 10 min at 520 nm. Direct Relaxation Effect of Q and CH. In accordance with the procedure described earlier, aorta rings were suspended in KHB-filled channels. They were kept at resting tension using the same method previously described, tested for PE contraction and ACh relaxation, and then directly incubated at 37°C for 30 min with the nitric oxide synthase inhibitor, L-NAME (1 mM). A submaximal dose of PE was added to initiate preconstriction; once contraction plateaued, ascending concentrations (10-100 μM) of CH and Q were added. Before the addition of the subsequent concentration, the tissue was given time to reach the relaxation plateau. The vehicle was added to time control channels to eliminate any possible effect of DMSO, whose final concentration did not exceed 0.1%. Before termination of the experiment, a single dose of ACh (10−5 M) was added to all of the channels to achieve complete aortic relaxation. Statistical Analysis. One-way analysis of variance (ANOVA) followed by Dunnett's post hoc test was used to analyze the glycation experiment data. Two-way ANOVA followed by Bonferroni post hoc test was used for the vascular reactivity, free radical scavenging activity, and direct relaxation experiments. Unpaired Student's t-test was conducted to analyze MetS data. These analyses were carried out using GraphPad Instant software, version 5 (GraphPad Software, Inc., La Jolla, CA, USA). Statistical significance was set at p < 0.05. Experimental values were expressed as mean ± SEM (standard error of the mean). Effect of High-Fructose High-Salt Diet on MetS Indices. Feeding rats on high-fructose (10% in drinking water) and high-salt diet (3%) for 12 weeks led to the development of metabolic syndrome in these animals, as indicated by the significant elevations in systolic blood pressure and serum insulin and the increase in body weight (Table 1).
As the data in Figures 2(a) and 2(b) demonstrate, vascular relaxation in the MetS aorta was impaired in response to ACh (10−8 to 10−5 M) compared to the control. There was a significant difference between the MetS aorta and the control, with the effect appearing at approximately 10−6.5 M ACh and reaching a maximum relaxation at 72% of the control values. Incubating the MetS aorta with 10 and 30 μM of Q or CH corrected the attenuated vasorelaxation to control levels. The graphs show that these concentrations of the compounds were similarly effective in lowering the aorta tension during relaxation. Effect of Q and CH on MG-Induced Vascular Dysfunction. As Figures 3(a) and 3(b) show, the addition of MG to the aorta caused exaggerated vascular contraction in response to PE (10−8 to 10−5 M) when compared to control. This is demonstrated by the tension in the MG-treated aorta being significantly higher, starting from a 10−6.5 M PE concentration. Both concentrations of Q and CH (10 and 30 μM) significantly reduced the extent of contraction, returning it back to normal and, in the case of CH, even below that of the control. Figures 3(c) and 3(d) show that, following incubation with MG, smooth muscle dilatation of the aorta in response to ACh (10−8 to 10−5 M) was impaired compared to control. As portrayed in Figures 3(c) and 3(d), there was a significant difference between the MG-treated aorta and the control during vascular relaxation, with the effect appearing at approximately 10−7 M ACh. Both concentrations of Q and CH (10 and 30 μM) were able to ameliorate this impaired vasorelaxation, restoring it to values that were comparable to those of the control. Effect of Q and CH on AGE Production. Figures 4(a)-4(c) show that, compared with the control, incubation of BSA (10 mg/ml) with MG (50 mM) resulted in a significant increase in AGE production, as well as the protein oxidation products, dityrosine, kynurenine, and N′-formyl kynurenine. Addition of AG (1 mM) to the reaction mixture significantly suppressed the levels of AGEs and the protein oxidation products. Both concentrations of Q and CH were significantly able to inhibit the formation of MG-mediated AGEs and the resultant protein oxidation in a concentration-dependent manner. A similar pattern of results is seen for the fructose-mediated glycation reaction (Figures 5(a)-5(c)). Incubating BSA with fructose (50 mM) resulted in a significant increase in AGE and protein oxidation products compared with the control. This was significantly inhibited by AG (1 mM). With the exception of 10 μM of Q, levels of AGE and protein oxidation were significantly reduced by all concentrations of CH and Q. The results indicate that only Q possessed DPPH free radical scavenging activity, which translates into an antioxidant effect. Figure 6(a) shows that both concentrations of Q operate in a concentration-dependent manner, significantly influencing antioxidant activity. Furthermore, the graph reveals that the reaction took place within the first few minutes and then reached a plateau. The time taken to reach that plateau varied according to the concentration used. As expected, a faster reaction was seen with 30 μM Q, lasting for 6 minutes, compared to a 10 minute reaction time when 10 μM Q was added. Direct Relaxation Effect of Q and CH. Following a single dose of PE (10−5 M) to induce contraction, the addition of cumulative concentrations of Q (at 10 and 30 μM) to the aorta did not have any effect (Figure 7(a)).
However, addition of cumulative concentrations of CH (at 10 and 30 μM) to the aorta brought about a decrease in tension and hence concentration-dependent vasodilation (p < 0.05, Figure 7(b)). CH produced potent vasodilation that reached 90% relaxation at a concentration of 30 μM. In addition, incubation of the aorta with L-NAME (1 mM) completely blocked this potent CH-induced vasodilation (p < 0.05, Figure 7(b)). Discussion The purpose of this study was to investigate whether Q and CH possess vasculoprotective effects to ameliorate vascular damage commonly seen in MetS and to determine possible mechanisms of action. To our knowledge, this is the first study to investigate the direct vasculoprotective effects of the natural compounds Q and CH on MetS aorta and the role of AGE inhibition in their effects. The results showed that Q and CH protect against MetS-associated vascular dysfunction. The common effect of the Q and CH compounds was to (1) interfere with AGEs-induced exaggerated vasoconstriction and impaired vasodilation and (2) significantly inhibit AGE formation in a dose-dependent manner. In addition, Q had extra free radical scavenging activity, while CH showed nitric oxide-dependent vasodilation.
[Figure 2: Effect of (a) Q and (b) CH on isolated aorta responsiveness to ACh in fructose-induced MetS after incubation with Q or CH at 37°C for 60 minutes. Results are expressed as mean ± SEM (n = 6-8). #p < 0.05 when compared to control, *p < 0.05 when compared to MetS by two-way ANOVA followed by Bonferroni post hoc test.]
Vascular dysfunction is associated with several MetS-related factors, the most important of which are the advanced glycation end products (AGEs) [31]. Such vascular dysfunction is characterized by increased vasoconstriction and attenuated dilatation [15]. Other than MetS, vascular dysfunction can be directly induced in vitro with the use of the glycation intermediate methylglyoxal (MG). Being a highly reactive sugar derivative, MG can result in acute production of AGEs, which is largely responsible for vascular damage. This finding is in agreement with that of Dhar et al. [32], who proved that MG results in vascular dysfunction characterized by attenuated ACh-induced aortic relaxation. The results show that Q and CH significantly alleviated both the exaggerated vasoconstriction of the aortic rings and the attenuated ACh-induced vasodilation found in MetS aortae. This finding is consistent with that of Sánchez et al. [33], who demonstrated that in vivo treatment with Q in hypertensive rats enhanced aortic vasodilation. In addition, El-Bassossy et al. [34] also proved the ability of CH treatment to improve the exaggerated vasoconstriction in insulin-resistant rats. The effect on AGEs-induced vascular damage was investigated as a possible mechanism of action of Q and CH. In this regard, both Q and CH inhibited MG-induced exaggerated vasoconstriction and impaired vasodilation in a very similar way to that observed in MetS aortae. This suggests that AGEs can be a common important pathway for the vascular protection effect of Q and CH. The effect on AGE formation was studied in order to further investigate the effect of Q and CH on different sides of the AGE pathways. The current study showed that both Q and CH significantly inhibit both fructose- and MG-produced AGEs in a concentration-dependent manner. One hypothesized mechanism for Q's antiglycation action is that the compound has a polyphenolic structure, enabling it to scavenge free radicals.
The structure, which is rich in hydroxyl groups, is essential to the antioxidant effect shown by the compounds [35]. Such compounds donate a hydrogen atom to reduce free radicals, hence preventing protein oxidation, a main step in AGE formation. This is further supported by the strong antioxidant activity of Q in its reaction with the DPPH free radical. This is consistent with several previously conducted studies, which have researched the antiglycation activity of Q [36][37][38]. Regarding the flavonoid CH, its inability to scavenge DPPH free radicals has been investigated by Kang et al. [39], who reported that CH is missing two hydroxyl groups, which are essential for the compound (flavone) to exhibit antioxidant activity. This is further supported by Naso et al. [40], who stated that CH was unable to diminish the levels of DPPH radicals. However, despite CH possessing lower antioxidant potential than flavonoids such as Q, it is still able to prevent protein glycation significantly. This finding coincides with that of Matsuda et al. [41], who stated that various flavonoids with strong AGE formation inhibition may be poor scavengers of DPPH radicals. A suggested mechanism of action of CH in inhibiting the formation of AGEs could be the strong affinity of the flavone to BSA, which prevents the protein's glycation [42]. The ability of CH to decrease AGE levels has also been demonstrated in vivo, where the flavonoid significantly inhibited the increase in serum AGEs in diabetic animals [34]. A mechanism of action for such an effect might be that, in response to CH and Q, NO is released from endothelial tissue, directly causing the PE-precontracted aorta to relax significantly.
[Figure 3: Effect of (a) Q and (b) CH on isolated normal aorta responsiveness to PE after incubation with MG (100 μM) in the presence and absence of Q or CH at 37°C for 60 minutes. Effect of (c) Q and (d) CH on isolated normal aorta responsiveness to ACh after incubation with MG (100 μM) in the presence and absence of Q or CH at 37°C for 60 minutes. Results are expressed as mean ± SEM (n = 6-8). #p < 0.05 when compared to control, *p < 0.05 when compared to MG by two-way ANOVA followed by Bonferroni post hoc test.]
Evidence in support of this NO-dependent mechanism comes from the compounds' direct relaxation effect on precontracted aorta, which was significantly reduced in the presence of the eNOS inhibitor L-NAME. A closer observation of the results shows that both the MG-treated aorta and the CH-treated MetS aorta display vasoconstriction much lower than that seen in control animals. This effect was unique to CH rather than Q. Furthermore, L-NAME inhibited the direct relaxation effect of CH more than that of Q. Both these findings may suggest that CH in particular causes potent eNOS stimulation, producing greater levels of NO when compared to Q. This is in accordance with the findings of Villar et al. [43], who concluded that CH, unlike several other flavonoids, brought about aortic relaxation in vitro mainly through endothelium-dependent methods involving increased endothelial NO production. Duarte et al. [44] further argued that CH may in fact induce vascular relaxation in isolated aorta through additional NO-related mechanisms other than eNOS stimulation.
[Figure 6: Effect of (a) Q and (b) CH on ROS production as initiated by DPPH (240 μM). Control is a reaction mixture including only DPPH. Results are expressed as mean ± SEM (n = 3). *p < 0.05 when compared to each corresponding control by two-way ANOVA followed by Bonferroni post hoc test.]
The study proved that CH was able to improve the response of the aorta to authentic NO through scavenging superoxide anions (O2•−), which break down NO. In addition, CH also potentiated the cGMP pathway, a signaling cascade involved in NO-induced vasodilation. This was proven by previous studies, which reported an increase in cGMP accumulation in isolated rat aorta in response to CH [43]. The results of this study prove that, apart from the NO-releasing mechanism, both Q and CH protect aorta sections from vascular damage by preventing AGEs from forming. AGEs alter the structure of proteins and their functions. In the case of blood vessels, glycated collagen results in reduced vessel elasticity [45]. In addition, elevated levels of AGE increase oxidative stress, activate inflammatory mediators, alter the lipid profile, and quench NO, all of which contribute to endothelial damage [46]. The obvious effect of these compounds on ameliorating vascular dysfunction, as proven by this study, can be used as a basis for future research. In addition, the possible effects that the compounds may have in vivo, when added to the diet, and the possible improvement in the same parameters measured can be further investigated. Conclusions The current study shows that the natural flavonoid compounds Q and CH have direct vasculoprotective effects on isolated thoracic aorta from MetS rats. This is demonstrated by the compounds' ability to ameliorate the exaggerated vasoconstriction and attenuated vasodilation typical of MetS vascular dysfunction. This effect can be attributed to the compounds' ability to reduce AGE production as well as the concomitant protein oxidation products. In addition, there is significant dependence on NO-mediated mechanisms and, in the case of Q, antioxidant activity, which contribute to the vascular protection. Data Availability The data used to support the findings of this study are available from the corresponding author upon request.
[Figure 7: Direct vasorelaxation effect of the compounds (a) Q and (b) CH on PE-precontracted isolated aorta with and without incubation with L-NAME (1 mM) at 37°C for 30 minutes. Results are expressed as mean ± SEM (n = 6-8). *p < 0.05 when compared to control by two-way ANOVA followed by Bonferroni post hoc test.]
2020-07-30T02:09:11.283Z
2020-07-27T00:00:00.000
{ "year": 2020, "sha1": "12f13d621eb47258317694c9d6697b705e664628", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ecam/2020/3439624.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "330bcc2afb0490fed3c191fe017a41e0dd36fa93", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
250041811
pes2o/s2orc
v3-fos-license
Adjuvant Denosumab therapy following curettage and external fixator for a giant cell tumor of the distal radius presenting with a pathological fracture: A case report Introduction and importance Denosumab is used as a neoadjuvant therapy for giant cell tumours (GCT) prior to surgery to improve surgical clearance and reduce the rate of recurrence. However, the use of denosumab as adjuvant therapy following an external fixator for GCT of the distal radius has not been commonly described. We describe the use of adjuvant denosumab following curettage and external fixation in a patient with GCT of the distal radius presenting with a pathological fracture. Case presentation A 23-year-old male presented with a right distal radius fracture. Imaging was suggestive of a Campanacci grade 3 GCT at the distal radius with a pathological fracture. His chest X-ray was normal. He was managed with a dorsal open distal radius curettage and stabilization of the fracture with an external minifixator. Histology confirmed a GCT, and adjuvant denosumab therapy was given. The response was satisfactory, and the external fixator was removed at 5 months. At 42 months post-treatment, he had satisfactory function with no evidence of recurrence. Clinical discussion The extensive involvement of the distal radius and local invasion precluded the use of internal fixation after thorough curettage. Therefore, an external minifixator was applied to stabilize the fracture, and denosumab was started following an oncology opinion. Conclusion External fixation and adjuvant denosumab may be considered as an option in patients who are not suitable for internal fixation. However, cohort studies with long-term follow-up are necessary before it can be recommended in routine practice. Introduction Giant cell tumor (GCT) is a bone neoplasm that grows from clusters of neoplastic, mononuclear cells amongst uniformly distributed large osteoclast-like giant cells. It is generally benign in nature, with a tendency for local infiltration, and rarely metastasizes to the lungs [1]. Furthermore, it has a recurrence rate of 15-45% [2]. This osteolytic lesion results from activated osteoclast-mediated bone resorption via the RANK/RANKL pathway [3]. The distal radius is the third most common site for GCT, following the knee joint and the proximal humerus [4]. Surgery is the mainstay of treatment; however, the location of the tumor may preclude radical resection due to the functional demand of the involved site. Denosumab is used as a neoadjuvant therapy for GCT prior to surgery to improve surgical clearance and reduce the rate of recurrence. However, the use of denosumab as adjuvant therapy following an external fixator for GCT of the distal radius has not been described. We describe the use of adjuvant denosumab following curettage and external fixation in a 23-year-old male patient with a Campanacci grade 3 GCT of the distal radius presenting 3 months after a pathological fracture. The work has been reported based on the SCARE 2020 criteria [5]. Case presentation A previously healthy, 23-year-old, right-hand-dominant Sri Lankan male presented with pain along the radial side of the right wrist for 3 months following a fall on the outstretched hand. He had no significant drug or allergy history, family history, or psychosocial history. Clinical examination was unremarkable except for mild swelling and tenderness over the distal radius. X-ray films showed an osteolytic lesion at the distal radius with a pathological fracture (Fig. 1).
Magnetic resonance imaging (MRI) showed a probable Campanacci grade 3 GCT at the distal radius with a pathological fracture. The lesion was expansile, with extension into the pronator quadratus, and showed intense heterogeneous enhancement following intravenous contrast (Fig. 2). His chest X-ray was normal. He was managed with a dorsal open distal radius curettage and stabilization of the fracture with an external minifixator. An articulating external fixator was used to allow early range of motion (Fig. 3). The surgery was performed by a senior orthopedic surgeon in a tertiary care hospital. Histology confirmed the diagnosis of a GCT. He was referred to the oncologist and was started on denosumab 120 mg every 4 weeks for a total duration of 5 months as an adjuvant therapy. The dental status was checked, and there was no evidence of osteonecrosis. There was a good response to treatment, and the external fixator was removed after clinical and radiological confirmation of union at 5 months (Fig. 1). He was started on hand physiotherapy and occupational therapy. At 42 months post-treatment, he had satisfactory hand function with no evidence of recurrence (Fig. 4). His wrist flexion and extension were 60°, while radial and ulnar deviation were 15° and 20°, respectively. He underwent an MRI at 1 year after completion of treatment, followed by serial 6-monthly X-rays, which did not reveal any recurrence. He is currently on regular surveillance to monitor for recurrence. Discussion GCT of the distal radius is a benign bone tumor; however, a higher incidence of local invasion and a higher recurrence rate have been reported. Preservation of structural and functional integrity while achieving adequate surgical resection is essential in the management. Therefore, in patients with GCT of the wrist, an optimized balance between oncological cure and functional outcome should be achieved [6]. Extracortical involvement and soft tissue expansion are thought to be common, which complicates primary resection of the tumor. In high grade tumours, despite extensive curettage and bone cementing, local recurrence has been reported as high as 88% [3]. En bloc resection may achieve better local control; however, it may compromise adequate reconstruction and long-term function. Therefore, extensive resection may not be suitable as a standard method of treatment for GCT of the distal radius [6]. Furthermore, some form of reconstruction is necessary after resection of the primary tumor. There are several options available for reconstruction, such as osteoarticular allograft, allograft arthrodesis, and vascularized or non-vascularized fibular autograft with or without arthrodesis [7]. In cases of pathological fractures, some form of fixation is necessary for stabilization. Successful treatment with open reduction and internal fixation has been described in a few cases [8,9]. However, due to its rarity, there are no long-term data, including a sufficient number of patients, on reconstruction techniques and outcomes of pathological fractures of the distal radius with GCT. Denosumab, a monoclonal antibody, was approved for the treatment of unresectable GCT or where surgical resection is likely to result in severe morbidity [10]. A systematic review by Jamshidi et al. elaborated on the use of denosumab as a neoadjuvant therapy in grade 2 and 3 lesions in order to improve resectability [2]. Recent studies have shown benefits of denosumab as neoadjuvant therapy for GCT in terms of pain reduction and tumor suppression.
However, the use of denosumab in patients with GCT in the distal radius is controversial [6]. That is because recent studies have shown that, although it helps in tumor regression, it had no benefit in improving recurrence-free survival [2,6]. Cell culture studies have shown that denosumab was able to clear the giant cells; however, the effect on neoplastic stromal cells was minimal, and these cells continued to proliferate [6]. Nevertheless, our patient was recurrence-free 42 months after treatment completion. The overall experience of the patient was satisfactory, with good functional outcomes. Our patient presented late, with a pathological fracture of the distal radius with GCT. The extensive involvement of the distal radius and local invasion precluded the use of internal fixation after thorough curettage. Therefore, we used an external minifixator to stabilize the fracture and started denosumab following an oncology opinion. Although there are reports of successful use of internal fixation, the successful use of an external fixator is not commonly described [8,9]. Our case report suggests that an external fixator with adjuvant denosumab may be used for selected patients with GCT who are not suitable for internal fixation. The external fixator was removed after satisfactory union was achieved. At 42 months of follow-up, there was no evidence of recurrence, and the functional outcome was satisfactory. However, the patient requires close long-term surveillance to monitor for recurrence. Conclusion We described the use of adjuvant denosumab following curettage and an external fixator for a late presentation of a pathological fracture of the distal radius with GCT. The extensive involvement of the distal radius and local invasion precluded the use of internal fixation after thorough curettage. External fixation with adjuvant therapy may be considered an option in patients who require extensive curettage and who are not suitable for internal fixation. However, cohort studies with long-term follow-up are necessary before it can be recommended in routine practice. Abbreviations GCT, giant cell tumor; MRI, magnetic resonance imaging. Sources of funding None declared. Ethical approval Our institution does not require ethical clearance for case reports. Consent Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request. Author contributions Authors AA, CP, AF, UJ, and RS contributed to the collection of information and writing of the manuscript. Authors UJ and RS contributed to the final approval of the manuscript. All authors read and approved the final version for publication.
2022-06-26T15:16:42.220Z
2022-06-01T00:00:00.000
{ "year": 2022, "sha1": "56cc6be623d1fc00f0df053ebc95eed26306d4fd", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.ijscr.2022.107342", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "44e29ccec1bb7d941533d620fcea5e09b8ceaa95", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
14970749
pes2o/s2orc
v3-fos-license
A new fingerprint matching approach using level 2 and level 3 features Fingerprint friction ridge details are generally described in a hierarchical order at three levels, namely, Level 1 (pattern), Level 2 (minutiae points), and Level 3 (pores and ridge shape). Although high resolution sensors (~1000 dpi) have become commercially available and have made it possible to reliably extract Level 3 features, most Automated Fingerprint Identification Systems (AFIS) employ only Level 1 and Level 2 features. As a result, increasing the scan resolution does not provide any matching performance improvement. We develop a matcher that utilizes Level 3 features, including pores and ridge contours, in conjunction with Level 2 features (minutiae) for matching. The aim is to reduce the error rates, namely FAR (False Acceptance Rate) and FRR (False Rejection Rate), in the existing minutiae-based systems. The hierarchical matcher has been tested on two diverse databases in the public domain. The obtained results are promising and verify our claim. Introduction Fingerprint recognition is a widely popular but complex pattern recognition problem. It is difficult to design accurate algorithms capable of extracting salient features and matching them in a robust way. There are two main applications involving fingerprints: fingerprint verification and fingerprint identification. While the goal of fingerprint verification is to verify the identity of a person, the goal of fingerprint identification is to establish the identity of a person. Specifically, fingerprint identification involves matching a query fingerprint against a fingerprint database to establish the identity of an individual. To reduce search time and computational complexity, fingerprint classification is usually employed to reduce the search space by splitting the database into smaller parts (fingerprint classes) [1]. There is a popular misconception that automatic fingerprint recognition is a fully solved problem since it was one of the first applications of machine pattern recognition (dating back more than 50 years). On the contrary, fingerprint recognition is still a challenging and important pattern recognition problem. The real challenge is matching fingerprints affected by:
- high displacement or rotation, which results in a smaller overlap between template and query fingerprints (this case can be treated as similar to matching partial fingerprints);
- non-linear distortion caused by the finger plasticity;
- different pressure and skin conditions; and
- feature extraction errors, which may result in spurious or missing features.
The approaches to fingerprint matching can be coarsely classified into three classes: correlation-based matching, minutiae-based matching, and ridge-feature-based matching. In correlation-based matching, two fingerprint images are superimposed and the correlation between corresponding pixels is computed for different alignments. During minutiae-based matching, the sets of minutiae are extracted from the two fingerprints and stored as sets of points in the two-dimensional plane. Ridge-feature-based matching is based on such features as the orientation map, ridge lines, and ridge geometry [2,3]. The information contained in a fingerprint can be categorized into three different levels, namely, Level 1 (pattern), Level 2 (minutiae points), and Level 3 (pores and ridge contours). The vast majority of contemporary automated fingerprint authentication systems (AFAS) are minutiae (Level 2 features) based [4].
Minutiae-based systems generally rely on finding correspondences between the minutiae points present in "query" and "reference" fingerprint images (a minutia in the "query" fingerprint and a minutia in the "reference" fingerprint are said to be corresponding if they represent the identical minutia scanned from the same finger). These systems normally perform well with high-quality fingerprint images and a sufficient fingerprint surface area. These conditions, however, may not always be attainable. As shown in Figure 2, even though these two fingerprint images are from the same individual, the relative positions of the minutiae are very different due to skin distortions. This distortion is an inevitable problem since it is usually associated with several parameters [28,29], including skin elasticity, nonuniform pressure applied by the subject, different finger placement on the sensor, etc. In many cases, only a small portion of the "query" fingerprint can be compared with the "reference" fingerprints; as a result, the number of minutiae correspondences might significantly decrease, and the matching algorithm would not be able to make a decision with high certainty. This effect is even more marked on intrinsically poor quality fingerprints, where only a subset of the minutiae can be extracted and used with sufficient reliability. Although minutiae may carry most of the fingerprint's discriminatory information, they do not always constitute the best trade-off between accuracy and robustness. This has led the designers of fingerprint recognition techniques to search for other fingerprint distinguishing features beyond minutiae, which may be used in conjunction with minutiae (and not as an alternative to them) to increase the system accuracy and robustness. It is a known fact that the presence of Level 3 features in fingerprints provides minute detail for matching and the potential for increased accuracy. Ray et al. [5] have presented a means of modeling and extracting pores (which are considered highly distinctive Level 3 features) from 500 ppi fingerprint images. This study showed that while not every fingerprint image obtained with a 500 ppi scanner has evident pores, a substantial number of them do. Thus, it is a natural step to try to extract Level 3 information and use it in conjunction with minutiae to achieve robust matching decisions. In addition, the fine details of Level 3 features could potentially be exploited in circumstances that require high-confidence matches. Moosavi [21] developed a matcher that utilizes Level 3 features, including pores and ridge contours, in conjunction with Level 2 features (minutiae) for matching. The types of information that can be collected from a fingerprint's friction ridge impression can be categorized as Level 1, Level 2, or Level 3 features, as shown in Figure 1. At the global level, the fingerprint pattern exhibits one or more regions where the ridge lines assume distinctive shapes characterized by high curvature, frequent termination, etc. These regions are broadly classified into arch, loop, and whorl. The arch, loop, and whorl can further be classified into various subcategories by examining the delta and core points. Features of Level 1 comprise these global patterns and morphological information. They alone do not contain sufficient information to uniquely identify fingerprints but are used for broad classification of fingerprints.
Level 2 features, or minutiae, refer to the various ways that the ridges can be discontinuous. These are essentially Galton characteristics, namely ridge endings and ridge bifurcations. A ridge ending is defined as the ridge point where a ridge ends abruptly. A bifurcation is defined as the ridge point where a ridge bifurcates into two ridges. Minutiae are the most prominent features, generally stable and robust to fingerprint impression conditions. Statistical analysis has shown that Level 2 features have sufficient discriminating power to establish the individuality of fingerprints [6]. Level 3 features are the extremely fine intra-ridge details present in fingerprints [7]. These are essentially the sweat pores and ridge contours. Pores are the openings of the sweat glands, and they are distributed along the ridges. Studies [8] and [12] have shown that the density of pores on a ridge varies from 23 to 45 pores per inch and that 20 to 40 pores should be sufficient to determine the identity of an individual. A pore can be either open or closed, based on its perspiration activity. A closed pore is entirely enclosed by a ridge, while an open pore intersects with the valley lying between two ridges, as shown in Figure 4. The pore information (position, number, and shape) is considered permanent, immutable, and highly distinctive, but very few automatic matching techniques use pores since their reliable extraction requires high resolution and good quality fingerprint images. Ridge contours contain valuable Level 3 information, including ridge width and edge shape. The various shapes on the friction ridge edges can be classified into eight categories, namely, straight, convex, peak, table, pocket, concave, angle, and others, as shown in Figure 5. The shapes and relative positions of ridge edges are considered permanent and unique. In the perpetual quest for perfection, a number of techniques devised for reducing FAR and FRR were developed, computational geometry being one such technique [10]. Matching is usually based on lower-level features determined by singularities in the finger ridge pattern known as minutiae. Given the minutiae representation of fingerprints, fingerprint matching can simply be seen as a point matching problem. As mentioned before, two kinds of minutiae are adopted in matching: ridge endings and ridge bifurcations. For each minutia, three features are usually extracted: the type, the coordinates (x0, y0), and the orientation θ. M. Poulos et al. developed an approach that constructs nested polygons based on pixel brightness; this method requires some image processing techniques [11]. Another geometric-topological structure, nested convex polygons (NCP) [13], was used in [17], where Khazaee et al. established a minutiae-based matching. This approach is invariant to translation and rotation. Local matching is first performed using the most interior polygon (the reference polygon), followed by global matching. The reference polygon is unique for every fingerprint; this uniqueness helps to reject non-matching fingerprints with minimal processing and time. Our approach adopts this point of view and extends the idea to pores and ridges among the Level 3 features. In this paper, we propose a new fingerprint matching method that utilizes Level 3 features (pores and ridge contours) in conjunction with Level 2 features (minutiae) for matching, using the most interior polygon (reference polygon) and applying matching at two levels.
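To make the minutia representation above concrete, the following is a minimal Python sketch (not from the paper) of how each minutia — type, coordinates (x0, y0), and orientation θ — can be stored, and of how a rigid transformation with unit scale can be applied to align a query set with a template. The record layout and all names are illustrative assumptions.

```python
import numpy as np

# Illustrative minutia record: coordinates, orientation, and type
# (0 = ridge ending, 1 = bifurcation); this encoding is an assumption.
Minutia = np.dtype([('x', 'f8'), ('y', 'f8'), ('theta', 'f8'), ('type', 'i4')])

def align(minutiae, dx, dy, phi):
    """Apply a rigid transformation (translation (dx, dy), rotation phi,
    unit scale, since query and template come from the same sensor) to a
    structured array of minutiae and return the transformed copy."""
    out = minutiae.copy()
    c, s = np.cos(phi), np.sin(phi)
    out['x'] = c * minutiae['x'] - s * minutiae['y'] + dx
    out['y'] = s * minutiae['x'] + c * minutiae['y'] + dy
    out['theta'] = np.mod(minutiae['theta'] + phi, 2.0 * np.pi)
    return out

# Usage: q = np.array([(10.0, 5.0, 0.3, 0)], dtype=Minutia); align(q, 2.0, -1.0, 0.1)
```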
The three main steps of our proposed method are: (i) minutiae extraction and Level 2 matching, (ii) pore extraction and Level 3 matching, and (iii) fingerprint recognition. In Section 2, we review some feature extraction methods. In Section 3, we introduce nested convex polygons [13]. In Section 4, we describe the matching approach. Then, Section 5 shows how an NCP can be constructed from minutiae and pores. A new approach for fingerprint matching using NCP is described in Section 6. Experimental results on FVC2006 and NIST 9 are presented in Section 7. The paper is concluded in Section 8. Feature Extraction To compensate for the variations in lighting, contrast, and other inconsistencies, three preprocessing steps are used: Gaussian blur, sliding-window contrast adjustment, and histogram-based intensity level correction. Gaussian blurring is used to remove any noise introduced by the sensor. The lighting inconsistencies are adjusted by using sliding-window contrast adjustment on the Gaussian-blurred image. To further enhance the ridges and valleys, a final intensity correction is made by using histogram-based intensity level adjustment. The preprocessed image is enhanced using a popular fingerprint enhancement technique by Sharat [14], which uses contextual filtering in the Fourier domain. The enhanced fingerprint image is not suitable for extracting pores, as the pore information is lost during enhancement. Thus, for pore extraction the preprocessed image is used. The minutiae points are extracted from the enhanced image by using NIST's minutiae extraction software (NFIS). The method first generates the image quality maps by checking the regions with high curvature, low flow, and low contrast. Then, a binary representation of the fingerprint is constructed. Minutiae are generated by comparing each pixel neighborhood with a family of minutiae templates. Finally, spurious minutiae are removed by using a set of heuristic rules. The NFIS also counts the neighboring ridges and assigns each minutia point a quality (in the range 0 to 100) determined from the image quality maps. The minutia representation generated by NFIS consists of the location information, orientation, minutia type (bifurcation or ending), and minutia quality [26]. The proposed minutia matcher does not differentiate between different minutiae types. This is because the minutia types are difficult to distinguish when the applied finger pressure during acquisition varies. Figure 9 shows one such example, where the same minutia extracted from two different impressions appears as a bifurcation in one image and as a ridge ending in the other.
[Figure 9: Effect of differential finger pressure [16].]
The Level 3 features are extracted only when the minutiae-based matching fails to match the query fingerprint image with the template image. The ridge contours are extracted from the enhanced fingerprint image by using the approach proposed by Jain et al. [9]. The ridge contours extracted using this approach are shown in Figure 10. For extracting pores, the technique proposed in [16] and [25] by the International Biometric Group is used. Figure 11 shows the pores extracted using this approach. The pore information thus extracted contains the location and orientation information of the pores.
[Figure 11: Pores extracted from an image scanned with a Cross Match Verifier 300 scanner at 500 dpi: (a) sample 500 dpi fingerprint; (b) extracted pores.]
Fingerprint Preprocessing Before extracting the proposed ridge features, we need to perform some preprocessing steps (see Figure 12).
These steps include typical feature extraction procedures as well as additional procedures for quality estimation and circular variance estimation. We first divide the image into 8 8 pixel blocks. Then, the mean and variance values of each block are calculated to segment the fingerprint regions in the image. We then apply the method described in [29] to estimate the ridge orientation and the ridge frequency is calculated using the method presented in [30]. The Gabor filter [31] is applied to enhance the image and obtain a skeletonized ridge image. Then, the minutiae (end points and bifurcations) are detected in the skeletonized image. The quality estimation procedure [32] is performed in order to avoid extracting false minutiae from poor quality regions and to enhance the confidence Level of the extracted minutiae set. Furthermore, in regions where ridge flows change rapidly, such as the area around a singular point, it is hard to estimate the ridge orientations accurately or to extract the thinned ridge patterns consistently. Therefore, to detect regions which have large curvature, we apply circular variance estimation [33]. The circular variance of the ridge flows in a given block is calculated as follows: Where and n represent the estimated orientation of the ith block and the number of neighboring blocks around the ith block, respectively. In our experiments, we use eight neighboring blocks. Quality estimation and circular variance values are used to avoid generating feature vectors in poor quality regions or in regions around singular points. Moreover, we adopt some post-processing steps [34] to remove falsely extracted ridges, such as short ridges and bridges. We can then extract the ridge structures consistently against various noise sources. Nested Convex Polygons Let { , ,..., } 12 S x x x n = denote n points in two dimensional spaces. We use Quick Hall algorithm iteratively to construct nested polygons [13]. The purpose of fingerprint matching is to determine whether two fingerprints are from the same finger or not. In order to this, the input fingerprint needs to align with the template fingerprint represented by its minutiae pattern [18]. The following rigid transformation can be performed: represent a set of rigid transformation parameters: (scale, rotation, translation). In our research, we assume that the scaling factor between input and template image is identical since both images are captured with the same device. Table 1 shows features that we use in Level 2 matching and table2 shows the features we use in Level 3 matching. In the table1, Length is the length of edge; Ө1 is the angel between the edge and the orientation field at the first minutiae point; Type1 denote minutiae type of the first minutiae [10]. Using onion layer algorithm we construct nested polygons; for every fingerprint we store edge properties that mentioned in table1 of the reference polygon, plus its depth, and Minutiae points features that mentioned in first row of table1 in database as template (fingerprint Registration). Also, we construct nested polygons for pores and for every fingerprint store edge properties that mentioned in table2 of the reference polygon, plus its depth, and pore points features that mentioned in first row of table2 in database as template (fingerprint Registration). 
In Figure 14, the polygon at depth 6 is the reference polygon used for Level 2 matching in order to calculate the rigid transformation parameters; these parameters are applied to all remaining minutiae of the input fingerprint in order to align it with the template fingerprint. Then Level 3 matching is employed, and if the matching score is higher than a predefined threshold, the two fingerprints are declared identical; otherwise, they are from different fingers. The purpose of fingerprint matching is to determine whether two fingerprints are from the same finger or not. At this step, the input fingerprint image goes through preprocessing, NCP construction, and class determination, as in the registration step. Afterward, depending on identification (1→n matching) or verification (1→1 matching), we perform matching. In verification mode, we do not have to determine the class of the fingerprint; we simply retrieve the template from the database and perform matching. But the purpose of identification is to identify an unknown person, so in this mode, the class of the input fingerprint is detected, and matching of the unknown fingerprint against the templates of that class continues until a match occurs or the fingerprint is finally rejected. We divided registration into two steps. First, in the preprocessing step, some image processing techniques customized for fingerprints, such as segmentation, normalization, and Gabor filtering, are applied to the input fingerprint image [19]. Next, binarization and thinning are employed, and valid minutiae are extracted from the thinned image. Second, we apply the onion layer algorithm and construct the NCPs. We store the invariant features (Table 1) for the reference polygon, plus its depth, and the variant features for the other polygons in the database as a template. We also follow the same procedure for pores, applying the onion layer algorithm, constructing NCPs for them too, and storing the invariant features (Table 2). Matching between the reference polygons (RPs) Pi of the input and Pt of the template then proceeds stepwise (steps 1 and 2 being the preprocessing and NCP construction described above). 3. Two edges are considered corresponding when their lengths and their minutia angles and edge orientations agree within T1 and T2, which are respectively the thresholds of the length of edges and of the minutia angle and orientation of the edge. 4. Repeat step 3 until two adjacent edges in Pi are found that have two corresponding adjacent edges in Pt. If no such couple of adjacent edges exists in the two RPs, the matching is rejected at this step. 5. Using such a couple of adjacent edges, a triangle is constructed as the Reference Triangle (RT). One more step is needed to ensure that the two triangles correspond exactly, which is satisfied when the angle between the two adjacent edges agrees within T3, the threshold of the angle between the two adjacent edges in the two RTs. 6. The rigid transformation parameters are computed from the corresponding RTs and applied to the remaining minutiae of the input fingerprint. 7. Two minutiae are considered matched when their distance and orientation difference are below r0 and θ0, where r0 and θ0 are the thresholds of distance and orientation, respectively. 8. The matching score p is then computed, where m and q are the numbers of minutiae in the two fingerprints and n is the number of matched minutiae. 9. If p is greater than a predefined value, the two fingerprints are the same; otherwise, go back to step 3. This iteration continues until either no candidate exists at step 4, or the match is accepted at step 9. Experimental Results We perform experiments using the FVC2006 and NIST 9 fingerprint databases to evaluate the correctness of the algorithm, and we report the results below. The experiment uses DB1_a in the FVC2006 database [20]. Each database contained 800 fingerprints from 100 different fingers, and in each database dry, wet, scratched, distorted, and markedly rotated fingerprints were adequately represented. The performance of the algorithm is measured in terms of receiver operating characteristics (ROC). For the NIST 9 database, at ~0% false acceptance rate (FAR) the genuine acceptance rate (GAR) observed is ~74%, and at ~11% FAR, the GAR is ~92%. For the FVC2006 database, at ~0% FAR the GAR observed is 71%, and at ~18% FAR, the GAR is 89%.
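To make steps 3 and 8 of the matching procedure above concrete, the fragment below sketches the threshold-based edge comparison and a normalized matching score. The 2n/(m + q) normalization is a common choice stated here as an assumption, not the paper's exact formula; all names and the edge encoding are illustrative.

```python
import numpy as np

def edges_match(edge_a, edge_b, t_len, t_ang):
    """Candidate-edge test in the spirit of step 3: two reference-polygon
    edges correspond when their lengths and their (rotation-invariant)
    angles agree within the thresholds.  Each edge is a (length, angle)
    pair; the angle difference is wrapped to [-pi, pi] before comparison."""
    d_len = abs(edge_a[0] - edge_b[0])
    d_ang = abs(np.angle(np.exp(1j * (edge_a[1] - edge_b[1]))))
    return d_len < t_len and d_ang < t_ang

def match_score(n_matched, m, q):
    """Assumed normalization of the matched-minutiae count n against the
    sizes m and q of the two minutiae sets (a common choice in minutiae
    matching, used here only as a placeholder for step 8)."""
    return 2.0 * n_matched / (m + q)

# Usage: accept the pair when match_score(n, m, q) exceeds the predefined
# threshold of step 9; otherwise continue searching candidate edges.
```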
The proposed algorithm yields better GAR at low FAR, with reduced computational complexity. Conclusion In this paper, we have designed a new method of fingerprint matching based on the onion layer technique from computational geometry. This matching approach uses Level 3 features (pores and ridge contours) jointly with Level 2 features (minutiae) for matching. Using the onion layer technique, we assemble nested convex polygons from the minutiae and, after characterizing them by their polygon properties, perform fingerprint matching; the innermost polygon is used to compute the rigid transformation parameters and to perform Level 2 matching, after which Level 3 matching is employed. The theoretical analysis of computational complexity shows that the NCP approach to fingerprint matching can be considerably more efficient than conventional minutiae-based matching methods. The three principal phases of the presented fingerprint matching are: minutiae extraction and Level 2 matching, pore extraction and Level 3 matching, and, finally, fingerprint recognition. The most important characteristics of the presented algorithm are rapid detection, very fast rejection, and higher accuracy compared to typical minutiae matching. Another advantage of the presented technique is that no image processing techniques are required during matching. Our next objective is to consider new computational geometry designs for matching and classification that are more resistant to low quality and noise in fingerprints.
2016-04-07T00:00:00.000Z
2011-05-16T00:00:00.000
{ "year": 2011, "sha1": "2508ebf6f0014ed9634626e5c4005d214afe4999", "oa_license": "CCBY", "oa_url": "https://zenodo.org/record/3813426/files/10)%20A%20new%20fingerprint%20matching%20approach%20using%20level%202%20and%20level%203%20features.pdf", "oa_status": "GREEN", "pdf_src": "ElsevierPush", "pdf_hash": "11674b308f6bff64e0285b348a0f35971bf5ef39", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
26428660
pes2o/s2orc
v3-fos-license
On the Efficient Broadcasting of Heterogeneous Services over Band-Limited Channels: Unequal Power Allocation for Wavelet Packet Division Multiplexing Multiple transmission of heterogeneous services is a central aspect of broadcasting technology. Often, in this framework, the design of efficient communication systems is complicated by stringent bandwidth constraints. In wavelet packet division multiplexing (WPDM), the message signals are waveform-coded onto wavelet packet basis functions. The overlapping nature of such waveforms in both time and frequency allows improving the performance over the commonly used FDM and TDM schemes, while their orthogonality properties permit the message signals to be extracted by a simple correlator receiver. Furthermore, the scalable structure of WPDM makes it suitable for broadcasting heterogeneous services. This work investigates unequal error protection (UEP) of data which exhibit different sensitivities to channel errors, to improve the performance of WPDM for transmission over band-limited channels. To cope with the bandwidth constraint, an appropriate distribution of power among waveforms is proposed, driven by the channel error sensitivities of the carried message signals in the case of Gaussian noise. We address this problem by means of genetic algorithms (GAs), which allow a flexible suboptimal solution with reduced complexity. The mean square error (MSE) between the original and the decoded message, which has a strong correlation with subjective perception, is used as an optimization criterion. INTRODUCTION Unequal error protection (UEP) is a channel coding technique used to increase the robustness of data that exhibit different sensitivities to transmission errors. This is often the case for digital multimedia compressed streams such as JPEG2000 [1] or MPEG [2]. Due to the extensive use of predictive and variable length codes, a compressed stream is in general more vulnerable to data losses and transmission errors, which can desynchronize the decoder, causing spatial and temporal error propagation [3]. In broadcasting, a feedback channel is not available, thus UEP relies on differentiated forward error correction (FEC) coding [4]: depending on their sensitivities to channel errors, data are protected with codes with higher or lower error correcting capabilities. Reed-Solomon (RS) or Turbo Codes (TC) are frequently used [5,6], but better-performing techniques, based on rate-compatible (RC) codes [7], have also been proposed by the research community. Unequal power allocation (UPA) is an alternative UEP technique which is deployed when, for several reasons, FEC coding is not efficient [8]. For broadcasting multiplexed communications (e.g., DVB, DAB), for instance, the available channel bandwidth per service is a key constraint and the use of FEC-based UEP schemes is barely suitable. In fact, FEC is a coding scheme of a discrete nature. It is subject to constraints which restrict the protection level (i.e., the code rate) to a set of fixed values. Therefore, the overhead introduced by FEC codes can be a significant limitation for the efficient use of the bandwidth. On the other hand, UPA aims at distributing the available power budget over the parts of the stream, according to their sensitivities to channel error, to achieve improved final quality of the transmitted data without any increase of the transmission bandwidth. Basically, UPA is performed by assigning different power weights to the data according to their "importance" (i.e., channel error
sensitivities) within the stream: higher transmission power is assigned to more sensitive data. In this respect, UPA is a "continuous" process, in the sense that the weights are chosen in a real-valued set with an accuracy which can be selected a priori and is in theory unbounded. Therefore, compared with FEC, UPA allows more flexibility in the protection of sensitive data.

Wavelet packet modulation for orthogonally multiplexed communication was introduced as a promising technique to improve the performance of conventional FDM and TDM schemes in both Gaussian and impulsive noise [9][10][11]. The properties of wavelet packets are exploited to embed data into waveforms which are mutually orthogonal in both time and frequency. Several studies conducted on this technology have shown that appropriate design allows minimizing the energy of timing-error interference, which impairs conventional TDM systems [10]. The overlapping bandpass nature of the transmission pulses (i.e., wavelets) allows better exploitation of the bandwidth with respect to classical FDM [10], and it also intrinsically mitigates fading effects [12]. Moreover, due to the scalability of its structure, wavelet packet multiplexing permits data with different formats (e.g., JPEG2000 and MPEG-2) to be multiplexed together, making it a desirable choice for broadcasting heterogeneous services.

In this work, a UPA scheme for wavelet packet division multiplexing (WPDM) is proposed. UPA applied to WPDM consists in assigning different power to wavelet packets according to the importance of the message signals they carry. In other words, considering a generic bit pattern, individual bits are weighted differently, taking the channel conditions (i.e., the signal-to-noise ratio (SNR)) into account, and transmitted on separate wavelet packets. As to the optimization, we use the mean square error in the parameter domain, with u(τ) and û(τ) being the transmitted and decoded parameter, respectively. The nontrivial complexity of the problem does not allow a closed-form analytical solution, which thus has to be sought by a numerical approach. In the literature, solutions based on the gradient algorithm have been proposed [8]. The complexity of such optimization methods increases with the size (i.e., number of bits) of the frame to be transmitted. In this work, we address UPA by exploiting the potential of the genetic algorithm (GA) to reduce the computational complexity. The use of a GA for the weight optimization is one of the novel aspects of this work. A genetic algorithm [13] is a search technique used in computing to find exact or approximate solutions to optimization and search problems. GAs are extensively used in the literature in different application fields of communication engineering, such as network design and unicast and multicast routing [14][15][16]. They allow finding an iterated numerical solution to complex problems, with accuracy dependent on the number of iterations selected. The major advantage of genetic algorithms is their flexibility and robustness as a global search method. They can deal with highly nonlinear problems and nondifferentiable functions, as well as functions with multiple local optima. They are also readily amenable to parallel implementation, which makes them appropriate for real-time adaptive communications, extensively used for reconfigurable broadcasting services.
Results show that the proposed UPA-WPDM scheme increases the resilience of data which exhibit different sensitivities to channel errors during their transmission over an AWGN channel. The performance improvement in terms of quality achieved in the parameter domain (i.e., MSE_u) has been proved against an equally power-distributed WPDM system and FEC-based UEP systems under a similar bandwidth constraint. Moreover, the bandwidth gain for a target quality (i.e., fixed MSE_u) at a fixed bit error rate has been evaluated against UEP FEC-based techniques.

In the following section, an overview of the WPDM technology is given. Section 3 formally defines UPA for WPDM by describing in detail the weight optimization procedure and the proposed GA-based solution. The performance of the proposed UPA-WPDM scheme on the Gaussian channel is analyzed and compared to equally power-distributed equivalent schemes and to channel-coding UEP systems in Section 4. Conclusions follow in Section 5.

Let g_0[n] be a unit-energy real causal FIR filter of length N which is orthogonal to its even translates, that is, Σ_n g_0[n] g_0[n − 2m] = δ[m], where δ[m] is the Kronecker delta, and let g_1[n] be its (conjugate) quadrature mirror filter (QMF). Provided g_0[n] satisfies some mild technical conditions [17,31], we can use an iterative algorithm to find the function φ_01(t) satisfying the two-scale equation φ_01(t) = √2 Σ_n g_0[n] φ_01(2t − nT_0) for an arbitrary interval T_0. Subsequently, we can define the family of functions φ_lm, l ≥ 0, 1 ≤ m ≤ 2^l, in a (binary) tree-structured manner, with the low-pass branch generated by g_0[n] and the high-pass branch by g_1[n], where T_l = 2^l T_0. For any given tree structure, the functions at the leaves of the tree form a wavelet packet. They have a finite duration, (N − 1)T_l, and are self- and mutually orthogonal at integer multiples of dyadic intervals; hence they are a natural choice for scalable multiplexing applications [9,10]. Figure 1 shows the wavelet packet functions (a) and the relevant power spectra (b) for a three-level (i.e., size-eight wavelet packet) decomposition with standard 12-tap Daubechies filters [23].

In WPDM, the binary messages x_lm[n] have a polar representation (i.e., x_lm[n] = ±1), are waveform coded by pulse amplitude modulation (PAM) of φ_lm(t − nT_l), and are then added together to form the composite signal s(t). WPDM can be implemented using a transmultiplexer and a single modulator [10], as Figure 2 illustrates for a two-level decomposition. In this case, the composite signal can be expressed through the equivalent filters, with Γ being the set of terminal index pairs and f_lm[k] the equivalent sequence filter from the (l, m)th terminal to the root of the tree, which can be found recursively from (2). The original message can be recovered from x_01[k] by the corresponding inverse relation. An example of a WPDM tree for a system that can be used for broadcasting heterogeneous services is shown in Figure 3(a). In this case, the transmission system uses two wavelet packets composed of two and four waveforms (i.e., wavelets), respectively. In Figure 3(b), the relevant subband structure is displayed: the total bandwidth is equally shared between the two packets, but a different partitioning (two against four) is implemented within each packet. Differently formatted streams can be transmitted by associating them with the appropriate wavelet packets.
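The tree-structured construction described above is straightforward to prototype. Below is a minimal numpy sketch of the binary QMF cascade, assuming a full three-level tree and unit-energy filters; PyWavelets is used only to fetch the 12-tap Daubechies coefficients, and the function names are illustrative rather than taken from the paper.

import numpy as np
import pywt  # used only to fetch the 12-tap Daubechies low-pass filter

# g0: unit-energy low-pass prototype; g1: its quadrature mirror filter.
g0 = np.array(pywt.Wavelet("db6").dec_lo)
g1 = np.array([(-1) ** n * g0[len(g0) - 1 - n] for n in range(len(g0))])

def upsample(h):
    # Dyadic upsampling: insert a zero between consecutive taps.
    out = np.zeros(2 * len(h) - 1)
    out[::2] = h
    return out

def equivalent_filters(levels):
    # Equivalent leaf filters f_lm of a full binary QMF tree: each extra
    # level replaces a leaf filter f by the cascade (upsampled f)
    # convolved with g0 (low branch) or g1 (high branch).
    filters = [np.array([1.0])]  # root: identity
    for _ in range(levels):
        nxt = []
        for f in filters:
            fu = upsample(f)
            nxt.append(np.convolve(fu, g0))  # low-pass branch
            nxt.append(np.convolve(fu, g1))  # high-pass branch
        filters = nxt
    return filters

# Three-level tree -> a size-eight wavelet packet, as in Figure 1.
leaves = equivalent_filters(3)
# Orthonormality sanity check: each equivalent filter should have unit energy.
print([round(float(np.sum(f ** 2)), 6) for f in leaves])

With N = 12 and three levels, each equivalent filter in this sketch has (2^3 − 1)(N − 1) + 1 = 78 taps, reflecting the finite duration of the leaf waveforms noted above.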
UNEQUAL POWER ALLOCATION FOR WPDM To model, without loss of generality, a generic bitstream exhibiting different error sensitivities to channel conditions, we consider a discrete periodic (period τ) memoryless source S: ∀τ → u(τ) and an analog-to-digital process AD mapping each parameter u(τ) onto an M-bit pattern x(τ). Each bit x^(i)_k is then multiplied by the specific weight w_i ∈ R^+ of the diagonal matrix W = diag(w_1, w_2, ..., w_M). The weighted bit pattern y(τ) = W·x(τ) is then transmitted by an Mth-order WPDM over a channel affected by additive white Gaussian noise (AWGN) n(t) with zero mean and variance N_0/2. The signal at the receiver front end is r(t) = s(t) + n(t), with s(t) as in (1) and T_0 = 2^(−l)τ. After demodulation, the received vector is z(τ) = y(τ) + n_rel(τ), where n_rel = (n^(1)_rel, n^(2)_rel, ..., n^(M)_rel) represents the demodulated noise along the M signal message components (i.e., the relevant noise). Following a decision based on the maximum likelihood (ML) criterion, the estimate û(τ) is produced by the inverse digital-to-analog (DA) process. A sketch of the system is depicted in Figure 4.

Weight optimization Considering the bipolar binary representation x^(i)_k = ±1, if bits in x(τ) are inverted due to AWGN, a wrong decision x̂(τ) is made at the receiver, thus producing a distortion d(τ) = u(τ) − û(τ). The aim of the optimization process is to calculate the optimal weights in the sense of a minimized expected value E{d^2(τ)}. Assuming ergodicity, it is possible to calculate E{d^2} as in (5), where d_σ,η = u_σ − u_η are the different possible parameter distortions, P(x_k) is the occurrence probability of the reproduction level u_k, and P(x̂_h | x_k) are the transition probabilities between transmitted and received bit patterns. Due to the orthogonality properties of the WPDM waveforms and to the independence of the noise samples, the transition probabilities factor into products of per-bit error probabilities [4,32]. Since the energy transmitted on the ith waveform is w_i^2 E_b, we impose the constraint (7) on the weights so that the total transmitted power is unchanged. WPDM is based on binary amplitude modulation; thus, the bit error probabilities in (6) are given by the usual Gaussian tail expression [33]. Mathematically, the optimization problem is to minimize (5) under the constraint (7); a numerical sketch of this fitness evaluation is given after this passage. In other words, UPA raises (w_i > 1) the immunity to channel noise of the more significant bits, paying as a counterpart a lower robustness (w_i < 1) of the less significant ones, to achieve improved average performance in the transmission of the parameter u(τ) in the sense of minimum expected distortion. The complexity of the above optimization problem, which increases with the frame size M, does not allow closed-form solutions. Therefore, to identify a solution, we use a numerical approach based on genetic algorithms (GAs).

Genetic algorithms (GAs) GAs are implemented as a computer simulation in which a population of abstract representations (chromosomes) of candidate solutions (genes) to an optimization problem evolves toward better solutions. The evolution usually starts from a population of randomly generated chromosomes and proceeds in generations. In each generation, the fitness of every chromosome in the population is evaluated, and multiple chromosomes are stochastically selected from the current population (based on their fitness) and modified (mutated or recombined) to form a new population. The new population is then used in the next iteration of the algorithm.
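To make the objective concrete, here is a minimal sketch of a fitness evaluation consistent with the description above, under two stated assumptions that are not in the original text: the parameter is natural-binary coded, and multi-bit error patterns are neglected (reasonable at low error rates), so that E{d^2} reduces to a weighted sum of single-bit-flip distortions with p_i = Q(w_i √(2E_b/N_0)).

import numpy as np
from scipy.stats import norm

def q_func(x):
    # Gaussian tail function Q(x) = P(N(0,1) > x).
    return norm.sf(x)

def bit_error_probs(w, ebn0_linear):
    # Weighted antipodal signaling: the i-th waveform carries energy
    # w_i**2 * Eb, so its error probability is Q(w_i * sqrt(2 Eb/N0)).
    return q_func(w * np.sqrt(2.0 * ebn0_linear))

def expected_distortion(w, ebn0_linear, m, step=1.0):
    # Illustrative stand-in for Eq. (5): natural binary coding is assumed
    # (flipping bit i changes the value by step * 2**(m-1-i)), and joint
    # multi-bit errors are neglected at low error rates.
    p = bit_error_probs(w, ebn0_linear)
    deltas = step * 2.0 ** (m - 1 - np.arange(m))  # MSB first
    return float(np.sum(p * deltas ** 2))

m = 8
ebn0 = 10.0 ** (6.0 / 10.0)           # Eb/N0 = 6 dB
epa = np.ones(m)                      # equal power allocation
upa = np.array([1.4, 1.3, 1.2, 1.1, 0.9, 0.8, 0.7, 0.6])
upa *= np.sqrt(m / np.sum(upa ** 2))  # enforce sum(w_i**2) = M
print(expected_distortion(epa, ebn0, m), expected_distortion(upa, ebn0, m))

Under these assumptions, tilting power toward the most significant bits lowers the expected distortion even though the error probability of the least significant bits rises, which is exactly the trade-off the optimization exploits.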
In the proposed system, the chromosomes are defined as arrays of M genes w_i ∈ R^+. The range of possible values of w_i is constrained by (7). An initial population {INIT} of L chromosomes is randomly selected. The fitness function is defined as in (5). Two operations are allowed to determine the evolution of the initial population: crossover (with probability P_cross), used to interchange the elements of two chromosomes, and mutation (with probability P_mut), which modifies the value of one or more genes within a chromosome with the aim of leading the search out of local optima. In particular, the fittest part of the population {BEST} is selected and directly inserted into the new generation, while the rest of the population {WORST} is discarded and replaced by a subpopulation created by means of the crossover and mutation operators. If two identical chromosomes result after the crossover and mutation operations, two individuals are randomly generated. The termination condition is satisfied once either the algorithm reaches a selected number of iterations (IT) or the fitness function maintains the same value for IT_MAX iterations. At the end of the process, the chromosome with the lowest score in the fitness function (i.e., the lowest distortion on the reconstructed frame) is selected for the transmission. Figure 5 gives an example of the crossover and mutation operations.

In this particular case, chromosomes are composed of four genes; at iteration k + 1, the crossover operator swaps the first two genes of chromosomes p and q as they were at iteration k, whereas the mutation varies chromosome r by multiplying the second and fourth genes by the quantity Δ_i ∈ R^+, with i = {2, 4}, respectively. The flowchart of the proposed GA is shown in Figure 6.

The accuracy of such an approach is strictly dependent on the values of IT and IT_MAX, whereas the complexity of the algorithm also depends on the definition of the chromosomes, on the size L of the initial population, and on the P_cross and P_mut probabilities. Chromosomes are arrays of genes which are real values. The higher the precision of the representation of the genes (i.e., the number of decimal digits used to approximate real values), the higher the accuracy achieved by the UPA, but also the higher the complexity of the algorithm. Similarly, large populations guarantee higher performance, but also lead to time-consuming processing. A critical matter is the selection of the P_cross and P_mut probabilities: high values can cause instability of the GA, which could diverge, whereas low values likely lead to slow convergence.
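Putting the pieces together, the loop below is a minimal GA sketch following the {BEST}/{WORST} scheme just described. The renormalization step stands in for the power constraint (Eq. (7) itself is not reproduced here; constant total power, with the sum of the squared weights equal to M, is assumed), and the parameter names mirror the text (P_cross, P_mut, Δ, IT, IT_MAX).

import numpy as np

rng = np.random.default_rng(0)

def renormalize(w, m):
    # Stand-in for constraint (7): keep the total transmitted power fixed.
    return w * np.sqrt(m / np.sum(w ** 2))

def ga_upa(fitness, m, pop_size=32, it=1000, it_max=100,
           p_cross=0.5, p_mut=0.3, delta=0.25, elite_frac=0.4):
    pop = [renormalize(rng.uniform(0.2, 2.0, m), m) for _ in range(pop_size)]
    best, stall = np.inf, 0
    n_elite = max(2, int(elite_frac * pop_size))
    for _ in range(it):
        scores = np.array([fitness(w) for w in pop])
        order = np.argsort(scores)                 # lower score = fitter
        elite = [pop[i] for i in order[:n_elite]]  # {BEST}: kept as-is
        children = []                              # rebuild {WORST}
        while len(children) < pop_size - n_elite:
            pa = elite[rng.integers(n_elite)].copy()
            pb = elite[rng.integers(n_elite)].copy()
            if rng.random() < p_cross:             # swap a leading segment
                cut = max(1, int(0.4 * m))
                pa[:cut], pb[:cut] = pb[:cut].copy(), pa[:cut].copy()
            for child in (pa, pb):
                if rng.random() < p_mut:           # perturb ~10% of genes
                    idx = rng.choice(m, max(1, int(0.1 * m)), replace=False)
                    child[idx] *= 1.0 + delta * rng.choice([-1.0, 1.0], idx.size)
                children.append(renormalize(child, m))
        pop = elite + children[: pop_size - n_elite]
        if scores[order[0]] < best - 1e-12:        # progress made?
            best, stall = scores[order[0]], 0
        else:
            stall += 1
            if stall >= it_max:                    # flat for IT_MAX rounds
                break
    scores = np.array([fitness(w) for w in pop])
    i = int(np.argmin(scores))
    return pop[i], float(scores[i])

Paired with the expected_distortion sketch given earlier, a call such as ga_upa(lambda w: expected_distortion(w, ebn0, m), m) returns a weight vector analogous to those discussed for Figures 10 and 11.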
RESULTS A WPDM system which deploys two packets of size M = {4, 8} is used to multiplex two streams having the same rate but different formats (see Figure 7). Standard Daubechies minimum-phase scaling filters of length N = 12 [31], which guarantee short delay and a substantial capacity advantage over conventional FDM systems [10], are deployed (Figure 7: UPA-WPDM system for broadcasting two heterogeneous services). At first, we ran some preliminary tests to analyze the importance of the GA parameters. The crossover operator was allowed to interchange int[0.4·M] genes, whereas the mutation occurred on int[0.1·M] genes, int[·] being the operator which produces the integer part of the argument. In other words, at each iteration, a maximum of 40% of the chromosome parents could appear in the next generation of chromosomes, and only 10% of a chromosome could vary. Accordingly, L was varied in the range {8, 16, 32, 64, 128}, and P_cross and P_mut in the ranges 0.3-0.7 and 0.01-0.3, respectively. Finally, for mutation, Δ_i was varied within the range {0.1, 0.2, 0.3}·w_i. The maximum difference in terms of fitness function value among all the solutions was observed to be less than 5%. Therefore, the following considerations can be made: large populations lead to better solutions at the expense of a higher processing time; the P_mut probability is suggested to be set equal to or higher than 0.1, and Δ_i above 0.2·w_i, to avoid an excessive number of iterations; the P_cross probability does not have significant effects in the range used. Based on the outcome of the preliminary tests on GA behaviour applied to the UPA problem, in the following experiments the {INIT} population was composed of L = 32 chromosomes, eight decimal digits were used to represent the genes (i.e., w_i), the probabilities were P_mut = 0.3 (with Δ_i = 0.25·w_i) and P_cross = 0.5, whereas IT_MAX = 100 and IT = 1000. For the sake of clarity, Table 1 summarizes the parameter settings for the experiments.
The quality achieved in the parameter domain is expressed in terms of the signal-to-noise ratio (SNR_u) measured in decibels, SNR_u[dB] = 10·log10[E{u^2(τ)}/MSE_u], with MSE_u as in (1). SNR_u[dB] is evaluated at varying average bit error probabilities P_b = (1/M) Σ_{i=1}^{M} P_b^(i), with P_b^(i) as in (8). We have compared the proposed UPA with a benchmark equal power allocation (EPA) WPDM system and a UEP scheme based on FEC coding. In the latter system, we deployed Reed-Solomon (RS) codes [33]. RS codes are nonbinary cyclic codes with symbols made up of m-bit sequences, where m is any positive integer having a value greater than 2. RS(n, k) codes on m-bit symbols exist for all n and k for which 0 < k < n < 2^m + 2, where k is the number of data symbols being encoded and n is the total number of code symbols in the encoded block. The error-correcting capability of a generic RS(n, k) code is t = (n − k)/2. UEP is implemented by protecting data with codes of higher or lower code rate R_c^i = k_i/n_i. As the channel error rate varies, for every WPDM channel an appropriate RS(n_i, k_i) code is selected for data protection according to the channel error sensitivity of the data it carries. More significant data (e.g., the MSB) are protected by codes with higher error-correcting capabilities (i.e., lower code rates). In particular, for any average error rate P_b, the optimization procedure aims at selecting the M codes so that SNR_u is maximized under the bound of a constant average code rate. For our experiments, we selected m = 8 and R_c = 32/38 = 0.84, which corresponds to an increase of the total bandwidth of about 16%. To reduce the complexity of the coding process, we fixed the number of code symbols in the encoded block at n_i = 38. The average error-correcting capability of the system is therefore t = (38 − 32)/2 = 3 symbols per codeword; in other words, on average, such a scheme is able to correct up to 3 erroneous symbols per codeword. Tables 2 and 3 report the details (i.e., actual code rate R_c^i and error-correcting capability t_i) of the codes used at P_b = 10^−3 for the transmission of u_1(τ) and u_2(τ), respectively.

In Figures 8 and 9, we refer to the UEP RS-based coding as RS(38,32). The analysis of the graphs reveals that UPA outperforms EPA over the whole variation range of the average bit error probability within the transmitted frame, with a peak gain of 6.84 dB at P_b = 10^−3 in the case of u_2(τ). The same behaviour is noticeable with respect to RS coding for P_b > 10^−4, with a peak gain of 3.57 dB at P_b = 3.5 × 10^−3 for u_2(τ). For P_b < 10^−5, all the systems perform similarly, with a slight prevalence of the RS coding which is more evident for u_1(τ). The superior performance in the case of the u_2(τ) transmission can be justified by the higher precision obtained by a finer power distribution performed with eight weights, with respect to the coarser allocation based on only four weights for u_1(τ). More generally, the UPA prevalence is due to the capability of the optimization procedure to obtain high accuracy by selecting weights in a range of real values. Figures 10 and 11 show how, for severe channel conditions, the weights relevant to the most significant bits (i.e., w_11, w_21, and w_22) are emphasized with respect to all the others. For P_b approaching 10^−3, a decrease of the above weights corresponds to an increase of w_12 and w_23, which also become higher than 1. For P_b < 10^−5, all the weights converge to an equal unitary value, while still remaining slightly different for P_b > 10^−6.
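The RS bookkeeping behind this comparison is simple to verify; the short sketch below reproduces the numbers quoted above (the per-channel (n_i, k_i) assignments themselves are in Tables 2 and 3 and are not reproduced here).

# Average RS code parameters used for the FEC-based UEP benchmark.
m = 8            # bits per RS symbol
n, k = 38, 32    # code symbols per block / data symbols per block

rate = k / n                 # average code rate R_c
t = (n - k) // 2             # symbol-error correcting capability
overhead = (n - k) / n       # share of the block spent on parity

print(f"R_c = {rate:.2f}, t = {t} symbols, parity overhead = {overhead:.1%}")
# -> R_c = 0.84, t = 3 symbols, parity overhead = 15.8% (the "about 16%")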
Figure 12 shows the percentage bandwidth gain achieved by UPA with respect to UEP based on RS coding for a target quality (i.e., fixed SNR_u[dB]) on the transmitted parameters u_1(τ) and u_2(τ), at fixed P_b = 10^−3, for the WPDM system used for the experiments, as represented in Figure 7. A minimum bandwidth gain above 20% is noticeable, and similarly high variations are observed in both cases. This is due to the discrete nature of RS codes, which are constrained to only a definite set of possible code rates. On the other hand, UPA is a continuous process which guarantees more flexibility in the protection of sensitive data.

In order to assess the suitability of the proposed scheme for real applications, such as audio and video broadcasting, as a further test we considered the multiplexed transmission of a standard image and a stereo audio sequence. Referring to the system proposed in Figure 7, we used the well-known image "Lena" of size 512 × 512 in RGB format, coded at 8 bpp per color component (see Figure 13), as transmission source S_1. We measured the quality of the reconstructed image by the standard PSNR metric expressed in decibels. On the other hand, we ripped 5 seconds of a stereo audio CD signal sampled at 44.1 kHz and coded at 16 bits per sample, used as source S_2. For the evaluation of the quality of the received audio signal, we used the perceptual evaluation of audio quality (PEAQ) method [34]. PEAQ is a technique recommended by the ITU which evaluates the quality of an audio signal by a single number, called the objective difference grade (ODG), which varies within the range [−4, 0], with 0 being the highest quality score. PEAQ has proven to outperform conventional metrics based on the mean square error in evaluating the performance of conventional audio codecs [34]. Table 4 shows the results achieved in the case of P_b = 10^−3. The quality of the reconstructed image is slightly below 30 dB, whereas the PEAQ measured on the received audio sequence is just above −2.9. This result is in line with the typical performance of low-bit-rate audio and video codecs. For the transmission of audio/video at a rate of 64 kbit/s, MP3/MPEG-4 codecs achieve a PSNR approaching 30 dB for the reconstructed frames [2] and a PEAQ of around −3.36 for the audio sequence [35]. Since conventional DAB and DVB broadcasting systems work at P_b ≈ 10^−3, the proposed system could be an alternative solution for the broadcasting of heterogeneous multimedia contents under extremely hard transmission conditions, when only modest quality requirements are set.
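For reproducibility, the image-quality figure of merit used here is plain PSNR; the sketch below shows one standard way to compute it (the random arrays merely stand in for the transmitted and received 512 × 512 RGB test image, and PEAQ is omitted since it requires an ITU reference implementation).

import numpy as np

def psnr_db(reference, received, peak=255.0):
    # PSNR in dB between two equally shaped 8-bit images.
    diff = reference.astype(np.float64) - received.astype(np.float64)
    mse = np.mean(diff ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Toy usage: random data standing in for the reference and decoded images.
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (512, 512, 3)).astype(np.float64)
rec = np.clip(ref + rng.normal(0.0, 5.0, ref.shape), 0.0, 255.0)
print(f"{psnr_db(ref, rec):.1f} dB")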
CONCLUSION In this work, we have presented an orthogonal multiple transmission system based on wavelet packet modulation, suitable for the resilient broadcasting of data which exhibit different sensitivities to transmission errors. A novel unequal error protection technique based on differentiated allocation of the transmission power over the modulated waveforms improves the final quality of the received parameters in the case of an AWGN channel, without any increase of the transmission bandwidth. The optimization of the weights relied on genetic algorithms, which allowed reduced complexity to be achieved. Due to its scalability properties, the proposed scheme is able to provide multiple transmission of heterogeneous services which can be independently protected according to their specific format. Therefore, unequal power allocation applied to wavelet packet division multiplexing offers improved flexibility to broadcasters. Nevertheless, it is worth pointing out that particular attention has to be given to the design of the wavelet filters, which are real-valued and, under the UPA approach, could impair the performance of the transmission in the case of wireless systems. In fact, the proposed UPA scheme may increase the dynamic range of the input signals to the WPDM modulator in Figure 4. Since g_0[n] is a real causal FIR filter, the larger input amplitude range may increase the complexity of these filters. This may be a disadvantage of UPA for implementation. Future work on this subject will investigate the capability of the proposed scheme to deal with real-time varying transmission conditions, including the presence of fading effects, and the broadcasting of reconfigurable heterogeneous services.

Figure 3: (a) WPDM tree structure suitable for broadcasting heterogeneous services. (b) Symbolic subband structure of the system in (a). Figure 5: Example of the crossover and mutation operators in the case of chromosomes composed of four genes. Figure 6: Flowchart of the proposed GA. Figure 12: Percentage bandwidth gain for fixed quality on the transmitted parameter u_1(τ) (a) and u_2(τ) (b) achieved by UPA against UEP by RS codes for the WPDM scheme in Figure 7. Table 1: Parameter settings for the experiments.
2018-04-03T00:44:23.038Z
2008-02-25T00:00:00.000
{ "year": 2008, "sha1": "7b1709e6acce305dd990f4bfc24e49665ee0a2df", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ijdmb/2008/523649.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7b1709e6acce305dd990f4bfc24e49665ee0a2df", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
216124316
pes2o/s2orc
v3-fos-license
INTERACTION NETWORK OF TaRHA2b OF WHEAT (TRITICUM AESTIVUM L.) BASED ON HIGH-THROUGHPUT YEAST TWO-HYBRID SCREENING

To explore the signal transduction pathway that the TaRHA2b gene is involved in regulating, a yeast two-hybrid system was used to screen and identify proteins interacting with TaRHA2b. A homogenized cDNA library was constructed from pooled barley embryos. The bait protein vector pGBKT7-TaRHA2b was constructed and transformed into yeast AH109 cells for self-activation detection. Candidate positive clones were screened by one-to-one yeast co-transformation, selection on SD/-Leu/-Trp and SD/-Leu/-Trp/-Ade/-His media, and the β-galactosidase chromogenic reaction. In vitro co-transformation and GST pull-down experiments were used for further verification. A homogenized cDNA library of barley embryos with a titer of 1.2 × 10^6 cfu/ml and a storage capacity of 1.1 × 10^6 clones was successfully constructed. The inserted fragment sizes were between 1000 and 3000 bp. The plasmids of the cDNA library were extracted and sequenced on a large scale, and 5000 plasmids were obtained. A bait vector without self-activation activity was successfully constructed. Eighty-four hybrid clones showed a strong blue color in the X-gal filter paper assay, and forty-nine effective sequences were obtained. Eight of them were selected for co-transformation verification, and YTH2450 was selected for GST pull-down verification. The results showed that these were positive hybrid clones and that TaRHA2b interacts with YTH2450 and other proteins. The information on the above genes could be used for the genetic improvement of crops, further improving the adaptability of crops to the environment.

Introduction Preharvest sprouting (PHS) refers to the phenomenon of direct germination of grains on the ear in a wet environment at the mature stage of wheat (Triticum aestivum L.). The negative effects of spike germination on wheat production are mainly reflected in decreased yield, deteriorated quality, and lower seed value. At present, the resistance of most wheat varieties to spike germination is not strong, and the shortage of wheat resistance gene resources severely restricts the breeding and application of wheat varieties resistant to preharvest sprouting. The Cys-rich RING domain was identified for the first time in proteins encoded by Really Interesting New Genes. In the Arabidopsis thaliana proteome, more than 5% of the predicted proteins (more than 1400 genes) are involved in the ubiquitination/26S proteasome pathway (Smalle and Vierstra, 2004). Among these, only a few encode E1 enzymes (two isoforms), 37 encode predicted E2s, and others encode 26S proteasome components and additional factors (such as deubiquitinases), while more than 1400 genes encode E3 ubiquitin ligases participating in ubiquitination-dependent proteasome degradation pathways (Smalle and Vierstra, 2004). If a bait protein has self-activation activity, the downstream reporter gene can be directly expressed, which makes it meaningless to use such a bait vector to screen the library. In order to further understand the function of TaRHA2b and its mechanism of PHS resistance, genes related to TaRHA2b were screened by the yeast two-hybrid method. The present study not only can screen for new stress-resistance genes, but also provides a theoretical basis for elucidating the mechanism of TaRHA2b-mediated PHS resistance.
Construction and identification of the homogenized cDNA library Wheat is hexaploid and its genome structure is complex. Barley and wheat are both gramineous plants, and the genetic distance between them is small. Therefore, the interactions of the TaRHA2b protein can first be studied in barley, and the function of TaRHA2b in common wheat can then be studied.

Preparation and purity detection of total RNA from barley (Hordeum vulgare L.) embryos Barley seeds were treated with 5 µM ABA and distilled water, respectively. Fifty seeds were used for each treatment, and the sampling times were 0 h, 6 h, 12 h, and 24 h. After the embryos were excised from the treated seeds, mixed samples were prepared and total RNA was extracted using the RNAiso Plus kit (Takara, Japan). Total RNA was assessed by spectrophotometer via the OD260/OD280 ratio. 1-2 µg of RNA was heat-denatured (65 °C, 10 min), and agarose gel electrophoresis was then used to check the purity of the RNA.

Synthesis of first-strand and second-strand cDNA With the total RNA as the template, the experimental steps followed the Creator SMART cDNA Library Construction Kit (Clontech, USA). A 5 µL aliquot was checked by agarose gel electrophoresis.

Purification, enzyme digestion, ligation, and library transformation of ds cDNA The ds cDNA was purified using the QIAquick PCR Purification Kit. The purified ds cDNA was digested with SfiI at 65 °C for 2 h. The products were purified on a CHROMA SPIN TE-400 column. The purified product was ligated with the pGADT7-Rec vector overnight at 16 °C. The ligation product was transformed into E. coli DH10B, which was plated on ampicillin-resistant LB plates (15 cm in diameter) and cultured overnight at 37 °C.

Identification of the library The number of colonies was observed and recorded, and the storage capacity was calculated. Thirty clones were randomly selected for PCR. The products were detected by agarose gel electrophoresis, the recombination rate was calculated, and the size of the inserted fragments was estimated.

Extraction, sequencing, and bioinformatics analysis of cDNA library plasmids Plasmids from 2 × 96 monoclones were extracted as templates and sequenced on an ABI3730 sequencer; forward sequencing was used. The sequences were analyzed by BLAST, using the NCBI and Swiss-Prot databases for bioinformatics.

Construction and self-activation detection of the bait protein vector Screening of interacting proteins followed the Matchmaker Gold Yeast Two-Hybrid System User Manual (PT4084-1).

Construction of the bait protein vector Both the pMD19-TaRHA2b and pGBKT7 vectors were digested with EcoRI and PstI simultaneously, and the corresponding target fragment and linearized pGBKT7 vector were recovered. The recombinant vector pGBKT7-TaRHA2b was finally obtained and transformed into E. coli. Monoclones were selected and plasmids were extracted after shaking culture. The obtained plasmids were identified by enzyme digestion, and the remainder was stored at -80 °C in 50% glycerol.

Self-activation detection of the bait vector Yeast strain AH109 was cultured inverted on a YPDA plate at 30 °C. Yeast colonies 2-3 mm in diameter were selected and streaked on SD/-Trp, SD/-Leu, SD/-His, and SD/-Ade media, respectively. The growth of the yeast was recorded after inverted culture at 30 °C for 3-5 days. The bait vector pGBKT7-TaRHA2b and pGADT7 were co-transformed into yeast AH109 competent cells and streaked separately on SD/-Trp-Leu and SD/-Trp-Leu-His-Ade media.
The positive control group was co-transformed with pGBKT7-53 and pGADT7-T, while the negative control group was co-transformed with pGBKT7-lam and pGADT7-T. The growth of the yeast was recorded after 3-4 days of inverted culture at 30 °C on all co-transformation plates.

One-to-one transformation of bait and library plasmids into competent yeast The bait vector pGBKT7-TaRHA2b was transformed into yeast AH109 cells together with the library plasmid. The transformants were streaked on SD/-Trp-Leu and SD/-Trp-Leu-His-Ade media, respectively. At the same time, yeasts transformed with pGBKT7-53 and pGADT7-T, and with pGBKT7-lam and pGADT7-T, were used as positive and negative controls, respectively. The growth of the yeast was observed and recorded after 3-4 days of incubation at 30 °C.

Identification of yeast positive clones The large-scale library plasmid and the bait protein vector were co-transformed into yeast AH109 cells. The β-galactosidase activity of co-transformed yeast colonies on SD/-Trp-Leu-His-Ade plates was detected by X-gal filter paper colorimetry, and the color reaction of the yeast colonies was observed within 8 h.

Sequencing and analysis of the AD-ORFs of positive candidates The plasmids corresponding to the yeast positive clones were sequenced, and the obtained sequences were analyzed bioinformatically using the NCBI and Swiss-Prot databases.

Co-transformation verification The barley DNA stored in the laboratory was used as the template and amplified with the primers listed in Table 1. The amplification program was as follows: 94 °C for 5 min; then 30 cycles of 94 °C for 30 s, 57 °C for 30 s, and 72 °C for 1 min 20 s; and a final extension at 72 °C for 10 min. The size of the target fragment was 933 bp. After gel recovery, it was ligated with pGEM-T and transformed into DH5α to extract the plasmid. The above plasmid was digested with EcoRI/BamHI, as was pGADT7-T. The target fragment was ligated with the digested pGADT7-T vector by T4 ligase and transformed into DH5α. The plasmid was extracted and identified by PCR, and the corresponding candidate prey vector was finally obtained. The bait vector and prey vector were co-transformed into yeast and plated on SD/-Trp-Leu and SD/-Trp-Leu-His-Ade solid medium. Colony growth was observed after incubation for 2-4 days.

GST pull-down verification Cloning of the YTH2450 and TaRHA2b genes: pGADT7-YTH2450 was used as the template, and primers 59/60 were used for amplification. The amplification program was as follows: 94 °C, …

Construction of prokaryotic expression vectors The plasmid pGEM-T-YTH2450 was subjected to BamHI/SalI double digestion. At the same time, the pMAL vector was digested with BamHI/SalI, and the target products were recovered after electrophoresis. The target fragment and the target vector were ligated with T4 ligase, and the ligation product was transformed into E. coli. The plasmid pMAL-YTH2450 was extracted. The plasmid pGEM-T-TaRHA2b was subjected to BamHI/XhoI double digestion. At the same time, the pLEICS vector was digested with BamHI/XhoI, and the target products were recovered after electrophoresis. The target fragment and the target vector were ligated with T4 ligase, and the ligation product was transformed into E. coli. The plasmid pLEICS-TaRHA2b was extracted.

Induced expression and purification of the fusion proteins 500 μL of positive bacterial culture was transferred to 10 ml of LB liquid medium containing 50 μg/ml ampicillin and grown overnight at 37 °C.
2 ml of the culture was then transferred to 50 ml of LB liquid medium containing 50 μg/ml ampicillin and grown overnight at 37 °C. 1 ml of culture was kept as an uninduced control; IPTG was added to the remaining culture to a final concentration of 0.3 mM, and incubation continued at 37 °C. 1 ml of bacterial culture was collected at different induction stages (1 h, 2 h, 3 h, 5 h). The samples were centrifuged for 2 min, the supernatant was discarded, and the cells were resuspended in 100 µL of buffer solution. The samples were boiled in water for 5 min, centrifuged for 1 min, and resuspended in buffer solution, and SDS-PAGE was performed on the obtained samples. After freeze-thawing at room temperature, the cells were immediately placed on ice, and 10-20 ml of lysis buffer (PBS + 1% Triton X-100 + PMSF) was added per 500 ml of culture and mixed well. The cells were sonicated on ice (2 s on, 9 s off, 40-60 min in total), keeping the lysate cool throughout. The lysate was centrifuged at 11000 rpm for 15 min at 4 °C, and the supernatant was stored at -80 °C until use.

GST (glutathione S-transferase) pull-down Three purification columns containing Glutathione Sepharose 4B were equilibrated with binding buffer (50 mM Tris-HCl, pH 7.4, 100 mM NaCl). Bacterial lysates (prepared by sonication and clarified by centrifugation and filtration) were added: two columns received lysate from large-scale induction of pLEICS-TaRHA2b (GST-TaRHA2b), while the third received lysate from large-scale induction of pLEICS (GST alone). The columns were placed on a horizontal shaker at 4 °C and gently inverted so that the lysate remained in contact with the column matrix to promote binding. After 2-4 h, the lysate was discarded and the columns were rinsed 5-8 times with pre-cooled binding buffer. A large volume of pMAL-YTH2450-induced bacterial lysate (sonicated, centrifuged, and filtered) was then added to the GST-TaRHA2b columns and the GST-bound column, respectively. Again at 4 °C with slow shaking for 4 h, the lysate was discarded, the columns were washed 5-8 times with binding buffer, and the three purification columns were eluted with elution buffer (50 mM Tris-HCl, pH 7.4, 10 mM NaCl, 10 mM reduced GSH); the eluted samples were collected for SDS-PAGE analysis.

Statistical analysis GraphPad Prism 8 was used for statistical analysis and plotting. For comparing the results of different treatments, analysis of variance was followed by a post-hoc test to determine pairwise differences; differences were considered significant at P < 0.05 (a sketch of an equivalent open-source analysis is given below).

Construction and identification of the homogenized cDNA library of barley embryos When the prepared RNA was examined by spectrophotometer, the OD260/OD280 ratio was 2.03, indicating high RNA purity. On agarose gel electrophoresis, the brightness ratio of the 28S and 18S bands was close to 2:1 (Fig. 1A); this normal proportion indicates good RNA integrity. In summary, the prepared RNA could be used for further experiments. The size of the double-stranded cDNA fragments obtained by LD-PCR was uniformly distributed in the range of 300-5000 bp (Fig. 1B), with slightly brighter bands for high-abundance genes. The storage capacity of the plate was about 1200 clones. The titer of the library was 1.2 × 10^6 cfu/ml and the storage capacity was 1.1 × 10^6. Colony PCR identification of randomly selected clones from the library showed that the size of the inserted fragments ranged from 1000 to 3000 bp (Fig. 1C). The results showed that the library was of high quality and could be used for the subsequent yeast two-hybrid library screening.
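As a companion to the statistical-analysis subsection above: the study ran its variance analysis in GraphPad Prism 8, and the sketch below shows an equivalent open-source pipeline (one-way ANOVA followed by Tukey's HSD post-hoc test at P < 0.05). The group names and replicate values are hypothetical placeholders, not data from the study.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical replicate measurements for three treatment groups; these
# stand in for the study's data, which are not reproduced in the text.
rng = np.random.default_rng(42)
groups = {
    "control": rng.normal(1.00, 0.05, 6),
    "ABA_5uM": rng.normal(0.82, 0.05, 6),
    "water":   rng.normal(0.97, 0.05, 6),
}

# One-way analysis of variance across the treatments ...
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# ... followed by a post-hoc test (Tukey's HSD) for pairwise differences,
# declaring significance at P < 0.05 as in the text.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))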
Large-scale extraction, sequencing, and bioinformatics analysis of library plasmids The bacterial cultures of the barley cDNA library were screened by PCR, and 5000 plasmids were finally extracted. Most of the amplified fragments were over 1000 bp in size (Fig. 2). Considering the integrity of the DNA fragments, 120 plasmids with longer amplified fragments were selected and sequenced. Sequence data for 80 plasmids were obtained, and a total of 71 genes remained after duplicates were removed (Table 3). The information on the sequences includes query coverage, percentage identity, accession number, species, and domain description. Information on 51 of these genes is known. The domain functions of some genes are related to the aldo/keto reductase family, EPSIN1, the zinc finger B-box type profile, serine/threonine protein kinase active-site signatures, AGD5, the TPR repeat profile, the NAF domain profile, the AP2/ERF domain profile, the Myb-type HTH DNA-binding domain profile, BARE-2, SAC9, the leucine-rich repeat profile, the HEAT repeat profile, the UBX domain profile, the GRAS family profile, autophagy-related protein 7c (ATG7c), delta-1-pyrroline-5-carboxylate synthase, the MO25-like protein, and so on. Mining stress-resistance genes is an important task in molecular breeding, and the acquisition of these data lays a foundation for gene discovery.

Construction and self-activation detection of the recombinant bait protein vector RT-PCR yielded a fragment close to 500 bp, consistent with the expected size (Fig. 3A). After sequencing, the size was confirmed to be 477 bp, and a sequence BLAST showed it to be consistent with the wheat TaRHA2b gene. The enzyme digestion results for the bait vector pGBKT7-TaRHA2b are shown in Figure 3B. After plasmid sequencing, BLAST comparison confirmed consistency with the wheat TaRHA2b gene sequence, indicating that the TaRHA2b fragment was correctly integrated into the multiple cloning site of the pGBKT7 vector. Yeast strain AH109 did not grow on SD/-Leu/-Trp/-Ade/-His medium after 3-5 d (Fig. 3C), indicating that the strain had undergone no phenotypic mutation during subculture. The experimental group, positive control group, and negative control group all grew on SD/-Leu/-Trp medium (Fig. 3D). The positive control grew on SD/-Leu/-Trp/-His/-Ade medium, but neither the experimental group nor the negative control did (Fig. 3E). These results show that the bait vector pGBKT7-TaRHA2b has no self-activation activity.

A pair of positive plasmids was co-transformed for in vitro validation pGADT7-YTH2450 and pGBKT7-TaRHA2b were co-transformed into yeast AH109 cells and then cultured on SD/-Trp-Leu-His-Ade medium for 3 d (Fig. 5). The results showed a strong interaction between YTH2450 and RHA2b. Primers were designed according to the homologous sequences obtained by bioinformatics analysis of YTH2450. The obtained plasmid pMAL-YTH2450 was sequenced; after bioinformatics analysis confirmed that the fragment was correct, subsequent operations were carried out. The final pMAL-YTH2450 recombinant plasmid and its enzyme digestion identification were correct (Fig. 6A). PCR was performed on the bacterial culture containing the pLEICS-TaRHA2b plasmid, and the results are shown in Figure 6B. The size of the target fragment was consistent with expectations.
Sequencing was performed, bioinformatics analysis confirmed that the fragment was correct, and subsequent operations were carried out. The size of one expressed protein was about 75.0 kDa, consistent with the expected 75.0 kDa, indicating successful prokaryotic expression (Fig. 6C). The size of the other expressed protein was about 63.0 kDa, consistent with the expected 63.0 kDa, likewise indicating successful prokaryotic expression (Fig. 6D). GST-TaRHA2b was immobilized on a Glutathione Sepharose 4B column, with GST (induced from the empty vector pLEICS-14) immobilized as a negative control. After repeated washing, SDS-PAGE analysis was performed. YTH2450 bound specifically to GST-TaRHA2b (lane 1) but not to the negative control GST (lane 2) (Fig. 6E), indicating an interaction between TaRHA2b and YTH2450 in vitro.

There are few studies on barley cDNA libraries, and the construction of a cDNA library from the embryos of ABA-treated barley has not previously been reported. This project aims to explore the specific regulatory network of the TaRHA2b gene in the ABA signal transduction pathway and to study the interactions between proteins, for which the yeast two-hybrid technique is the first choice. The construction of a high-quality homogenized cDNA library is the basis of the subsequent experiments. Homogenization ensures the presence of low-abundance mRNAs as far as possible, which is necessary for the successful construction of a cDNA library. In SMART technology, the LD-PCR method is adopted to obtain ds cDNA, which yields fragments that are as long as possible and fully reflects the integrity of the mRNA (Schuler, 1997; Wellenreuther et al., 2004). In this experiment, a homogenized cDNA library was constructed with a capacity of 1.1 × 10^6. Random clone sequencing showed that the fragment sizes were within the range of 1000-3000 bp. The storage capacity requirement for a cDNA library is above 1.7 × 10^5 (Thanh et al., 2011); only when this standard is reached can low-abundance mRNAs be well represented and the constructed library be used for yeast two-hybrid library screening experiments. In the present study, the yeast strain AH109 was deficient in the ability to synthesize Leu, Trp, His, and Ade. The synthesis of Leu and Trp was complemented when the plasmids were transformed into the yeast. As for His, there is a possibility of background expression in yeast; however, in the auxotrophy validation experiment, AH109 could not grow on SD/-His medium without 3-AT, and therefore 3-AT was not added in this experiment. In this experiment, the bait vector pGBKT7-TaRHA2b for the yeast two-hybrid system was successfully constructed. The bait vector can be expressed normally in yeast without toxicity, and it was shown to have no self-activation in the system. Yeast two-hybrid screening can be carried out by two methods. The first is to screen AH109 by co-transforming the bait plasmid with the ds cDNAs. Although this method is simple, the library can only be used once, and it is difficult to screen the interacting proteins. The second is screening by mating of two transformants: a Y187 yeast transformant containing the bait protein and an AH109 yeast transformant containing pGADT7-Rec and the ds cDNAs. The operation of this method is complex. In this experiment, pairwise co-transformation of yeast AH109 was used; if a positive interaction was confirmed, the specific plasmid could be identified directly.
The follow-up operations are thus more efficient. Using the protein expressed by the known PHS-resistance gene RHA2b as bait, we screened fragments corresponding to TaRHA2b-interacting proteins by yeast two-hybrid technology, which is not only significant for the search for new PHS-resistance genes, but also provides new information for studying the mechanism of TaRHA2b-mediated PHS resistance. The interaction between TaRHA2b and the interacting proteins was fully demonstrated by positive clone identification, sequencing, pairwise repeated co-transformation verification, and GST pull-down. These conclusions lay a foundation for studying the interactions and mechanisms of TaRHA2b/YTH2450, TaRHA2b/YTH2456, and TaRHA2b/YTH2476 in the stress tolerance pathways of wheat. The complexes formed by the yeast two-hybrid candidate proteins and TaRHA2b may act in different signal transduction pathways. Bioinformatics analysis showed that YTH2450 may function as a malate/ketoglutarate transporter, YTH2456 as a RACD protein, and YTH2476 as a TIP1 protein. TaRHA2b is essentially a ubiquitin (E3) ligase acting as a positive regulator of ABA signal transduction, and it mainly participates in the process of protein degradation in cells. RHA2b targets MYB30 for degradation to regulate ABA signal transduction (Zheng et al., 2018). The ketoglutarate/malate transporter is involved in the tricarboxylic acid cycle, and the present study showed a strong interaction between the ketoglutarate/malate transporter and TaRHA2b. The process of ubiquitination involves the formation of multiple complexes and changes in energy metabolism. The α-ketoglutarate carrier protein (OGCP), located on mitochondria, comprises 314 amino acids and has the following important functions: scavenging endogenous oxygen free radicals in mitochondria and participating in cellular energy metabolism (De Palma et al., 2010; Regalado et al., 2013). These results indicate that the complex formed by ketoglutarate transporters and TaRHA2b could be involved in ABA signal transduction. The plant RAC/ROP family is related to the small G proteins, which act as molecular switches of signal transduction in many cells, regulating cell development and responses to the environment. RAC/ROP has three conformations: an active GTP-bound state, a transient free state, and a GDP-bound state. In the active GTP-bound state, RAC/ROP interacts with effector proteins to initiate downstream signaling (Haitina et al., 2006). These results indicate that the complex formed by RAC/ROP and TaRHA2b can activate downstream signals and then play a role in a specific pathway. The TIP1 protein is a water channel protein, which can theoretically be expressed in all tissues, although the expression level differs among them (Wudick et al., 2009). Therefore, in response to ABA, an abiotic stress, or under the conditions in which spike germination occurs, it is very likely that the complex formed by TIP1 and RHA2b participates in these regulatory pathways. These results lay a foundation for further study of the interactions and mechanisms of TaRHA2b and protein-protein interactions in wheat stress resistance pathways.

Conclusion The information from the cDNA library lays a foundation for the discovery of stress-resistance genes. TaRHA2b interacts with YTH2450 and other proteins.
The TaRHA2b-interacting proteins could be used for the genetic improvement of crops, further improving the adaptability of crops to the environment. The detailed regulatory mechanisms mediated by TaRHA2b and its interacting proteins need to be studied further.
2020-04-09T09:20:08.715Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "e77729c737e08178d6b5d8defa70084f8ad46f89", "oa_license": null, "oa_url": "https://doi.org/10.15666/aeer/1706_1310513124", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "866f7830ef0bfc22de8717a74b49bc03b6c7025f", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
238182439
pes2o/s2orc
v3-fos-license
Biological behavior exploration of a paclitaxel-eluting poly-L-lactide-coated Mg-Zn-Y-Nd alloy intestinal stent in vivo

As a new type of intestinal stent, the MAO/PLLA/paclitaxel/Mg-Zn-Y-Nd alloy stent has shown good degradability, although its biocompatibility in vitro and in vivo has not been investigated in detail. In this study, its in vivo biocompatibility was evaluated in an animal study. New Zealand white rabbits were implanted with degradable intestinal Mg-Zn-Y-Nd alloy stents that had been exposed to different treatments. Stent degradation behavior was observed both macroscopically and using a scanning electron microscope (SEM). Energy dispersive spectroscopy (EDS) and histological observations were performed to investigate stent biological safety. Macroscopic analysis showed that the MAO/PLLA/paclitaxel/Mg-Zn-Y-Nd stents could no longer be located 12 days after implantation. SEM observations showed that the degree of corrosion of the MAO/PLLA/paclitaxel/Mg-Zn-Y-Nd stents implanted in rabbits was significantly lower than that in the PLLA/Mg-Zn-Y-Nd stent group. Both histopathological testing and serological analysis of in vivo biocompatibility demonstrated that the MAO/PLLA/paclitaxel/Mg-Zn-Y-Nd alloy stents could significantly inhibit intestinal tissue proliferation compared to the PLLA/Mg-Zn-Y-Nd alloy stents, thus providing a basis for designing excellent biodegradable drug-eluting stents.

Introduction Bowel obstruction due to intestinal stricture formation is a well-known complication of enteral diseases, including malignant and benign strictures. Factors causing benign intestinal obstructions include Crohn's disease and anastomotic stenosis. 1 The recommended treatment for strictures involves self-expanding metal or plastic stents. However, the application of these stents is associated with several common problems, including new stricture formation, perforation, migration, tissue ingrowth, and repetitive endoscopy. 2-6 To avoid the complications of permanent metal and plastic stents, biodegradable polymer and alloy stents have recently been introduced. Biodegradable stents (BDS) made of magnesium alloys demonstrate performance superior to their polymeric counterparts due to excellent mechanical properties similar to those of SS316L stainless steel, which cannot be achieved by polymers. 7 Some studies have shown that magnesium alloys can be used as orthopedic implants or cardiovascular stents. 8-15 However, it is well known that magnesium alloys gradually corrode under natural conditions, and the major limitation of biodegradable magnesium alloy stents is their low corrosion resistance. To lengthen the degradation time, alloying is used as one method to enhance corrosion resistance and mechanical strength. Moreover, a polymer coating on the scaffold surface is also important for mechanical strength. 16 Degradable polymers, such as PLLA and PLGA, have good plasticity, mechanical properties, and biocompatibility, and are the most commonly used biomaterials for the surface modification of alloys to enhance their corrosion resistance. 17-21 Therefore, PLLA is considered as a coating material to improve the corrosion resistance of the magnesium alloy surface in this paper. At present, there are many ways to improve the corrosion resistance of the alloy surface; besides polymer coating, the micro-arc oxidation (MAO) process is one of the surface modification technologies for magnesium alloys.
At high breakdown voltages, various processes such as electrochemical, thermodynamic, and plasma-chemical reactions occur, accompanied by spark discharges, producing thicker, harder, wear-resistant ceramic coatings that can effectively improve the corrosion resistance of the alloy surface. 22,23 However, to reduce neointimal growth, biodegradable magnesium alloy stents can be coated with polymers containing antiproliferative drugs. Although previous research reported that the clinical efficacy of paclitaxel is affected by drug resistance, in view of its excellent antiproliferative performance in the treatment of breast cancer, paclitaxel is regarded as one of the effective antiproliferative drugs. 24 At the same time, it has been reported in the literature that both paclitaxel and sirolimus can effectively inhibit neointimal hyperplasia of the coronary artery within a certain dose range, with a degree of safety, 24-27 and can effectively reduce the degree of vascular restenosis. Therefore, paclitaxel and sirolimus are commonly used in drug-eluting stents for the treatment of coronary stenosis. However, whether paclitaxel can effectively inhibit tissue hyperplasia in the intestine, as well as the biological safety of a paclitaxel-eluting intestinal stent for important organs and tissues of the body, is unknown. Therefore, paclitaxel was selected as the sustained-release drug of the stent to explore the compatibility of the drug-eluting stent in the intestinal environment in this study. Few studies to date have reported on the biocompatible and biodegradable properties of magnesium alloys in the intestine. Mg-Zn-Y-Nd alloys have excellent biodegradability and biocompatibility, so they have been used as biomaterials in cardiovascular stent research. 28 In this paper, we selected Mg-Zn-Y-Nd alloys as the alloy material for degradable intestinal drug-eluting stents. The biocompatibility in vitro was evaluated by a cytotoxicity test. In addition, animal experiments were carried out by implanting scaffolds from different treatment groups. We evaluated the in vivo biocompatibility of a paclitaxel-eluting poly-L-lactide-coated Mg-Zn-Y-Nd alloy applied as a biodegradable intestinal stent through an animal study.

Materials preparation The paclitaxel-eluting poly-L-lactide-coated Mg-Zn-Y-Nd alloy stents used in this research were 25.0 mm in length, with a 10.0 mm diameter. A new Mg-Zn-Y-Nd alloy was prepared by induction melting in a low-carbon steel crucible at about 740 °C under a carbon dioxide/sulphur hexafluoride atmosphere (volume ratio 3000:1) using high-purity magnesium, high-purity zinc, and Mg-25Nd (99.97 wt%) and Mg-25Y (wt%) master alloys. 28 Magnesium alloy stent fibers were fabricated using a single-screw extruder. PLLA ((C6H8O4)n, intrinsic viscosity: 3.4 dl g−1 (CHCl3, 25 °C)) and paclitaxel were of reagent grade and purchased from the Science and Technology Company of Solarbio (Beijing, China) and the Academy of Pharmaceutical Sciences (Jinan, China), respectively. In this study, the MAO/PLLA and MAO/PLLA/paclitaxel coatings were prepared using the dip-coating method. The PLLA was dissolved in dichloromethane at a concentration of 0.03 g ml−1. Paclitaxel (>98% purity) was dissolved in the PLLA solution at a concentration of 0.008 g ml−1, because paclitaxel shows higher drug-loading accuracy in the concentration range of 5 mg ml−1 to 15 mg ml−1. 29
Mg-Zn-Y-Nd alloy stents were immersed in the solution for 30 min, withdrawn by dip coating, and left for 0.5 h to let the solvent evaporate. The process was repeated according to the thickness requirements, and the final product was placed in a vacuum drying oven for 24 h to remove residual solvent. In this study, the thickness of the PLLA coating was about 15.1 ± 3.1 μm. Before the in vivo experiments, all samples were sterilized with 29 kGy of cobalt-60 radiation. PLLA-coated and PLLA/paclitaxel-coated magnesium alloy specimens were also prepared by the dip-coating method as above. The control stent was identical, except for the absence of paclitaxel and poly-L-lactide. In this study, some wafer samples were treated with MAO to improve the corrosion resistance. The composition of the electrolyte used for MAO is listed in Table 1. After all reagents had dissolved, deionized water was used to bring the solution to volume, and it was mixed evenly. A high-frequency single-pulse power supply (YS9000D-300-40) was used for the micro-arc oxidation process. The alloy stent was connected to the power supply as the anode, and a stainless steel plate as the cathode. In constant-voltage mode, the voltage was increased from 0 V to 260 V at a rate of 1.6 V s−1. The samples were treated for 20 minutes, then washed with deionized water and dried naturally.

Animal model Animal experiments were performed according to the Guidelines for the Care and Use of Laboratory Animals and approved by the Ethics Committee of Luoyang Central Hospital Affiliated to Zhengzhou University. Forty-eight clean-grade adult New Zealand white rabbits with an average body weight of 2000 g (±121 g) were randomly divided into four groups. In the first group, the rabbits did not receive any intestinal implants and were assigned to the negative control group. In the second group of 12 rabbits, Mg-Zn-Y-Nd alloy stents were implanted into the intestine; this group was named the Mg-Zn-Y-Nd alloy stent group. In the third group of 12 rabbits, PLLA/Mg-Zn-Y-Nd alloy stents were implanted into the intestine. In the last group of 12 rabbits, MAO/PLLA/paclitaxel/Mg-Zn-Y-Nd alloy stents were implanted into the intestine. The operation was performed under intraperitoneal anesthesia with 3% pentobarbital sodium (2 ml kg−1). The skin was incised layer by layer and a 10 mm intestinal incision was made parallel to the long axis of the intestine. The Mg-Zn-Y-Nd alloy stents were embedded through the intestinal incision and the abdomen was sutured layer by layer. After the operation, about 10 ml of sodium chloride solution was administered to each rabbit. The rabbits were able to resume regular activities, including eating and drinking, after recovering consciousness.

Cell toxicity assay for paclitaxel loaded in the stent An indirect-contact cytotoxicity test was performed to show indirectly that paclitaxel was loaded in the stent designed in this study. Pure DMEM culture medium containing 10% fetal bovine serum (FBS) was used as the negative control, and DMEM containing 0.64% phenol as the positive control. NCM-460 cells (human colonic epithelial cells) were suspended in complete medium. The cell concentration was adjusted to 1.0 × 10^5/ml and the cells were inoculated into 96-well plates. The cells were cultured in the incubator (37 °C) until adherent. The incubator was maintained at 37 °C and 5% CO2, and the extraction medium was replaced every 24 h. NCM-460 cells were cultured in different concentrations of extract medium for 72 h.
Animal model

Animal experiments were performed according to the Guidelines for the Nursing and Use of Laboratory Animals and approved by the Ethics Committee of Luoyang Central Hospital Affiliated with Zhengzhou University. Forty-eight clean adult New Zealand white rabbits with an average body weight of 2000 g (±121 g) were randomly divided into four groups. In the first group, rabbits did not receive any intestinal implants and were assigned to the negative control group. In the second group of 12 rabbits, the Mg-Zn-Y-Nd alloy stents were implanted into the intestine; this group was named the Mg-Zn-Y-Nd alloy stent group. In the third group of 12 rabbits, the PLLA/Mg-Zn-Y-Nd alloy stents were implanted into the intestine. In the last group of 12 rabbits, the MAO/PLLA/paclitaxel/Mg-Zn-Y-Nd alloy stents were implanted into the intestine. The operation was performed under intraperitoneal anesthesia with 3% sodium pentobarbital (2 ml kg⁻¹). The skin was incised layer-by-layer and a 10 mm intestinal incision was made parallel to the long axis of the intestine. The stents were embedded in the intestinal incision and the abdomen was sutured layer-by-layer. After the operation, about 10 ml of sodium chloride solution was administered to each rabbit. The rabbits were able to resume regular activities, including eating and drinking, following recovery of consciousness.

Cell toxicity assay for paclitaxel loaded in the stent. An indirect-contact cytotoxicity test was performed to verify indirectly that paclitaxel was loaded in the stent designed in this study. Pure DMEM culture medium containing 10% fetal bovine serum (FBS) was used as the negative control and DMEM containing 0.64% phenol as the positive control. NCM-460 cells (human colonic epithelial cells) were suspended in complete medium, the cell concentration was adjusted to 1.0 × 10⁵ ml⁻¹, and the cells were seeded in 96-well plates. The cells were cultured in the incubator (37 °C) to allow adherence. The incubator was maintained at 37 °C and 5% CO₂, and the extraction medium was replaced every 24 h. NCM-460 cells were cultured in different concentrations of extract medium for 72 h. The aim was to indirectly determine the cytotoxicity of the magnesium alloy extracts; whether paclitaxel had been loaded into the magnesium alloy scaffolds was judged indirectly by observing cell proliferation in the different treatment groups.

Macroscopic and SEM analysis. Magnesium alloy stents implanted in the intestinal tracts of New Zealand white rabbits were removed at different time points (2, 5, 8, and 12 days). First, the overall structure and corrosion morphology of the stents were observed macroscopically. Scanning electron microscopy (SEM) was used to analyze the corrosion morphology of every stent group (bare body, PLLA/Mg-Zn-Y-Nd, and MAO/PLLA/paclitaxel/Mg-Zn-Y-Nd alloy scaffolds). Energy dispersive spectroscopy (EDS) was used to evaluate the corrosion degree of the magnesium alloy stents.

2.3.3 Biocompatibility analysis of in vivo study. Before stent removal, 3 ml of blood from every rabbit in the MAO/PLLA/paclitaxel/Mg-Zn-Y-Nd alloy stent group was taken for serological analysis of liver and kidney functions to assess whether systemic inflammation was induced. In this study, hematoxylin-eosin (HE) was used for tissue staining. The intestinal tissues around the scaffold, together with heart and liver tissue, were fixed with a 10% formalin solution for 24 h and then embedded in paraffin and sectioned. The heart and liver tissues were stained with HE only. Intestinal tissue paraffin sections were incubated with the Bcl-2 (diluted to 1 : 200) and Bax (diluted to 1 : 200) antibodies, using the Elivision immunohistochemical technique. Finally, the stained sections were observed using an optical microscope and the number of positive cells in each visual field was recorded.

Statistical analysis

The differences between groups were assessed using one-way ANOVA. Step-wise regression analyses were conducted to evaluate the dose effects. Values were considered significant when p < 0.05. Statistical values are shown in the relevant experiments.
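As a minimal sketch of the group comparison described in the statistical analysis paragraph, a one-way ANOVA in Python could look as follows; the group names and measurement values below are invented placeholders, not data from this study.

# One-way ANOVA across the four treatment groups with the p < 0.05
# significance threshold used in this study. All values are placeholders.
from scipy import stats

control = [12, 15, 11, 14, 13]   # hypothetical per-field cell counts
bare    = [22, 25, 21, 24, 23]
plla    = [18, 20, 17, 19, 21]
mao_ptx = [9, 11, 8, 10, 12]

f_stat, p_value = stats.f_oneway(control, bare, plla, mao_ptx)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("difference between groups is significant at the 5% level")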
As shown in Fig. 1, the cells in the negative control group were normal and healthy, and more viable cells were present. However, with increasing extract concentration, the number of cells decreased gradually and the cell morphology became abnormal. Compared with the other two groups, the number of cells in the paclitaxel-coated group was significantly reduced and the cell morphology was abnormal. Paclitaxel has the effect of inhibiting cell proliferation; through the cytotoxicity test, it can be confirmed that paclitaxel had been successfully loaded into the magnesium alloy stents.

Assessment of macroscopic corrosion morphology in intestinal drug-eluting stents

Magnesium alloy intestinal stents from the different treatment groups were implanted into New Zealand rabbits for 2 days, and a small amount of degradation products was attached to the surface of the stents. Fig. 2 demonstrates that the structure of the PLLA-coated magnesium alloy stent was still intact five days after implantation into the rabbits. It can also be observed that a small number of fibers began to break on the stents of the PLLA-coated magnesium alloy group and that the amount of degradation products attached to the surface increased. However, no magnesium alloy stent was found in the rabbit intestinal tract in the bare stent group after implantation for 5 days. After stent implantation for 8 days, only the MAO/PLLA/paclitaxel-coated Mg-Zn-Y-Nd stents were found in the rabbits; no stents were found in the other two groups. Moreover, no magnesium alloy stents were found in the three groups after implantation into the rabbit intestinal tract for 12 days.

Fig. 3 represents the local corrosion morphology of magnesium alloy stent fibers implanted for 2 days. A few cracks and some flaking formed on the surface of the wires, while the stent structure of the uncoated Mg-Zn-Y-Nd alloy remained intact (Fig. 3a and d). The number of cracks and debris adhesions on the surface of the scaffold threads in the PLLA-coated stent group was much smaller than that in the bare group. Moreover, the surface of the scaffold threads in the MAO/PLLA/paclitaxel-coated stent group was smoother than that of the bare Mg-Zn-Y-Nd alloy stents, showing better resistance to corrosion (Fig. 3c and f). The surfaces of the threads at the ends of the stent were rougher than those inside the stent and showed more small cracks (Fig. 3).

EDS spectra corrosion morphology assessment in intestinal drug-eluting stents

SEM and EDS results 2 days after implantation of stents from the different treatment groups are represented in Fig. 4. Abundant degradation products were present on the surface of the bare scaffold filaments (Fig. 4a), where the content of P and Ca was 10.48% and 15.18%, respectively (Fig. 4b). Gray-white degradation products were present on the surface of the PLLA-coated stent filaments (Fig. 4c), with a P and Ca content of 5.69% and 5.72%, respectively (Fig. 4d). A small amount of degradation products was present on the surface of the MAO/PLLA/paclitaxel-coated scaffolds (Fig. 4e), with a P and Ca content of 3.70% and 1.33%, respectively (Fig. 4f). These results showed that the degradation products contained some insoluble inorganic substances such as magnesium carbonate and calcium phosphate. Moreover, the content of P and Ca in the MAO/PLLA/paclitaxel-coated stent group was significantly smaller than that in the other two groups (p < 0.05), while the content of P and Ca in the PLLA-coated stent group was significantly smaller than that in the bare stent group (p < 0.05). EDS analysis showed that the MAO/PLLA/paclitaxel-coated stent group had better corrosion resistance than the other two groups.

Histopathological assessment of important organs

Pathological assessment images of important rabbit organs after implantation of the MAO/PLLA/paclitaxel-coated Mg-Zn-Y-Nd alloy stents at 2 and 8 days are shown in Fig. 5. Myocardial fiber morphology is presented in Fig. 5a and d; no obvious abnormality of nuclear structure and no infiltration of inflammatory cells into the interstitium was found at any stage. There was no obvious abnormality in the morphology of hepatocytes, as evident from the structure of the hepatic lobules, with no infiltration of inflammatory cells between hepatocytes at the different time points (Fig. 5). The morphology and structure of the renal corpuscles, tubules, and collecting ducts were not abnormal. This indicated that the MAO/PLLA/paclitaxel-coated Mg-Zn-Y-Nd alloy scaffolds had good biocompatibility.

Histopathological evaluation of intestinal tissues around stents

Pathological sections revealed the effect of stent implantation on the surrounding intestinal tissue in rabbits at different stages (Fig. 6). Fig. 7 shows the relationship between the stent and the surrounding intestinal tissue when the stent was removed after implantation in the rabbits for a period of time.
Two days aer stent implantation, the intestinal mucosa around the MAO/PLLA/paclitaxel-coated magnesium alloy intestinal stent showed a small amount of damage and exfoliation, with a certain amount of inammatory cells inltrating the sample tissue. Eight days aer stent implantation, the stripped area of intestinal mucosa around the stent was signicantly reduced compared to 2 days aer implantation. Inammatory cell inltration was also obviously reduced. The intestinal mucosa of the stripped area began to grow again, intestinal epithelial proliferation was inhibited, and tissue damage tended to recover, as shown in Fig. 6. Immunohistochemical evaluation of intestinal tissues around stents Immunohistochemical staining results for Bax and Bcl-2 were shown in Fig. 8. The antibody expression levels for Bax and Bcl-2 from immunohistochemical results for intestinal tissue around the stents were shown in Fig. 8A(a and b). Two days aer the operation, the expression level of Bax in the MAO/PLLA/ paclitaxel stent group was lower than that in the PLLA-coated magnesium alloy stent group and slightly lower than that in the control group, although no statistically signicant differences were present (p > 0.05). The expression of Bax in the PLLA group increased slightly 5 days aer the operation compared to This journal is © The Royal Society of Chemistry 2020 RSC Adv., 2020, 10, 15079-15090 | 15083 that 2 days aer the operation, and was slightly higher than that in the control group with no statistically signicant differences (p > 0.05). The expression level of Bcl-2 in the MAO/PLLA/ paclitaxel group was slightly lower than that in the PLLA stent and control groups during the whole experimental period (p < 0.05). The expression level of Bcl-2 in the PLLA group was slightly higher on the h day than on the second day (p < 0.05). However, there was no signicant difference in Bcl-2 expression between the PLLA and control groups (p > 0.05). Serological assessment of rabbit liver and kidney function The changes in liver serum and kidney function aer 8 days of implantation of the MAO/PLLA/paclitaxel-coated Mg-Zn-Y-Nd alloy intestinal stents in rabbits were shown in Table 2. The level of alanine aminotransferase (ALT/GPT) aer implantation was slightly higher than that before the experiment in only one rabbit and slightly lower in the remaining two rabbits. Similarly, the aspartate aminotransferase (AST/GOT) levels decreased slightly in only one rabbit and increased slightly in the remaining two rabbits. Among the indicators of renal function 8 days aer implantation of the MAO/PLLA/paclitaxel-coated magnesium alloy stent into the rabbit intestinal tract, creatinine (Cre) levels in two rabbits were slightly higher than those before the stent implantation, while urea levels in one rabbit were slightly higher than those before the stent implantation. Although serological liver and kidney function indices of some rabbits aer stent implantation were higher than those before stent implantation, the increase was not signicant and the overall change was maintained in the normal range. Discussion Intestinal stenosis is a common surgical disease. It is also a complication of some diseases or surgical operations, including intestinal ulcers, benign and malignant intestinal tumors, Crohn's disease, ulcerative colitis, intestinal tuberculosis, and abdominal surgery, among others. Endoscopy is an important way to treat intestinal stricture, especially colon or rectal stricture. 
Stents are used not only for the treatment of malignant intestinal obstructions, such as obstructing colon cancer, but also for the treatment of benign intestinal conditions, including colonic and anastomotic fistulae, perforation, and inflammatory obstructions. Currently, stents for intestinal stenosis include self-expanding metal stents (SEMS), self-expanding plastic stents (SEPS), and BDS. 30,31 Since 1990, there have been many reports of stent placement for the treatment of advanced obstructive colon cancer showing good success rates with few complications, including 3.76% perforation, 11.8% stent displacement, 7.34% restenosis, and 0.58% cumulative mortality. 32 In eight studies with a total of 199 patients, Thomas et al. found that the migration rate of self-expanding removable stents was 26.4%, with an average of 17 days; within 4-8 weeks, the stent removal and perforation rates were 87% and 1.5%, respectively. 33 Although metal stents have good clinical effects in the treatment of intestinal stenosis, their clinical application causes several common problems, including new stenosis formation and intimal tissue remodeling. 1 Therefore, prevention of intestinal stenosis, good biodegradability, and biosafety are very important for the clinical application of stents.

Biocompatibility of intestinal eluting stents

The magnesium ion is a degradation product of magnesium alloys and an important element in human metabolism; it is the fourth most abundant cation in human plasma. 34 The content of magnesium in bone is about two-thirds of the total amount of magnesium in the human body. The remaining one-third is found in tissue, and about 1-2% of magnesium is in the extracellular fluid. 35 In addition, magnesium ions play an important role in regulating the homeostasis of the human body. They can inhibit the release of calcium from the sarcoplasmic reticulum in response to a sudden influx of extracellular calcium. 36 Early typical symptoms, such as anorexia, nausea, vomiting, and sleepiness, occur when magnesium is deficient. 37 Research has shown that drinking natural mineral water rich in magnesium sulfate seems to be a first-line solution to functional constipation before starting medication. 38 Di et al. used the new degradable magnesium alloy Mg-1Sr to study its biocompatibility in vivo in New Zealand white rabbits. 39 Histological studies did not reveal physiological abnormalities or diseases. It has been reported that appropriate amounts of zinc are very important for regulating the immune response; zinc supplementation has a potential effect on the immune function impairment caused by the decrease in serum zinc concentration with advanced age. 40 Without affecting the normal healing process, the main challenge of stenting in the treatment of coronary heart disease is to prevent restenosis. 41 Therefore, a stent design with good compatibility that prevents intestinal stricture is critical for the treatment of intestinal stricture. In this study, the cytotoxicity test proved that paclitaxel had been successfully loaded into the magnesium alloy stents. After the MAO/PLLA/paclitaxel/Mg-Zn-Y-Nd alloy stents were implanted into rabbits in this study, no significant abnormalities were found in the pathological analysis images of the heart, liver, and kidneys. In addition, serological indicators of liver and kidney function in rabbits showed no significant abnormalities before and after stent implantation.
This may be related to the fact that the stent does not affect the normal metabolism of magnesium ions in rabbits. Xu et al. prepared organic PLLA coatings on a pure Mg substrate by the spin-coating method, and their research demonstrated that PLLA films could significantly improve the SaOS-2 cell cytocompatibility of the alloy. 17 There are also previous reports that excessive magnesium ion concentration or alkaline stress can produce negative stimuli for the cell population. 42 Moreover, a low-magnesium diet may lead to a higher intracellular Ca : Mg ratio, leading to hypertension and insulin resistance. 43 A previous study revealed that although low magnesium intake is related to constipation, high doses of oral magnesium have a laxative effect. 44 This effect of magnesium may be related to the fact that the ions are poorly absorbed from the intestinal cavity and act osmotically to retain water, thereby increasing the fluidity of the luminal content. 45 Paclitaxel is a natural compound isolated from the Pacific yew. Paclitaxel binds β-tubulin in microtubules and induces microtubule polymerization, which arrests cell proliferation at the G2/M phase and finally results in tumor cell apoptosis. 46 Previous studies have shown that stent implantation can cause endothelialization of the surrounding tissue. [47][48][49] Although the time of stent implantation in the intestinal tract is short and there may not be obvious tissue proliferation around the stent, the number of epithelial cells and fibroblasts in the intestinal mucosa tissue increases in the process of intestinal endothelialization. 49 Immunohistochemical detection of cell proliferation and of the expression of related factors was therefore used to indirectly explore the compatibility of the stent with intestinal tissue. In this study, histopathological examination showed that the MAO/PLLA/paclitaxel/Mg-Zn-Y-Nd and PLLA/Mg-Zn-Y-Nd alloy scaffolds did not induce systemic inflammation (p > 0.05), while the immunohistochemical results showed that the degradation products of the MAO/PLLA/paclitaxel/Mg-Zn-Y-Nd alloy scaffolds significantly inhibited the proliferation of intestinal cells. The PLLA/Mg-Zn-Y-Nd scaffold degradation products did not induce significant proliferation of intestinal tissue cells (especially epithelial cells) (p > 0.05). These results suggest that short-term implantation of the MAO/PLLA/paclitaxel/Mg-Zn-Y-Nd alloy scaffolds can significantly inhibit the endothelialization of intestinal epithelial cells, while the PLLA/Mg-Zn-Y-Nd alloy stents were not conducive to the formation of intestinal endothelialization. It has been reported that the use of SEPS causes low epithelial proliferation and makes the stents easy to remove. 50 In a study of drug-loaded cardiovascular stents, it was found that the proliferation of rat vascular smooth muscle cells (SMCs) was successfully inhibited when paclitaxel was released from a poly(carbonate urethane) urea coating. 51 Stephen et al. found that a magnesium-based drug delivery system had a stronger long-term inhibitory effect on the proliferation of SMCs cultured in vitro compared to stainless steel, which may be related to the degradation of the magnesium alloy matrix greatly accelerating and improving the pharmacokinetics of drug release in vitro. 52 In this study, five rabbits were found to have displaced stents at the time of removal; no other organ damage, perforation, or obstruction was found. Geiger et al.
analyzed 26 original articles in which 63 patients received long-term treatment with SEMS; severe complications were found, including bladder perforation, ileostomy, massive hemorrhage, and obstruction. 53 The clinical data for self-expanding metal colon stent implantation in one hospital from January 1996 to May 2012 were retrospectively analyzed. The surgical technique success rate was 92.26% (n = 441), with a clinical success rate of 78.45% (n = 375) and a complication rate during follow-up of 18.5%. The incidence of complications with stainless steel stents was higher than that with nickel-titanium alloy stents. 54

Degradation characteristics of intestinal eluting stents in a rabbit model

It is well known that traditional medical and scaffolding materials such as stainless steel and titanium alloy are non-absorbable. Previous studies have found that long-term retention of these non-absorbable materials in the human body can cause damage to human health. 55 Therefore, magnesium alloy scaffolds with biodegradable properties are of great significance for the treatment of intestinal diseases. Magnesium-based substrates easily react in aqueous media in the following manner: 56

Mg(s) + 2H₂O(l) → Mg(OH)₂(s) + H₂(g)

Mg(OH)₂(s) → Mg²⁺(aq) + 2OH⁻(aq)

Hydrogen and insoluble magnesium hydroxide are produced in this reaction, which directly changes the magnesium ion concentration, pH, and other biochemical conditions. However, these changes may have some impact on human health.
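As a rough worked example of why this reaction matters in vivo, the sketch below estimates the hydrogen volume evolved per gram of corroded magnesium from the stoichiometry above. It is an idealized calculation, assuming complete conversion and ideal-gas behavior at body temperature, which the paper does not quantify.

# Hydrogen evolution from Mg corrosion, assuming the overall reaction
# Mg + 2 H2O -> Mg(OH)2 + H2 goes to completion (idealized estimate).
M_MG = 24.305    # g/mol, molar mass of magnesium
R = 8.314        # J/(mol K), gas constant
T = 310.15       # K, body temperature (~37 C)
P = 101325.0     # Pa, ~1 atm

mass_mg = 1.0                      # g of magnesium corroded
n_h2 = mass_mg / M_MG              # mol H2 (1:1 stoichiometry with Mg)
v_h2_L = n_h2 * R * T / P * 1e3    # ideal-gas volume in litres

print(f"{mass_mg} g Mg -> {n_h2:.3f} mol H2 ~= {v_h2_L:.2f} L at 37 C")
# -> roughly 1 L of H2 gas per gram of Mg, which is one reason the
#    degradation rate of magnesium implants must be controlled.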
In this study, the macroscopic corrosion morphology of Mg-Zn-Y-Nd alloy intestinal stents implanted into New Zealand white rabbits showed that the bare stents could be located only after 2 days of implantation, and the PLLA-coated stents could not be found 8 days after implantation. The MAO/PLLA/paclitaxel-coated stents were intact at 2, 5, and 8 days, and no longer present in the intestinal tract at 12 days. This phenomenon may be related to stent structure collapse caused by stent degradation after 8 days of implantation in the rabbits and the eventual removal of the stent from the intestine. The bare scaffold had completely degraded within 5 days of implantation, most likely because a scaffold without a protective layer comes into direct contact with the intestinal environment and has poor corrosion resistance, thus accelerating scaffold degradation. In the PLLA/Mg-Zn-Y-Nd alloy scaffold group, a small number of fibers on the scaffold began to break, while the amount of surface degradation products increased; the PLLA-coated magnesium stents were not found in the rabbit intestines after 8 and 12 days of stent implantation. This indicates that the MAO/PLLA/paclitaxel/Mg-Zn-Y-Nd alloy scaffolds have stronger corrosion resistance than the PLLA/Mg-Zn-Y-Nd alloy scaffolds. SEM analysis indicated that the microwire surface of the MAO/PLLA/paclitaxel scaffolds was more complete and cleaner than that of the other two groups, as shown in Fig. 3. The EDS results in Fig. 4 showed that the content of P and Ca in the MAO/PLLA/paclitaxel stent group was significantly lower than that in the other two groups, which is related to the better corrosion resistance of the MAO/PLLA/paclitaxel stents in the intestinal environment and the smaller amount of corrosion products. In conclusion, these results suggest that treatment with the MAO/PLLA/paclitaxel coating can significantly improve the corrosion resistance of the magnesium alloy. Previous studies have confirmed that a PLLA coating on the alloy surface can significantly delay alloy degradation, thereby improving the corrosion resistance of alloy materials. 52,57 Application of micro-arc oxidation can significantly enhance the corrosion resistance of biodegradable implants. 58 In clinical application, tissue self-healing usually takes 2 weeks to a month, and a study in a rat model showed that the internal environment of the wound returned to normal when the polymer scaffold degraded within 14 days. 59 However, the stent in this study was completely degraded within 12 days, a little shorter than expected. Although the degradation is faster than expected and the current stent cannot yet meet clinical needs, the degradation characteristics reported here provide a reference for developing stents with higher corrosion resistance and a degradation rate suited to the clinical needs of intestinal stenting.

Current issues associated with magnesium alloy applications in clinical medicine

This study showed that the MAO/PLLA/paclitaxel/Mg-Zn-Y-Nd alloy scaffolds had better corrosion resistance than the PLLA/Mg-Zn-Y-Nd alloy scaffolds. One of the necessary conditions for a good intestinal scaffold is a stable degradation rate and structural stability during wound healing. When researchers implanted intestinal stents in rats, they found that the wounded tissue began to return to normal after 14 days. 60 In this study, the stent completely lost its supporting effect after 12 days. The rapid degradation of the stent was not sufficient to support the healing of an intestinal obstruction, which is related to the high chemical reactivity of magnesium. Especially in the complex intestinal environment, the contact between the magnesium alloy and intestinal organic and inorganic substances accelerates its degradation. However, an appropriate degradation rate could be achieved by creating a magnesium alloy stent that maintains a stable structure and sufficient support during tissue healing. Therefore, it is necessary to further improve the degradable magnesium alloy scaffold to delay its degradation rate. At present, there are many common ways to improve corrosion resistance by modifying the alloy surface, such as micro-arc oxidation to form a ceramic layer, impregnation with macromolecular polymers, anodic oxidation treatment, steam treatment, alkali heat treatment, fluoride treatment, ion implantation, and physical vapor deposition. 61 However, more research on controlling the magnesium alloy degradation rate is needed. Unremitting efforts have to be made to improve the corrosion resistance of magnesium alloy stents in the intestinal environment so that they can support the intestinal structure and adapt to the complex and changeable intestinal environment, thus improving the clinical treatment of intestinal obstruction diseases.

Conclusion

In this study, Mg-Zn-Y-Nd alloy filaments were woven into reticulated scaffolds with an inner diameter of 8 mm and a length of 20 mm using a monofilament integral braiding method. Magnesium alloy stents exposed to different treatments were implanted into the intestinal tracts of New Zealand white rabbits, and the degradation and supporting properties of the intestinal stents coated with paclitaxel were investigated. The results showed that the MAO/PLLA/paclitaxel/Mg-Zn-Y-Nd alloy intestinal stents had better corrosion resistance than the PLLA/Mg-Zn-Y-Nd alloy intestinal stents.
When the MAO/PLLA/paclitaxel/Mg-Zn-Y-Nd alloy intestinal stents were implanted into rabbits for 8 to 12 days, the stents degraded substantially in vivo, which led to stent structure collapse and discharge from the body. Pathological observations demonstrated that the MAO/PLLA/paclitaxel double-coated drug-eluting Mg-Zn-Y-Nd alloy stents had no significant toxicity to important organs and that the intestinal mucosa around the stent gradually returned to normal within two weeks. There were no significant differences in serological parameters at the different degradation stages, and the scaffolds exhibited good biocompatibility. Immunohistochemical evaluation of the local intestinal tissue around the stent showed that the PLLA/paclitaxel-coated intestinal stent could inhibit intestinal tissue proliferation. This mechanism provides theoretical support for the future design of treatments for intestinal stenosis caused by benign, malignant, and inflammatory hyperplasia.

Data availability statement

The raw/processed data required to reproduce these findings cannot be shared at this time, as the data also form part of an ongoing study.

Conflicts of interest

There are no conflicts to declare.
2020-04-23T09:11:35.936Z
2020-04-16T00:00:00.000
{ "year": 2020, "sha1": "5da817cc08c1f7dce487da44254dee2949093371", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2020/ra/c9ra10156j", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "44af39c43c7ecb8753d158f2341445392db032d9", "s2fieldsofstudy": [ "Materials Science", "Biology" ], "extfieldsofstudy": [ "Materials Science" ] }
56237689
pes2o/s2orc
v3-fos-license
Dynamics of Ultrafast Laser Ablation of Water

Ultrafast laser ablation is an extremely precise and clean method of removing material, applied in material processing as well as in medical applications. Due to its violent nature, it also tests our understanding of the interplay between optics, condensed matter physics and fluid dynamics. In this manuscript, we experimentally investigate the femtosecond-laser-induced explosive vaporization of water at a water/gas interface on the micron scale through several time-scales. Using time-resolved microscopy in reflection mode, we observe the formation of a hot electron plasma, an explosively expanding water vapor and a shockwave propelled into the surrounding gas. We study this fs-laser-induced water vapor expansion dynamics in the presence of different atmospheres, i.e. Helium, air and tetrafluoroethane. We use the Sedov-Taylor model to explain the expansion of the water vapor and estimate the energy released in the process.

Introduction

In many applications in research [1][2][3][4][5], technology [6][7][8] and healthcare [9][10][11], ultrafast lasers are employed to cut and remove material as well as to locally modify the chemical, structural and optical properties of the target. In even more extreme examples, pulsed lasers are used to trigger nuclear fusion [12] and to generate high harmonics in attosecond science [13] or, more practically, to study nerve regeneration in vivo after fs-laser axotomy [14,15]. The unprecedented spatial resolution achieved in all these studies is partly due to the simultaneously ultrafast character and non-linear nature of the light-matter interaction, which leads to a reduced heat-affected zone and to minimal collateral damage when compared to the outcome achieved using longer pulses [16]. These outstanding features allow researchers not only to accurately modify materials but also to investigate the fast and ultrafast mechanisms involved during the process of modification, such as phase transitions or bio-chemical and chemical reactions [1,[17][18][19]. In this way, ultrashort laser pulses, with a duration that ranges from a few to hundreds of femtoseconds (1 fs = 10⁻¹⁵ s), can be used to probe the ultrafast chain of processes triggered by an excitation laser pulse in a so-called pump and probe system [17,19-23]. Particularly spectacular, and very relevant for biological applications and laser-based surgery, is the ablation of aqueous media [3,9,15,16,24]. Crucial steps in this inherently multi-scale process are the absorption of laser energy by the aqueous medium, resulting in local heating, followed by the evaporation of the liquid, which in turn does work against the tissue in which it is embedded [1,4,10,16,[24][25][26][27][28]. The use of fs-laser pulses is particularly interesting for such applications, as the absorption in that case is very nonlinear and therefore is limited to a region that is typically smaller than the focal volume [19,22,29]. The extreme optical nonlinearity makes the description of the absorption of the ultra-short laser pulse very challenging, but on the other hand leads to a very attractive separation of timescales [29,30]. In the first picosecond, a hot electron plasma is created. Within the first 10 picoseconds, thus after the laser pulse is gone, this plasma equilibrates in temperature with the surrounding liquid. As a result, the liquid becomes superheated and explosively evaporates.
The interaction dynamics of pulsed lasers with water has been investigated using lateral imaging via time-resolved shadowgraphy or time-resolved scatterometry, for instance looking at the propagation of laser-induced shockwaves and cavitation bubbles [3,4,[31][32][33][34][35][36][37][38]. Lateral imaging provides a wealth of information on the aftermath at long time-delays (ns-µs), at the expense of losing the lateral spatial resolution needed to study the microscopic features of the initial electron plasma under strong focusing conditions [3], relevant for instance to fs-laser cell surgery and neurosurgery [9,39]. In this work, we study femtosecond laser ablation at a water/gas interface using time-resolved imaging with submicron optical resolution in reflection mode, from 10 femtosecond to 100 nanosecond time-scales. The experiment is carried out inside three different gaseous atmospheres with increasing molecular weights and densities at room temperature, i.e. Helium (4 g/mol), air (29 g/mol) and tetrafluoroethane (102 g/mol). The already mentioned separation of time-regimes allows us to compartmentalize the multi-scale problem and separately observe the light-water interaction (fs-ps), the evaporation process (10 ps) and the supersonic vapor expansion dynamics (ns). We interpret our experimental observations with a Sedov-Taylor model to extract an estimate of the energy carried by the expanding vapor.

Fig. 1(a) shows a detailed scheme of the collinear pump and probe setup. The laser source used during the experiments is a femtosecond regenerative amplifier (Hurricane, Spectra-Physics) that produces 150 fs laser pulses at a wavelength of 800 nm. Using a λ/2 waveplate and a polarizing beam cube, we split the 800 nm laser pulse into two sub-pulses. The most energetic pulse runs over a fixed delay line to the microscope objective, while the weaker fraction runs over an automated delay line (Newport Co.) to tune the pump-probe delay time up to a maximum of 1.4 ns. For larger delays, we introduce longer detours in the probe optical path to obtain delays up to 100 ns. Before the delay line, a BBO (beta barium borate) crystal is used to frequency-double the probe pulse and an edge filter (F) is used to block the remaining 800 nm light. The 400 nm probe light is then spatially and temporally overlapped with the 800 nm pump light on a pellicle beam splitter (PBS) before both collinearly enter an infinity-corrected microscope objective (Nikon CFI60, 100X, NA = 0.8). Additional lenses in both optical paths ensure that the 800 nm pump (one-to-one telescope, T0) is strongly focused by the objective, while the 400 nm probe is focused (L) in the back focal plane of the objective, resulting in wide-field illumination suitable for imaging the surface of the sample over an area of 45 × 45 µm² onto the EMCCD camera chip (Andor, iXon 885). The reflected light is collected by the objective and used to image the water/air interface through a 300 mm tube lens (TL), as shown in Fig. 1(a). A filter (F) that only transmits the 400 nm light (Δλ_FWHM = 10 nm, λ₀ = 400 nm) is used to prevent 800 nm pump light and plasma emission from reaching the camera. The optical path between the tube lens/filter and the camera chip is tubed in order to minimize the collection of stray ambient light.
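As a quick sanity check on the delay-line geometry described above, the sketch below converts mechanical stage travel to optical delay. It assumes a standard double-pass (retroreflector) stage layout; the paper does not state the stage configuration, so the factor of two is our assumption.

# Optical delay produced by a mechanical delay stage.
# Assumes a double-pass (retroreflector) layout, so the optical path
# changes by twice the stage travel; this layout is an assumption.
C = 299_792_458.0    # m/s, speed of light

def delay_ps(stage_travel_mm: float) -> float:
    """Pump-probe delay in picoseconds for a given stage travel in mm."""
    return 2 * stage_travel_mm * 1e-3 / C * 1e12

print(f"{delay_ps(1.0):.2f} ps per mm of travel")            # ~6.67 ps/mm
print(f"travel for 1.4 ns: {1.4e-9 * C / 2 * 1e3:.0f} mm")   # ~210 mm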
The delay control unit of the Pockels cells is synchronized with two optomechanical shutters (S1, S2) to ensure single-shot experiments either with the probe pulse only (S1 open) or with both pump and probe pulses (both S1 and S2 open). For each temporal delay, several images are taken, each corresponding to a single shot of the laser while both pump and probe pulses are present. Additionally, several reference images are acquired. It is worth noting that, unlike in the case of solid targets, the water surface self-restores a few milliseconds after strong laser excitation. This prevents incubation effects and allows us to record several pump-probe images at the same spot without re-focusing or moving the sample to a fresh area. The beam waist (1/e²) of the focused pump beam at the sample surface was calibrated to be 3.1 µm using the Liu method [40].

Sample preparation

In the experiment we use Milli-Q demineralised water in a 25 mL beaker. This beaker is placed inside a container in which the microscope objective is introduced through a closely fitting hole in the lid that is sealed using rubber O-rings (see figure 1(b)). The isolated air inside is then allowed to saturate with water vapor for a few minutes, to limit the speed with which the water level lowers due to evaporation to less than 1 µm/hour, reducing the need to periodically refocus on the surface of the target. The gaseous atmosphere of the cuvette can be controlled using a circulating system with a supply and an exhaust. For the experiments we use three different gases, namely Helium, air and 1,1,1,2-tetrafluoroethane (see Table I).

Figure 2. (a) Typical transient differential reflectivity maps of the water/air interface obtained for a pump pulse fluence of 25 J/cm². The images share the same lateral scale (20×20 µm²) and are represented using the same color scale (which saturates the first four images). In the first few picoseconds, we see a strong increase of the reflectivity. After 2 picoseconds, we see the increase in reflectivity turn into a ring, which fades for longer pump-probe delays. Starting at approximately 10 picoseconds, we see a reduced differential reflectivity starting in the center. On the timescale of 100 ps, the dark region radially expands and a bright rim develops around it. (b) Time-line during the ablation of water. The time-delays at which the images in (a) are taken are indicated by the arrows labelled a1 to a18.

Ultrafast ablation of water through the time-scales

Using the experimental setup, we investigate the transient reflectivity of a water/gas interface during ultrafast laser ablation of water under tight focusing conditions, from 10 fs to 100 ns. Typical examples of such images, measured in an air atmosphere, are shown in Fig. 2(a), whose time delays are chosen to be equally spaced on a logarithmic scale, as presented in Fig. 2(b). Here, three main regimes are observed: excitation and relaxation of a dense electron plasma (first row), evaporation onset (second row) and water vapor expansion (third row). We observe a delay of a few tens of picoseconds before the laser-induced water vapor noticeably starts to expand laterally, which agrees with the expansion dynamics observed inside bulk water [3] and at the surface of a water jet [28] under similar focusing conditions. In Figs. 3(a)-(e), we show images representative of the different time-separated regimes, combined with their radial averages (f)-(j) and schematics to discuss the corresponding physical processes (k)-(o).
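The Liu method mentioned above extracts the Gaussian beam waist from the fluence dependence of the ablated-spot size via D² = 2w₀² ln(F₀/F_th), so D² is linear in the logarithm of the pulse energy. A minimal fitting sketch, with made-up crater diameters rather than the authors' calibration data, could look like this:

# Liu-method beam-waist calibration: for a Gaussian beam the squared
# ablation-crater diameter follows D^2 = 2*w0^2 * ln(F0/Fth), i.e. it is
# linear in ln(pulse energy). Slope -> waist, intercept -> threshold.
import numpy as np

# Hypothetical crater diameters (um) vs pulse energy (uJ); placeholder
# numbers for illustration only.
energy_uJ = np.array([0.5, 0.8, 1.2, 2.0, 3.5])
diam_um   = np.array([1.9, 2.8, 3.5, 4.3, 5.2])

slope, intercept = np.polyfit(np.log(energy_uJ), diam_um**2, 1)
w0 = np.sqrt(slope / 2)              # 1/e^2 beam waist (um)
E_th = np.exp(-intercept / slope)    # threshold pulse energy (uJ)
print(f"w0 ~= {w0:.2f} um, threshold energy ~= {E_th:.2f} uJ")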
Initially, the energy of the laser is coupled into the irradiated water via strong-field ionization [41], i.e. multiphoton and tunneling ionization. Afterwards, the seed electrons that are excited via non-linear ionization processes gain kinetic energy by means of inverse bremsstrahlung, leading to impact ionization [10,28,30]. Consequently, during the first picosecond, we observe a strong increase of the surface reflectivity that we attribute to the formation of a well-localized dense electron plasma, as illustrated in Fig. 3(k). The maximum reflectivity increase that we observe experimentally is ΔR = 0.16, whereas the reflectivity of unexcited water is R₀ = 0.02. According to the Drude model for a free electron gas, this corresponds to an electron density on the order of 10²² cm⁻³. These findings are consistent with the transient reflectivity contrast reported for solid crystalline and glassy dielectric targets [42,43], whose electronic and linear optical properties are comparable to those of water. After 2 picoseconds, the reflectivity in the center starts to drop. We attribute this reduction in reflectivity to the fact that the water in the center starts to evaporate, locally reducing the density of both the water and the electron plasma, as illustrated in Fig. 3(l). We therefore use the appearance of this reduction to estimate the ablation threshold to be 8.1 J/cm², considering the calibrated 1/e² beam waist (see the previous section and [40]) and the Gaussian distribution of the laser. Curiously, we reproducibly observe a periodic azimuthal structure in the ring of increased reflectivity for delays that range from 8 ps to 40 ps, as shown in Fig. 3(c). The orientation of this periodic structure remains unaltered for different irradiation experiments at a given time-delay, indicating a deterministic behaviour. As of yet, we have no explanation for the cause of this structure. As time goes on, water further away from the center also evaporates, expanding upwards, as illustrated in Fig. 3(m), causing the corrugated ring of increased reflectivity to fade away for longer pump-probe delays. Starting at approximately 10 picoseconds, we see an overall reduced differential reflectivity in the affected area. We attribute this to the fact that the hot water vapor above the water/air interface can act as a local anti-reflection coating that absorbs the light of the probe beam [3,28]. From 100 ps onwards, we observe that the area of reduced reflectivity radially expands and develops a sharp bright rim. We can understand that this happens when the expanding vapor cloud becomes larger in radius than the initially laser-excited area, as illustrated in Fig. 3(n). The bright edge in Figs. 3(d),(e),(i),(j) can be interpreted as the projection of the contact discontinuity between the supersonically expanding vapor and the surrounding air. The fact that the contact discontinuity is sharply visible means it must be widest near the focal plane, i.e. the water/gas interface. This strongly suggests that the expansion is cone-like, as illustrated in Fig. 3(n),(o), which in turn indicates a highly supersonic expansion. This is supported by the fact that we observe a shockwave in the surrounding air, as can be seen in Figs. 3(e),(j).
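To make the Drude-model estimate above concrete, the sketch below computes the normal-incidence reflectivity of laser-excited water as a function of electron density at the 400 nm probe wavelength. The background permittivity of water and the physical constants are standard, but the electron collision time τ is an assumed value (of order 1 fs) that the text does not specify, so the result is an order-of-magnitude check rather than a reproduction of the authors' analysis.

# Drude estimate: reflectivity of plasma-excited water at the 400 nm probe.
# eps(w) = eps_b - wp^2 / (w^2 + i*w/tau); R = |(n-1)/(n+1)|^2 at normal
# incidence. tau (electron collision time) is an assumed parameter here.
import numpy as np

E0, ME, QE, C = 8.854e-12, 9.109e-31, 1.602e-19, 2.998e8
LAM = 400e-9                  # probe wavelength (m)
W = 2 * np.pi * C / LAM       # probe angular frequency (rad/s)
EPS_B = 1.33**2               # background permittivity of water (n = 1.33)
TAU = 1e-15                   # assumed collision time (~1 fs)

def reflectivity(n_e_cm3: float) -> float:
    """Normal-incidence reflectivity for electron density n_e in cm^-3."""
    wp2 = (n_e_cm3 * 1e6) * QE**2 / (E0 * ME)    # plasma frequency squared
    eps = EPS_B - wp2 / (W**2 + 1j * W / TAU)    # Drude permittivity
    n = np.sqrt(eps)                              # complex refractive index
    return abs((n - 1) / (n + 1))**2

print(f"unexcited water: R = {reflectivity(0):.3f}")   # ~0.02, matching R0
for ne in (5e21, 1e22, 2e22):
    print(f"n_e = {ne:.0e} cm^-3 -> R = {reflectivity(ne):.2f}")
# With this tau, R reaches ~0.18 (= R0 + dR) for n_e slightly above
# 1e22 cm^-3, consistent with the order of magnitude quoted in the text.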
Atmosphere influence during ultrafast ablation of water

The expansion of the water vapor also depends on the atmosphere that surrounds the irradiated area. Fig. 4(a) presents snapshots of the transient reflectivity during the ablation of water in the presence of the different gases, from 500 ps to 10 ns. As explained in Fig. 3, the dark disk in the center (r_v,max ≈ 5.5 µm) is attributed to the rapidly expanding water vapor, whereas the outermost dark and bright ring (r_s,max ≈ 24 µm) corresponds to the shockwave propelled into the surrounding gas (see the white arrows in Fig. 4(a), "air" at 10 ns). The increase in the lateral size as well as the sharpness of the vapor front are clearly influenced by the surrounding atmosphere. In the presence of Helium and air, the vapor resembles a homogeneous dark circle with a distinct edge, while in a tetrafluoroethane atmosphere, the dark area is heterogeneous and lacks a sharp edge. Although the images are similar in appearance, Fig. 4(b) shows that the radial expansion in air achieves a slightly larger radius than in He for long delays up to 100 ns. For the first two nanoseconds, the same expansion velocity is observed in the presence of both gases (≈ 810 m/s), which corresponds to a supersonic expansion for air (v_s = 343 m/s, where v_s is the speed of sound) and a subsonic expansion for Helium (v_s = 972 m/s). This initial expansion speed explains why a shockwave is experimentally observed in an air atmosphere but not in a Helium environment. After 2 ns, the radial increase slows down more rapidly in a He environment than in the air environment. We use the Sedov-Taylor formula, r(t) = (E t²/ρ)^(1/5), to estimate the energy released during the expansion of the water vapor in the presence of air and He (see Fig. 4(b)), where ρ is the density of water and E stands for the energy. Considering a hemispherical expansion, we estimate the energies released in the air and He atmospheres to be 618 pJ and 360 pJ, respectively. To test the validity of using the Sedov-Taylor model, we additionally fit a power-law formula r ∝ t^α, retrieving exponents of 0.30 and 0.26 for air and Helium, respectively, as shown in Fig. 4(b). This shows that the expansion is not fully self-similar. This lack of self-similarity can be qualitatively understood, as we expect that in both atmospheres the superheated water initially undergoes a vertical supersonic expansion [39,44]. During the expansion, the expanding vapor gathers mass by sweeping up the gas from the atmosphere, eventually leading to a transition from a vertical expansion to a more isotropic expansion. In an atmosphere with a low mass density (ρ_air ≈ 7ρ_He), this transition will occur later; thus the system will favour vertical expansion over lateral expansion as compared to a system with an atmosphere of high mass density, in correspondence with the results shown in Fig. 4(b). Fig. 4(a) also shows that a gaseous atmosphere with higher density, i.e. tetrafluoroethane (ρ_TFE ≈ 4ρ_air ≈ 24ρ_He, see Table I in the experimental section), presents a shock with higher optical visibility. The graph in Fig. 4(c) shows that the shock radius as a function of time has a similar behaviour in both atmospheres.
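A minimal sketch of such an energy estimate, fitting the Sedov-Taylor relation r(t) = (E t²/ρ)^(1/5) to front-radius data and comparing it with a free power law, could look like the following; the data points are synthetic placeholders, not the measured radii behind Fig. 4.

# Fit the Sedov-Taylor relation r(t) = (E t^2 / rho)^(1/5) to vapor-front
# radii, and compare with a free power law r = A t^alpha as a
# self-similarity check. The data below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

RHO = 1000.0    # kg/m^3, density of water

t = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0]) * 1e-9   # delays (s)
r = np.array([1.6, 2.1, 2.8, 4.0, 5.0, 6.2]) * 1e-6     # radii (m), made up

def sedov(t, E):
    return (E * t**2 / RHO) ** 0.2

def powerlaw(t, A, alpha):
    return A * t**alpha

(E_fit,), _ = curve_fit(sedov, t, r, p0=[1e-8])
(A_fit, alpha_fit), _ = curve_fit(powerlaw, t, r, p0=[1e-3, 0.4])

print(f"Sedov-Taylor energy estimate: E = {E_fit:.2e} J")
print(f"free power-law exponent: alpha = {alpha_fit:.2f}")
# An exponent well below the self-similar value 2/5 signals, as in the
# text, that the expansion is not fully self-similar.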
Conclusion

In conclusion, we experimentally study ultrafast laser ablation at a water/gas interface using time-resolved microscopy. We present the changes in the transient reflectivity over several time-scales, from 10 fs to 100 ns. Overall, our work explores the initial laser-water interaction (femtoseconds), the intermediate extreme thermodynamic state of the system (picoseconds) and the subsequent compressible fluid dynamics (nanoseconds). As an outlook, we propose to further link and explore the long-run mechanical behaviour, i.e. surface waves, which will require delays far into the microsecond regime. We additionally test the influence of the atmosphere on the ablation dynamics, finding that the propulsion of a shock in the surrounding gas depends on the gas's speed of sound and molecular weight. Moreover, we find that the atmosphere influences the way the laser-induced vapor expands during the first 100 ns, following a Sedov-Taylor power-law expansion with different exponents, which indicates that the process is not self-similar. The understanding of this concatenation of physical processes has high impact and tremendous potential in the field of laser nano-surgery (i.e. cell, ocular and neurosurgery) [9,39], since these processes underlie photomechanical damage. As the ablation of water involves fewer phase transitions than the ablation of solid samples, our study can contribute to further research on the ultrafast laser ablation of more complex systems, such as nanoparticle synthesis via ablation of immersed targets [45] or EUV-light generation for nanolithography via tin droplet ablation [46].
2018-10-16T12:10:38.000Z
2018-10-16T00:00:00.000
{ "year": 2018, "sha1": "94497cc60ef75ae072dd4d20edcdd90c614b9593", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "94497cc60ef75ae072dd4d20edcdd90c614b9593", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
244488304
pes2o/s2orc
v3-fos-license
Scalable Learning for Optimal Load Shedding Under Power Grid Emergency Operations

Effective and timely responses to unexpected contingencies are crucial for enhancing the resilience of power grids. Given the fast, complex process of cascading propagation, corrective actions such as optimal load shedding (OLS) are difficult to attain in large-scale networks due to computation complexity and communication latency issues. This work puts forth an innovative learning-for-OLS approach by constructing the optimal decision rules of load shedding under a variety of potential contingency scenarios through offline neural network (NN) training. Notably, the proposed NN-based OLS decisions are fully decentralized, enabling individual load centers to quickly react to the specific contingency using readily available local measurements. Numerical studies on the IEEE 14-bus system have demonstrated the effectiveness of our scalable OLS design for real-time responses to severe grid emergency events.

This work has been supported by NSF Grants 1802319 and 2130706.

I. INTRODUCTION

Fast mitigation of power imbalance and operational limit violations during emergency events is of great importance for enhancing the resilience of power grids. To prevent potential cascading failures, load shedding is a commonly used emergency response action for adjusting the system operating point. Unlike in normal operations, the decision making for load shedding under severe contingencies is timing-critical. Due to the computational complexity and communication latency concerns arising in attaining load shedding solutions, machine learning (ML) methods are uniquely positioned to enable timely emergency grid services thanks to their superior performance in real-time prediction. To quickly restore power balance, traditional load shedding schemes perform a uniform percentage of reduction for all load centers based on the electric frequency deviation [1]; see also the NERC standard [2]. This proportional reduction is easy to implement, but fails to account for the heterogeneous effects of the contingency scenario across the grid, such as severe congestion at certain locations. Recent advances in optimization-based load shedding schemes (e.g., [3]-[5]) can effectively mitigate the potential risks of cascading failures by strategically targeting locational congestion or stability concerns. Nevertheless, solving the resultant optimal load shedding (OLS) problem in real time can be challenged by the underlying nonlinearity of the AC power flow model. Recently, there has been a surge of interest in adopting ML methods for power system decision making under normal conditions, particularly for the AC optimal power flow (AC-OPF) problem; see e.g., [6]-[8]. Similarly, to accelerate the AC-OLS solution, [9] has proposed to learn the percentage ratio of load shedding from the system-wide contingency information. In addition, a safe reinforcement learning approach has been developed in [10] to predict the dynamic load shedding policy from the overall system state, thus requiring grid-wide information. Although a centralized learning framework is suitable for normal grid conditions, it fails to promptly react to the contingency situation due to the associated communication and response times. Therefore, existing ML-based solutions cannot cope with real-time OLS needs, where it is critical to implement timely corrective actions at distributed load centers.
This paper aims to develop a scalable learning-for-OLS framework such that individual load centers can predict their own optimal decisions in a decentralized fashion. To this end, we first formulate the OLS problem under the AC power flow model, as an extension of the AC-OPF. By determining the respective amount of load shedding at each bus, the AC-OLS problem aims to restore the system-wide power balance and mitigate the violations of operational limits. Upon solving this problem under a wide range of loading conditions and contingency scenarios, one can learn the decision rules from input system conditions to target load shedding actions through offline training. In order to attain high accuracy, we adopt the neural network (NN) model to construct nonlinear, expressive mappings from real-time measurements. One notable feature of our proposed approach is the decentralized design of the NN-based decision rules, which are constructed solely from locally available measurements at each load center. Thanks to the scalability of the decentralized information sharing, the offline training time needed to obtain the individual decision rules is greatly reduced compared to a full-feedback design. During online emergency operations, these decision rules further enable each load center to promptly react to the specific contingency from readily available local data. This paper is organized as follows. Section II formulates the centralized AC-OLS problem. Section III presents the NN training process for the proposed decentralized OLS framework. Numerical tests on the IEEE 14-bus system are provided in Section IV to demonstrate the accurate prediction performance of the proposed solutions to line outage contingencies, and the paper is concluded in Section V.

II. OPTIMAL LOAD SHEDDING (OLS) PROBLEM

We formulate the optimal load shedding (OLS) problem based on the nonlinear AC power flow. Consider a power grid with N buses and L lines collected in the sets N and L, respectively. Let Y = G + jB denote the network admittance matrix, where G and B are its real and imaginary parts, respectively. For each bus i, the complex power output from its connected generation is denoted by p_g^i + j q_g^i, and the load demand by p_d^i + j q_d^i. In addition, let V_i and θ_i denote the bus voltage magnitude and angle, respectively. Thus, the voltage phasor for each bus can be represented in polar form as V_i∠θ_i. Under the AC power flow, the complex power flow on line (i, j) ∈ L can be represented as [11]:

S_ij = p_ij + j q_ij = (V_i∠θ_i) [y_ij (V_i∠θ_i − V_j∠θ_j)]*,   (1)

with y_ij the series admittance of line (i, j), where (·)* denotes the conjugate of a complex number.
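As a small numerical illustration of the line-flow relation (1), under the simplifying assumption of a series-admittance-only line model (shunt and charging elements ignored), the following sketch evaluates S_ij for made-up per-unit voltage phasors:

# Complex power flow over a line (i, j) from bus voltage phasors,
# per the series-admittance form S_ij = V_i * conj(y_ij * (V_i - V_j)).
# Shunt/charging elements are ignored in this simplified sketch.
import cmath

def line_flow(V_i: complex, V_j: complex, y_ij: complex) -> complex:
    """Apparent power (p.u.) injected into line (i, j) at bus i."""
    return V_i * (y_ij * (V_i - V_j)).conjugate()

# Illustrative per-unit values (not from the paper's test case):
V_i = cmath.rect(1.02, 0.0)     # 1.02 p.u. at 0 rad
V_j = cmath.rect(0.98, -0.05)   # 0.98 p.u. at -0.05 rad
y_ij = 1 / (0.01 + 0.08j)       # series impedance 0.01 + j0.08 p.u.

S = line_flow(V_i, V_j, y_ij)
print(f"p_ij = {S.real:.4f} p.u., q_ij = {S.imag:.4f} p.u., |S| = {abs(S):.4f}")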
The AC-OLS problem is formulated similarly to the AC optimal power flow (AC-OPF). The latter problem determines the optimal set-points under normal operations that account for system constraints on generation output, voltage, and line flows. Under emergency operations resulting from large-scale contingencies, the AC-OPF problem can become infeasible due to insufficient resources or transfer capability. Thus, corrective actions such as load shedding are typically carried out to maintain power balance and satisfy the system operation limits. With known load demand p_d^i and q_d^i per bus i, the OLS problem aims to determine the amount of load shedding, denoted by p_s^i and q_s^i, as given by

min  Σ_{i∈N} [c_g^i(p_g^i) + c_s^i(p_s^i)]   (2a)
over {p_g^i, q_g^i}_{i∈N},   (2b)
     {p_s^i, q_s^i, V_i, θ_i}_{i∈N},   (2c)
s.t. p_g^i ∈ [p̲_g^i, p̄_g^i], q_g^i ∈ [q̲_g^i, q̄_g^i], ∀i ∈ N,   (2d)
     V_i ∈ [V̲_i, V̄_i], ∀i ∈ N,   (2e)
     p_s^i ∈ [0, p̄_s^i], q_s^i ∈ [0, q̄_s^i], ∀i ∈ N,   (2f)
     |S_ij| ≤ S̄_ij, ∀(i, j) ∈ L,   (2g)
     p_g^i − p_d^i + p_s^i = Σ_{j∈N} V_i V_j [G_ij cos(θ_i − θ_j) + B_ij sin(θ_i − θ_j)], ∀i ∈ N,   (2h)
     q_g^i − q_d^i + q_s^i = Σ_{j∈N} V_i V_j [G_ij sin(θ_i − θ_j) − B_ij cos(θ_i − θ_j)], ∀i ∈ N,   (2i)
     S_ij given by (1), ∀(i, j) ∈ L.   (2j)

The objective function in (2a) consists of the generation cost c_g^i(·) and the load reduction cost c_s^i(·), which are typically (piecewise) linear or convex quadratic functions. One can also extend the objective to incorporate the cost of reactive power reduction for each load. In general, the two cost functions c_g^i and c_s^i are designed such that, for every bus i ∈ N, the marginal cost of shedding load is higher than that of dispatching generation. This way, the OLS solutions will prefer to fully utilize the generation resources before invoking load shedding. The load shedding cost can also vary across load centers by prioritizing critical loads with much higher costs than the others. Moreover, for the decision variables given in (2b)-(2c), constraints (2d)-(2f) enforce the upper/lower bounds for each of them based upon the resource budget or operational limits. The upper bounds for load shedding in (2f) can be lower than the total demand in case part of the load center constitutes non-dispatchable critical loads. Furthermore, the constraint (2g) enforces the line thermal limit for apparent power flow, while other line limits (e.g., line current, real power flow) can be similarly posed. Finally, the equality constraints (2h)-(2j) correspond to the AC power flow equations. The AC-OLS problem differs from the AC-OPF one mainly in the level of flexibility that each load center can provide. During contingency conditions, corrective actions such as reducing load demand or reconnecting transmission lines are imperative in order to restore the power balance and increase the power transfer capability. It is worth pointing out that even though the OLS problem (2) is formulated to consider load shedding only, it can be generalized to include other corrective actions such as topology optimization and full de-energization of system components; see e.g., [3], [4].

Remark 1 (Solving AC-OLS). As a nonlinear program (NLP), the AC-OLS problem (2) is nonconvex and generally NP-hard, similar to the AC-OPF [12]. Various convex relaxation methods can be adopted to tackle the nonconvexity therein for OPF [13], [14] and similarly for OLS [3]. In general, open-source packages such as MATPOWER [15] and JuMP [16] are available to efficiently solve the OLS problem.
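To make the structure of (2) concrete, here is a heavily simplified load-shedding toy model on a 3-bus network, using a DC (linearized) power flow in place of the full AC equations (2h)-(2j); the network data, the costs, and the cvxpy formulation are our illustrative choices, not the paper's MATPOWER setup.

# Toy DC approximation of the OLS problem (2) on a 3-bus ring network:
# minimize generation + shedding cost subject to nodal balance,
# line-flow limits, and bounds. All data below are illustrative.
import cvxpy as cp
import numpy as np

lines = [(0, 1), (1, 2), (0, 2)]
b = np.array([10.0, 10.0, 10.0])      # line susceptances (p.u.)
f_max = np.array([0.6, 0.6, 0.6])     # line flow limits (p.u.)
p_d = np.array([0.0, 0.9, 0.9])       # demand (p.u.)
p_g_max = np.array([1.0, 0.0, 0.0])   # only bus 0 can generate
c_g, c_s = 1.0, 100.0                 # shedding priced far above generation

# Incidence matrix A: row per line, +1 at from-bus, -1 at to-bus.
A = np.zeros((3, 3))
for k, (i, j) in enumerate(lines):
    A[k, i], A[k, j] = 1.0, -1.0

p_g = cp.Variable(3, nonneg=True)
p_s = cp.Variable(3, nonneg=True)     # load shed per bus
theta = cp.Variable(3)                # bus angles

flow = cp.multiply(b, A @ theta)      # DC line flows
constraints = [
    p_g - (p_d - p_s) == A.T @ flow,  # nodal balance (DC analogue of (2h))
    cp.abs(flow) <= f_max,            # thermal limits (analogue of (2g))
    p_g <= p_g_max,                   # generation bounds (analogue of (2d))
    p_s <= p_d,                       # cannot shed more than demand ((2f))
    theta[0] == 0,                    # slack-bus angle reference
]
prob = cp.Problem(cp.Minimize(c_g * cp.sum(p_g) + c_s * cp.sum(p_s)),
                  constraints)
prob.solve()
print("shed per bus (p.u.):", np.round(p_s.value, 3))
# With a 1.0 p.u. generation cap against 1.8 p.u. of demand, the optimal
# solution sheds the unavoidable 0.8 p.u., split across the load buses.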
III. LEARNING THE SCALABLE OLS STRATEGY

Although the centralized OLS problem can be solved by various optimization solvers, its implementation requires a high rate of communications for the control center to acquire the system-wide information and dispatch the emergency actions. Due to communication latency and quality issues, this centralized framework could affect the timeliness and effectiveness of corrective action responses at individual load centers, both critical for grid emergency operations. To enable fast and powerful load shedding actions during emergency events, we propose to develop a scalable OLS strategy by predicting the OLS decision from locally available measurements in real time. The key idea of the proposed framework is illustrated in Fig. 1, where each load center, such as the one at bus 9, can directly form its own OLS decisions using the voltage phasor and power data collected by local meters. Under the physics-based power flow coupling, each contingency scenario leads to a different level of change in the available measurements at every load center. Thus, the latter can be used to infer the OLS solution for the specific contingency even without centralized information exchange. Of course, the question becomes how to obtain such decentralized decision rules without using system-wide information. To this end, we utilize a basic feedforward neural network to obtain the decision rule f_i(x_i; ϕ_i) → y_i for each load center i, which maps from local measurements x_i to its optimal decisions y_i. Note that ϕ_i denotes the NN parameters that will be specified later and learned during the training process. As the goal is to establish the decision rule f_i(·; ϕ_i) for any possible system operating condition or contingency scenario, the offline training process builds upon generating a high number of instances, each representing a specific loading condition and contingency scenario. As illustrated in Fig. 2, for each instance the corresponding input feature x_i and target decision y_i can be respectively computed by solving the power flow and the OLS problem (2). All these samples of {x_i, y_i} will be used to train the decision rule f_i by learning its parameters ϕ_i. When using f_i for online implementation, each load center can immediately use real-time local data x̂_i to quickly obtain the decision as ŷ_i = f_i(x̂_i; ϕ_i). This constitutes the overall architecture of the proposed scalable OLS design, which leverages extensive offline computation and learning to empower the online decision-making process. For accurate OLS prediction, the local input feature x_i should include all possible real-time measurements, such as

x_i = [p̃_d^i, q̃_d^i, Ṽ_i, {p̃_ij, q̃_ij}_{j:(i,j)∈L}, ω_i],

which represents the local real/reactive power demand information, the post-contingency voltage magnitude, all incident line flows, as well as the electric frequency. Note that symbols with a tilde here correspond to post-contingency values and differ from those in (2). Most of these values can be computed by open-source solvers such as MATPOWER through steady-state power flow simulations, except for the electric frequency ω_i. As the latter is a very informative indicator of the overall power imbalance, one can approximate it using the difference between pre- and post-contingency steady-state power generation [17, Ch. 12]. In the future, we plan to utilize dynamic power flow simulations such as the COSMIC tool [18] to improve the post-contingency modeling. As for the target decisions, we are primarily interested in predicting the load reduction solutions from (2), as given by

y_i = [p_s^i, q_s^i].

Fig. 2: The overall architecture for the proposed scalable load shedding design with extensive offline studies and training to accelerate the online decision making.

General corrective decisions can be predicted as well by extending the AC-OLS problem as mentioned earlier. The proposed decentralized OLS design builds upon the strong correlation between load shedding decisions and local post-contingency data, as observed in the numerical tests later on. As the measurements in x_i can effectively reveal the effects of a contingency at bus i, they are highly indicative of the corresponding optimal decision. For example, both Ṽ_i and ω_i are good indicators of stress on the system loading conditions. Similarly, the line flows {p̃_ij, q̃_ij} indicate the change of power flow patterns due to the contingency. To obtain the OLS decision rule f_i(·; ϕ_i) for load center i, the NN model consists of multiple fully-connected hidden layers between the input x_i and the output y_i. With the first layer z_i^0 = x_i incorporating the input feature, each layer k can be represented as

z_i^k = σ(W^k z_i^{k−1} + b^k),

where the final layer z_i^K → y_i predicts the output target. Thus, the NN parameters in ϕ_i include the weight matrices {W^k} and bias vectors {b^k} for the linear transformation per layer k. Each layer also uses a nonlinear activation function σ(·) to attain a high-dimensional, expressive functional mapping that goes beyond linearity. Common choices of the activation function include sigmoid and ReLU.
To determine ϕ_i, we use the mean squared error (MSE) metric as the loss function to minimize, solved by popular NN training algorithms such as stochastic gradient descent.

Remark 2 (Safety of decentralized OLS). As a corrective action, the OLS decisions need to be effective and safe during online implementation. Such considerations can be further incorporated into the offline training. For example, a weighted MSE metric could penalize prediction errors more heavily at higher amounts of load shedding. In addition, we can generalize it to a risk-aware learning framework using the conditional value-at-risk (CVaR) measure to reduce the worst-case prediction error; see e.g., [19].

IV. NUMERICAL VALIDATIONS

This section presents numerical test results of the proposed decentralized OLS approach on the IEEE 14-bus system. The AC-OLS problem has been implemented using MATPOWER and solved by the primal-dual interior point method. Quadratic objective functions c_i^g(·) and c_i^s(·) have been used for the costs of generation and load shedding, respectively. The feedforward NN has been implemented with the MATLAB® deep learning toolbox using the Bayesian regularization algorithm. The simulations are performed on a regular laptop with an Intel® CPU @ 2.60 GHz and 16 GB of RAM. The IEEE 14-bus system is shown in Fig. 1. It consists of 20 transmission lines and 5 conventional generators located at buses 1, 2, 3, 6 and 8. Given the initial load condition, load shedding may be required more often at certain buses than at others. To this end, we mainly study the load centers located at buses 6, 9, 10, 11, 13 and 14. We consider line outage failures as the initial emergency events, while the proposed method generalizes to other types of system contingencies as well. We have generated all (N-1) contingency scenarios (single), and randomly selected (N-2) and (N-3) contingency scenarios (multiple). For simplicity, the scenarios that lead to system islanding are excluded here and will be studied in the future. To encourage the occurrence of system emergency operations, we increase the original system loading to a total of 469 MW, under which the AC-OPF solution is closer to the infeasibility margin and load shedding is more likely to occur during contingencies. In addition to contingency sampling, we also randomly generate the load demand per bus to be within [95%, 105%] of its nominal value, to reflect its small variation over a minute time-frame. For each contingency scenario, we generated a total of 1000 samples, a majority of which have experienced the occurrence of AC-OLS due to the stress of emergency conditions. To demonstrate the correlation between local measurements and OLS decisions, Fig. 3 plots their relations at bus 14 under the outage of both line 2-3 and line 4-9. The cross-section scatter plots show that the optimal shedding amount increases as the voltage magnitude V_i or the total incident line flow decreases. This is because low voltage indicates system stress, while reduced incident line flow implies a change of power flow pattern, both calling for load shedding. This observation supports using local data to form OLS decisions. Note that for ease of exposition, only the predicted real power reduction amount will be presented. The training is carried out using a feedforward NN with two hidden layers of 15 and 12 neurons, respectively.
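A minimal sketch of the corresponding training step is given below. It is hypothetical: a generic Adam optimizer stands in for the Bayesian regularization algorithm used in the paper, and X, Y denote the offline sample tensors built from {x_i, y_i}.

```python
# Hypothetical training sketch for one load center's decision rule.
import torch

def train_local_rule(model, X, Y, epochs=500, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)  # stand-in optimizer
    loss_fn = torch.nn.MSELoss()                       # MSE metric from the text
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), Y)   # a weighted MSE could be used per Remark 2
        loss.backward()
        opt.step()
    return model
```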
We split the data samples randomly into 80% and 20% for training and testing, performed separately for single and multiple line outages. On average, the offline training process takes around 16.5 and 38.7 seconds for single and multiple line outages, respectively. As the number of local inputs mainly depends on the incident topology of each load center, the scalability of the training process is guaranteed under the decentralized design. Table I lists the occurrence and prediction error of load shedding at the 5 load buses. Fig. 4 also compares the predicted and actual load shedding values for selected testing samples at buses 10 and 14, both very likely to need load shedding during contingencies. Note that certain buses such as bus 6 have experienced noticeably higher testing error than training error, and we will address this issue through regularization in the future. Overall, the local predictions match the actual values of the AC-OLS solutions well and attain satisfactory performance. The performance of the proposed design for multiple line outage scenarios is presented in Table II and Fig. 5. As multiple line outages increase the stress on the system, the occurrence of load shedding has increased, with bus 11 experiencing load shedding in certain cases. By and large, the prediction performance remains good. However, the accuracy is reduced for certain buses compared to the single line outage results, mainly due to the vast variability of post-contingency conditions. Thus, while the simulation results have confirmed the validity of the proposed approach, we need to expand the variability of the training samples to enhance the expressiveness of the resultant NN models.

V. CONCLUSIONS AND FUTURE WORK

This paper developed a decentralized framework for performing real-time load shedding in order to prevent cascading propagation under emergency events. By solving the AC-OLS optimization problem for a multitude of contingency conditions, we put forth a learning-for-OLS framework that maps from each load center's local measurements to its own OLS decision. This scalable design of OLS decision rules enables load centers to quickly react to contingency situations without requiring supervision from the control center. Numerical results demonstrate the validity of the proposed design in terms of predicting the OLS solutions. For future work, we will extensively investigate the proposed design for different contingency conditions and types of systems, as well as improve safety using risk-aware learning approaches.
Dynamics of particle-laden turbulent Couette flow. Part 1: Turbulence modulation by inertial particles

In particle-laden turbulent flows it is established that the turbulence in the carrier fluid phase is affected by the dispersed particle phase for volume fractions above 10^{-4}, and hence reverse coupling or two-way coupling becomes relevant in that volume fraction regime. Owing to their greater inertia, larger particles tend to change either the mean flow or the intensity of the fluid phase fluctuations. In a very recent study 1, a discontinuous decrease of turbulence intensity was observed in a vertical particle-laden turbulent channel flow at a critical volume fraction of O(10^{-3}), for particles with Stokes numbers in the range 1-420 based on the fluid integral time scales. The collapse of turbulence intensity was found to be a result of a 'catastrophic reduction of the turbulent energy production rate'. Mechanistically, particle-fluid coupling in particle-laden turbulent Couette flow differs from that in a closed channel flow due to different fluid coherent structures and different particle clustering behaviours, and these differences motivate investigating whether the turbulence modulation by inertial particles in particle-laden turbulent Couette flow exhibits a continuous or a discontinuous transition. In this article, the turbulence modulation in the fluid phase by inertial particles is explored through two-way coupled DNS of a particle-laden sheared turbulent suspension, where the particle volume fraction (φ) is varied from 1.75 × 10^{-4} to 1.05 × 10^{-3} and the Reynolds number based on the half-channel width (δ) and the wall velocity (U), Re_δ, is kept at 750. The particles are heavy point particles with St ∼ 367 based on the fluid integral time scale δ/U.
A discontinuous decrease of the fluid turbulence intensity, mean square velocity and Reynolds stress is observed beyond a critical volume fraction φ_cr ∼ 7.875 × 10^{-4}. The drastic reduction of the shear production of turbulence and, in turn, the reduction of the viscous dissipation of turbulent kinetic energy are two important phenomena behind the occurrence of the discontinuous transition, similar to channel flow. A step-wise particle injection and step-wise removal study confirms that it is the presence of the particles that is chiefly behind this discontinuous transition. Additionally, the effect of inelastic collisions is explored and found to increase φ_cr slightly, although the nature of the turbulence modulation remains similar to the case of elastic collisions. The explicit role of the inter-particle collisions is studied by simulating a hypothetical case where only the inter-particle collisions are switched off. In this case, φ_cr increases more than in the inelastic case. The turbulence modulation carries the signatures of a transition from sheared turbulence to particle-driven fluid fluctuations. The increase in φ_cr for the different collisional cases is found to result from a decrease in the reverse force acting on the fluid at a fixed volume fraction below φ_cr.

I. INTRODUCTION

In particle-laden turbulent flow, the strong coupling that exists between the particles and the fluid phase has an enormous impact on the dynamics of both phases. To understand transport problems involving stresses, heat transfer and mass transfer, it is essential to investigate the dynamics of both phases. Fluctuations, along with the mean flow, play an important role in determining the transport coefficients. In the backdrop of wall-bounded turbulent flows, fluctuating kinetic energy fluxes, and hence the energy transfer, have been investigated at different wall-normal locations of the geometry in physical space 2 and in the spectral domain 3. In the two-phase interaction, besides the interaction between the mean flows, the interaction between the fluctuating velocities of the two phases introduces additional complexity. Turbulent fluid velocity and vorticity fluctuations and the inter-particle collisions determine the translational and rotational velocity fluctuations in the particle phase. When the particle inertia is very low, the particles follow the fluid streamlines and behave like passive tracers. At intermediate inertia, preferential concentration of the particles occurs in zones of low vorticity and high strain rate 4,5. Particles with high Stokes number do not tend to follow the local streamlines; rather, they move across streamlines because their inertia is greater than that of the fluid. Due to this, larger particles tend to change either the mean flow or the intensity of the fluid phase fluctuations. Turbulent Couette flow is a prototype of canonical wall-bounded flow. Mechanistically, particle-fluid coupling in particle-laden turbulent Couette flow differs from that in a closed channel flow due to different fluid coherent structures and different particle clustering behaviours. Pirozzoli, Bernardini, and Orlandi 6 performed DNS of unladen turbulent Couette flow in the high Reynolds number regime (Re_c based on the wall velocity and half-channel width: 3000-21333). Their work showed that for Couette flow, even at extreme Reynolds numbers, a log-law layer could not be found, unlike channel flow, where a log-law layer exists at infinite Reynolds number.
A unique feature also observed in Couette flow was the existence of a secondary peak of the streamwise velocity fluctuation profile in the outer layer, which corresponds to the elongated streak and roller structures with dimensions comparable to the channel geometry. Using two-way coupled DNS of turbulent Couette flow, Wang and Richter 7 investigated the effect of particle inertia on turbulence modulation and on the turbulence regeneration cycle for Re_b = 500 (Reynolds number based on half the relative velocity of the walls and the half-channel width), volume fractions (φ_v) less than 10^{-3} and Stokes numbers St_turb (based on the turnover time of the large-scale vortex) in the range 0.056-5.56. Streamwise coupling of the phases, along with the spatial distribution of the high-inertia particles, played a key role in stabilizing the turbulence. The turbulence attenuation was mainly a result of increased momentum dissipation, with the particle-phase drag disrupting the large vortical structures. Low-inertia particles were observed to trigger the laminar-turbulence transition through a strengthening of the large-scale vortical structures. In a turbulent Couette-Poiseuille flow configuration with zero skin friction, in the absence of inter-particle collisions and two-way coupling, inertial particles were found to cluster differently near the two walls as a result of a vorticity-strain rate selection mechanism 8. Randomly oriented particle clusters were observed near the moving wall, similar to isotropic turbulence. On the other hand, streaky particle structures, a signature of wall turbulence, were observed near the stationary wall. The anisotropy in the fluctuations was due to the different coherent flow structures observed near the different walls. In a very recent study of particle-laden turbulent channel flow, a discontinuous decrease of turbulence intensity was observed at a critical volume fraction of O(10^{-3}) 1. The phenomenon was observed for particles with Stokes numbers in the range 1-420 based on the fluid integral time scales. The collapse of turbulence intensity was found to be a result of a 'catastrophic reduction of the turbulent energy production rate'. In a vertical channel flow at higher volume loading, the drag force exerted by the particles on the fluid alters the fluid pressure field and hence the net driving force. In the horizontal turbulent shear flow considered here, unlike in vertical channel flow, the pressure drop does not act as a driving force. Along with these differences, the mechanistic difference in the vortical coherent structures, coupled with the clustering behaviour of the inertial particles, motivates investigating the existence of a continuous or discontinuous transition in the turbulence modulation by inertial particles in particle-laden turbulent Couette flow. In this article we investigate the effect of the particle feedback force on the fluid phase turbulence through two-way coupled DNS of particle-laden turbulent shear flow. The nature of the modulation is explored by analyzing the fluid phase velocity statistics. An attempt to understand the underlying mechanism of the modulation is made through a detailed analysis of the various terms of the mean momentum, mean kinetic energy and turbulent kinetic energy budget equations, along with an analysis of the streamwise velocity and vorticity fluctuation fields. Additionally, the effect of inter-particle collisions on the turbulence modulation is studied.
In this regard, the ideal elastically colliding particle-laden turbulent suspension is compared with a suspension in which the inter-particle and wall-particle collisions are slightly inelastic. The explicit role of the inter-particle collisions is studied by simulating a hypothetical case where only the inter-particle collisions are switched off.

II. SIMULATION METHODOLOGY

The flow system is simulated in the Euler-Lagrange framework by two-way coupled Direct Numerical Simulation. The particles are considered to be spherical, sub-Kolmogorov sized, heavy inertial particles with a diameter of 39 µm, a density of 4000 kg/m³ and a Stokes number St of about 367 based on the fluid integral time scale δ/U.

FIG. 1: Schematic of the system.

The dimensions of the simulation box are 10πδ × 2δ × 4πδ, where δ is the half-channel width. The upper and lower walls of the Couette flow move with positive and negative velocities in the x-direction, respectively. The y-axis is along the cross-stream (wall-normal) direction, and the z-axis indicates the spanwise or vorticity direction (Fig. 1). The origin is placed at the bottom wall such that the upper wall remains at y = 2δ. The numbers of grid points used in the x, y, and z directions are 120, 55, and 90, respectively, with a grid resolution of 13.8 × 1.92 × 7.36 in viscous units. Here, the fluid phase is considered to be air at ambient conditions. The Reynolds number based on the half-channel width and the wall velocity is kept at 750. The simulations are performed in the moderately dense limit by varying the volume fraction from 1.75 × 10^{-4} to 1.05 × 10^{-3}, changing the number of particles from 2000 to 12000 in the system. The assumption of binary inter-particle collisions remains valid in this volume fraction regime. The fluid phase is described by the continuity and Navier-Stokes equations,

∇ · u = 0,   (1)

∂u/∂t + (u · ∇)u = −(1/ρ_f) ∇p + ν ∇²u + f(x, t)/ρ_f.   (2)

Both equations are solved on Eulerian grids. Here, u(x, t) represents the three-dimensional instantaneous fluid velocity field as a function of position x, p(x, t) denotes the instantaneous pressure field, and ν and ρ_f are the kinematic viscosity and the density of the fluid, respectively. In our isothermal simulations the fluid is considered to be incompressible. f(x, t) represents the reverse feedback force on the fluid due to the presence of the particles. The flow field is considered to be periodic along the x and z directions, whereas the y-direction is bounded by the walls, where no-slip and no-penetration boundary conditions are applied (Fig. 1). The fluid flow field is solved using Direct Numerical Simulation (DNS) in a pseudo-spectral framework. The detailed numerical scheme, the interpolation method for the fluid velocity at the particle location, and the correction of the velocity field for the calculation of the drag on the particle have been described in Goswami and Kumaran 9, Muramulla et al. 1 and Muramulla 10. The reverse feedback force by the particles on the fluid is computed using the Projection of Nearest Neighbour (PNN) method, as represented in figure 2. Point-particle forcing is used to represent the feedback force.

FIG. 2: The Projection of Nearest Neighbour (PNN) method for projecting the feedback force of a particle located within the volume ∆V onto the vertices of the grid cell.

The feedback due to a single particle is distributed only to the eight nearest-neighbour points of the fluid grid occupying the vertices of a parallelepiped, in the volume-weighted method discussed in Muramulla et al. 1.
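As an illustration of the deposition step just described, the following sketch distributes each particle's feedback force to the eight surrounding grid vertices with trilinear (volume) weights. It is a minimal sketch, not the authors' code: a uniform grid is assumed, periodic wrapping in x and z is omitted, and all names are hypothetical.

```python
# Hypothetical sketch of PNN force deposition on a uniform grid: each
# particle's reverse force is shared among the 8 surrounding vertices
# with trilinear (volume) weights.
import numpy as np

def deposit_pnn(f_grid, x_p, F_p, dx):
    """f_grid: (nx, ny, nz, 3) force field; x_p: (np, 3) positions;
    F_p: (np, 3) reverse forces; dx: (3,) grid spacings."""
    for x, F in zip(x_p, F_p):
        i = np.floor(x / dx).astype(int)   # lower-corner cell index
        w = x / dx - i                     # fractional offsets in [0, 1)
        for a in (0, 1):
            for b in (0, 1):
                for c in (0, 1):
                    wt = ((a * w[0] + (1 - a) * (1 - w[0]))
                          * (b * w[1] + (1 - b) * (1 - w[1]))
                          * (c * w[2] + (1 - c) * (1 - w[2])))
                    f_grid[i[0] + a, i[1] + b, i[2] + c] += wt * F
    return f_grid
```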
For a steady flow, the reverse force can be mathematically represented as

f(x, t) = −Σ_i (F_D^i + F_L^i) δ(x − x_i),   (3)

projected onto the neighbouring grid points by the PNN method, where F_D^i and F_L^i represent the drag and lift forces acting on the i-th particle, respectively. In this work, the effect of the lift force is neglected, since it makes very little contribution to changing the second moments of the velocity fluctuations (Muramulla et al. 1). The drag force acting on the i-th particle is represented by the Stokes drag law,

F_D^i = 3π η_f d (u(x_i, t) − v_i),   (4)

where η_f is the viscosity of air at ambient conditions, d is the particle diameter, u(x_i, t) is the interpolated fluid velocity at the particle location and v_i is the particle velocity. A detailed account of the collision rule used here can be found in Ghosh and Goswami 11. In the two-way coupled DNS, the particle loading brings about changes in the fluid flow characteristics and alters the friction velocity as well.

III. FLUID PHASE STATISTICS AND TURBULENCE MODULATION

The fluid phase mean velocity and mean-square velocity statistics are presented in figures 3 to 5. The mean velocity profiles U_fluid in figure 3(a) show two distinct regimes.

FIG. 3: Effect of particle volume fraction on (a) the mean fluid velocity U_fluid and (b) the mean fluid velocity gradient dU_fluid/dy.

For the unladen fluid and for fluid with particle volume fraction (φ) up to φ = 7.875 × 10^{-4}, the mean velocity profile remains unchanged. For φ = 8.3125 × 10^{-4} and higher, the mean velocity follows a different profile, which remains almost unchanged up to φ = 1.05 × 10^{-3}, the maximum volume fraction reported here. A similar trend is observed for the mean fluid velocity gradient: beyond the particle volume fraction φ = 7.875 × 10^{-4}, the mean fluid velocity gradient becomes slightly flatter, with a distinct decrease in the velocity gradient at the wall, as shown in figure 3(b). Figure 4 shows the effect of the particle volume fraction on the fluid mean square velocity. It is observed that with an increase in the volume loading the mean-square velocities in all directions decrease. This decrease is continuous from φ = 1.75 × 10^{-4} to φ = 7.875 × 10^{-4} in all directions. However, beyond φ = 7.875 × 10^{-4} the fluctuations decrease drastically in all three directions. Figure 4(ii) shows that this sudden decrease in the mean-square velocity of the fluid is about two orders of magnitude. It is to be mentioned that a slight increase in u'^2_x is observed from the unladen fluid to the particle-laden fluid at low volume fraction. To understand the abrupt change in the fluid phase dynamics, we perform a detailed analysis of the momentum and kinetic energy budgets, which are reported in the following sections.

IV. MOMENTUM AND KINETIC ENERGY BUDGET OF THE FLUID PHASE

The budget equations for the momentum and kinetic energy of the fluid phase are quite important for understanding the turbulence modulation.

A. Fluid phase mean momentum budget equation

The mean momentum budget of the fluid phase along the x direction (equation 5) consists of three terms:

• 1: Momentum transfer due to viscous stresses
• 2: Momentum transfer due to the Reynolds stress
• 3: Momentum transfer due to the particle feedback force

The effect of the particle volume fraction on the fluid phase mean momentum budget is shown in figure 6. It is observed that the momentum transfer due to the Reynolds stress and the transfer of momentum due to viscous forces mainly dominate at lower volume fractions and act in opposite directions. It is also observed from figures 6(a) to (d) that the momentum transfer due to the Reynolds stress gradually decreases with the increase in φ, whereas the momentum transfer due to the particle reverse drag increases. At the higher volume fractions φ = 7 × 10^{-4} and 7.875 × 10^{-4}, terms (2) and (3) of equation 5 together balance term (1).
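For reference, a plausible form of the streamwise mean momentum balance (equation 5), consistent with the three terms as labelled above and written for a statistically stationary flow that is homogeneous in x and z, is the following sketch; the paper's exact equation is not reproduced in this extraction, so the notation is an assumption.

```latex
% Plausible streamwise mean momentum balance (terms 1-3 as labelled above):
0 \;=\;
\underbrace{\rho_f\,\nu\,\frac{d^{2}\bar{u}_x}{dy^{2}}}_{(1)\ \text{viscous}}
\;-\;
\underbrace{\rho_f\,\frac{d}{dy}\,\overline{u'_x u'_y}}_{(2)\ \text{Reynolds stress}}
\;+\;
\underbrace{\bar{f}_x}_{(3)\ \text{particle feedback}} .
```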
It is observed that the peak values of term (3) occur nearer to the wall due to the very high slip velocity between the particle and fluid phases. A different behaviour is observed for volume fractions greater than φ = 7.875 × 10^{-4}. In this regime (Fig. 6(e) and (f)) the momentum transfer due to the Reynolds stress decreases drastically, and the momentum transfer due to the particle reverse drag balances the momentum transfer due to viscous stresses. Above the volume fraction φ = 7.875 × 10^{-4}, the drastic decrease of the divergence of the Reynolds stress may lead to a decrease in turbulence production, which can be expressed through the energy budget. The budget study of the mean fluid kinetic energy and the fluid turbulent kinetic energy is discussed below.

B. Fluid phase mean kinetic energy budget equation

The terms in the mean fluid K.E. budget equation are as follows:

• 1: Transport of mean K.E. due to fluid viscous stress
• 2: Transport of mean K.E. due to Reynolds stress
• 3: Viscous dissipation of mean K.E. due to mean shear
• 4: Decrease in mean K.E. associated with the shear production of turbulence
• 5: Dissipation of mean K.E. due to mean particle drag

The effect of the particle volume fraction on the fluid phase mean kinetic energy budget is shown in figure 7, along with the absolute value of each of the spatially averaged terms in figure 9. It is observed that the terms dominating the mean fluid K.E. budget change from the near-wall region to the central region of the Couette flow. In the near-wall region, for all four volume fractions, the transport of mean K.E. due to the fluid viscous stress arising from the mean velocity gradient (term 1) acts as the source of mean K.E., whereas the viscous dissipation (term 3) remains the leading sink term. The transport of mean K.E. due to the Reynolds stress (term 2) acts as an important sink in the near-wall region at lower volume fractions (insets of fig. 7(a) and (b)) and decreases with the increase in volume fraction. Conversely, the loss of mean fluid K.E. due to the particle reverse drag (term 5) increases with increasing volume fraction and acts as a significant sink of the mean K.E. in the near-wall region, as shown in the insets of figure 7(c) and (d). However, in the central region of the Couette flow, the decrease of mean K.E. used for the shear production of turbulence (term 4) becomes the dominant sink term, whereas the transport of mean K.E. due to the Reynolds stress (term 2) acts as the leading source term of the mean K.E. The magnitudes of the dominant source and sink terms of the mean K.E. decrease by one order in this region, and these values decrease with increasing volume fraction. Figure 7(e) and (f) show that above the volume fraction φ = 7.875 × 10^{-4}, the same terms remain the dominant source and sink terms, be it in the near-wall region or in the central region of the Couette flow, although a decrease of two orders of magnitude is observed in the central region. The transport of mean K.E. due to the fluid viscous stresses generated by the mean fluid velocity gradient (term 1) acts as the principal source of mean K.E. across the cross-stream direction. The primary sink terms are observed to be the viscous dissipation term (term 3) and the decrease of mean K.E. due to the particle reverse drag (term 5). The loss of mean K.E.
due to the shear production of turbulence (term 4) and the transport of mean K.E. due to the fluid Reynolds stress (term 2) become negligible in this regime of volume loading. The variation of the different components of the mean K.E. budget along the cross-stream distance in wall units is presented for the lower φ values. Figure 8 shows the behaviour of the mean K.E. terms of the fluid as a function of y⁺ at φ = 7 × 10^{-4}. The transport of mean K.E. due to the viscous stress arising from the mean fluid velocity gradient acts as the main source in the viscous sublayer and drastically decreases in magnitude in the buffer layer. The viscous dissipation of mean K.E., the dominant dissipation term, shows a similar behaviour: highest near the wall and drastically decreasing in the buffer layer. The dissipation term due to the particle reverse drag acts as a sink in the viscous sublayer and reduces to a very small magnitude in the buffer layer. The peak of the production occurs along with the change of the transport of mean K.E. due to the Reynolds stress from a sink term to a source term. In order to obtain an average measure of all the terms across the different cross-stream positions of the Couette flow, a spatially averaged term [·]_y, derived by averaging the term [·] over the half-width δ of the Couette flow, is defined as

[·]_y = (1/δ) ∫₀^δ [·] dy.   (7)

Figure 9 shows the variation of the absolute values of the spatially averaged terms of the mean K.E. equation with volume fraction. A sharp change in the magnitudes is observed between φ = 7.875 × 10^{-4} and φ = 8.3125 × 10^{-4}. In this region, the magnitudes of the spatially averaged values of the dominant terms, e.g. the transport of mean K.E. due to the fluid viscous stress generated by the mean velocity gradient, the viscous dissipation of mean K.E. and the loss of mean K.E. for turbulence production, show a sharp decrease, while the loss of mean K.E. due to the particle reverse drag shows an abrupt, although less sharp, increase. The extent of the decrease is maximum for the shear production of turbulence. This motivates us to look into the various terms of the fluctuating K.E. budget of the fluid phase in order to understand the sharp attenuation occurring above φ = 7.875 × 10^{-4}.

C. Fluid phase fluctuating kinetic energy budget equation

In the backdrop of wall-bounded turbulent flows, the fluctuating kinetic energy fluxes, and hence the energy transfer, are studied at different wall-normal locations. A detailed analysis of the fluctuating K.E. is required in the context of the sharp modulation of the turbulence fluctuations. The terms of the fluctuating turbulent K.E. budget equation can be given as:

• 1: Transport of T.K.E. by the fluctuating velocity, with energy flux q_y = (1/2) ρ_f ⟨u'_y u'^2⟩
• 2: Transport of T.K.E. due to pressure fluctuations
• 3: Transport of T.K.E. due to fluid viscous stresses, involving the fluctuating strain-rate tensor
• 4: Shear production of turbulence
• 5: Viscous dissipation of the fluctuating energy
• 6: Dissipation of fluctuating energy due to particle drag, the difference between the total energy dissipation rate and the mean energy dissipation rate due to the presence of the particles

The effect of the particle volume fraction on the fluid phase fluctuating kinetic energy budget before and after the attenuation is given in figure 10. It is evident from figures 10(a)-(d) that the shear production of turbulence (term 4) acts as the primary source term of the fluctuating kinetic energy across most of the cross-stream positions, except at the walls. This term is generated at the expense of the mean kinetic energy of the fluid, as evident from the discussion in subsection IV B.
The viscous dissipation term (term 5) is the dominant sink term across all y/δ positions. However, the magnitudes of the shear production of turbulence and of the viscous dissipation are observed to decrease with increasing volume fraction. Additionally, the transport of fluctuating K.E. due to the fluid viscous stresses (term 3) and the transport of fluctuating K.E. due to the fluid fluctuating velocity (term 1) are observed to decrease along with the dominant source and sink terms as the volume loading increases. Above the transition, figure 10 shows a drastic decrease in the shear production of turbulence, along with the viscous dissipation term, across all cross-stream positions of the Couette flow. At the central position these terms remain the dominant source and sink terms, respectively. Similarly, in the near-wall region the principal source and sink terms remain unchanged with respect to the lower volume fractions. The behaviour of these terms across the channel width is shown in the representative figure 11. The transport of fluctuating K.E. due to the fluid viscous stresses (term 3) acts as the main source of fluctuating K.E. at the walls due to the higher strain rate in the near-wall region. The qualitative behaviour of the terms in the fluctuating K.E. budget equation as a function of the cross-stream position scaled in wall units, y⁺, is shown in figure 11 for a volume fraction of 7 × 10^{-4}, before the attenuation. As is well accepted, the shear production of turbulence takes place in the buffer layer 3. In the viscous sublayer, due to the higher strain rate, the fluctuating K.E. is imparted to the fluid through the fluid viscous stresses. The variation of the spatially averaged terms of the fluctuating K.E. budget with volume fraction is shown in figure 12. From the figure it is evident that a catastrophic decrease in the shear production of turbulence takes place at a volume fraction of φ = 7.875 × 10^{-4}. This results in a similar decrease of the viscous dissipation as well. The decrease is observed to be linear for these two dominant terms at the lower volume fractions. The decrease in the fluid fluctuating K.E. due to particle drag is almost linear with particle loading up to a critical volume fraction, which is followed by a sharp decrease. The figure also depicts that the dissipation due to the particles at the fluctuating scale is more than one order of magnitude smaller than the dissipation due to the particle drag at the mean scale.

V. STEP-WISE PARTICLE INJECTION STUDY

In order to confirm that it is only the presence of the particles at a particular volume fraction that drives the drastic discontinuous collapse of the fluid phase turbulence, and that the phenomenon is not initial-condition dependent, a step-wise particle injection study is carried out. This study is initiated from a statistically steady state previously achieved for a turbulent suspension with particle volume fraction φ = 7 × 10^{-4}. In the first step, a step injection of 500 particles, with random initial positions and velocities, is performed such that the volume fraction of the suspension increases to φ = 7.4375 × 10^{-4}, and the system is then allowed to reach a statistical steady state. Except for the volume fraction, all the other parameters in this study are kept unchanged, as discussed in section II. Figure 13 shows the temporal evolution of L2norm2(u), which is defined as

L2norm2(u) = ∫₀^{L_x} ∫₀^{2δ} ∫₀^{L_z} (u · u) dx dy dz,

where u represents the fluid velocity disturbance field evolved from the initial numerical perturbation (the standard method of DNS).
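A direct discrete evaluation of this diagnostic on the DNS grid might look like the following sketch; uniform spacings in x and z and prescribed wall-normal quadrature weights are assumed, and the function name is hypothetical.

```python
# Hypothetical sketch: discrete L2norm2(u) of the disturbance field,
# approximating the volume integral of u . u by a cell-volume sum.
import numpy as np

def l2norm2(u, dx, dy, dz):
    """u: (nx, ny, nz, 3) velocity disturbance field; dx, dz: uniform
    spacings; dy: (ny,) wall-normal quadrature weights."""
    e = np.sum(u * u, axis=-1)                    # pointwise u . u
    return float(np.sum(e * dy[None, :, None]) * dx * dz)
```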
It is to be mentioned that the 'zero time' is counted from the start of the simulation of the unladen fluid subjected to the initial numerical perturbation. After the statistically steady value of L2norm2(u) is achieved, 8000 particles are injected to initiate the simulation of the two-way coupled particle-laden sheared turbulent flow. From the figure it is evident that the addition of particles beyond φ_cr = 7.875 × 10^{-4} reduces the fluid disturbance field. Figure 14 shows that there is a drastic decrease of the non-dimensionalized shear production of turbulence due to the further injection of particles beyond φ_cr = 7.875 × 10^{-4}. This decrease is of almost two orders of magnitude. Hence it is quite evident that in the step-wise injection study the temporal decay of the velocity fluctuations is qualitatively similar to that of the decrease in L2norm2(u) and the shear production of turbulence. The cross-stream velocity fluctuation is observed to decrease before the streamwise velocity fluctuation. This carries the signature that a decrease in u'^2_y brings about a drastic decrease in the Reynolds stress, which in turn affects the turbulence production term.

VI. STEP-WISE PARTICLE REMOVAL STUDY

After the third step of particle injection, the discontinuous decrease in the turbulence is observed at φ = 8.3125 × 10^{-4}. Consequently, it becomes relevant to check the effect of step-wise particle removal from the system on the temporal evolution of a few crucial turbulence statistics. Such a strategy helps us identify whether there exists any hysteresis in the phenomenon of turbulence collapse. A step removal of 500 particles is carried out, which lowers the volume fraction from φ = 8.3125 × 10^{-4} to φ = 7.875 × 10^{-4}. Another step-removal run is carried out along with the injection of a numerical disturbance field of the kind traditionally used in DNS to transform a laminar base state into a turbulent profile. Figures 16 and 17 reveal the difference between the two runs. It is observed that the removal of the particles alone is not sufficient to bring the magnitude of L2norm2(u) or the shear production of turbulence back to their values in the pre-attenuation phase. A numerical disturbance is required to restore the turbulence disrupted at φ_cr. This observation also indicates that when the particle volume fraction is greater than φ_cr, the fluid flow becomes laminarized. The step-wise particle injection and removal studies together show that it is the presence of the particles that is exclusively behind the discontinuous transition, and no hysteresis is observed. The modulation of the turbulence leads to a laminar state, and the turbulent state cannot be restored by step removal of particles alone without the application of a disturbance to the flow field.

VII. KEY OBSERVATIONS ON MODIFICATIONS IN FLUID PHASE DYNAMICS

In summary, a discontinuous decrease of the fluid turbulence intensity, mean square velocity and Reynolds stress was observed beyond a volume fraction of φ ∼ 7.875 × 10^{-4} for St ∼ 367 and Re = 750. The discontinuous decrease happens along with a discontinuous modification of the mean fluid velocity profile and the mean fluid velocity gradient. In the mean momentum budget, the momentum transport due to the Reynolds stress drastically reduces above φ_cr. The transport of mean K.E. due to the fluid viscous stress and the viscous dissipation of mean K.E. show a sharp drop due to the transition. The drastic reduction of the shear production of turbulence along with the viscous dissipation of turbulent K.E.
are two important phenomena occurring during the discontinuous transition. The step-wise particle injection and removal studies revealed that it is the presence of the particles that is chiefly behind this discontinuous transition.

VIII. TURBULENCE MODULATION AND PATTERNS IN STREAMWISE VELOCITY AND VORTICITY FLUCTUATIONS

In the previous sections, the discontinuous transition of turbulence was mainly studied through statistics and energy budgets. It is also relevant to study the streamwise velocity and vorticity contours, namely the primary characteristics of the large-scale motions 20-23. In this section, the effect of increasing particle volume loading on the contours of the streamwise velocity fluctuations and vorticity is shown and compared with the unladen fluid. All the contour plots are shown at a single instant of time after the statistically stationary state is achieved. It is worth mentioning that no significant difference is observed even if we compare two such instantaneous frames captured with a time difference of 10 non-dimensional units. For the particle-laden cases, the positions of the particles are denoted by black dots. Figure 18 captures the streamwise velocity fluctuations of the unladen fluid projected onto three different 2-D planes. The contours in the x-z plane, captured at the buffer layer, show the characteristic low-speed velocity streaks spanning the entire simulation box. In the projection onto the y-z plane, captured at the middle of the spanwise direction, the low-speed contours are seen to occur preferentially near the wall. It is argued that the low-speed velocity zones exist in between the two counter-rotating tails, present near the wall, of the hairpin vortex 14,24,25. The ejection of low-speed streaks away from the wall and the sweeps of high-velocity streaks towards the wall in the buffer region create the flow patterns away from the walls 12,24,25. Hence the streamwise velocity contours provide a suggestive (qualitative) rather than definitive picture of the large-scale (coherent) structures. The contour in the x-y plane shows a low-speed region in a plane oblique to the mean-flow direction. This may bear the signature of large hairpin vortices, which typically orient themselves at certain angles to the mean flow 14,24,25; however, this is not entirely clear to us at this point. Figures 20 and 21 show the streamwise velocity contours for the turbulent fluid laden with particles before and after the turbulence collapse. The concentration field of the heavy inertial particles, as expected, does not show any correlation with the streamwise velocity fluctuation field. Figures 18 and 20 show that the contours of the streamwise fluctuations in wall-parallel planes are very similar, except that for the particle-laden flow above the transition (φ > φ_cr, Fig. 21) a few of the smaller scales have been damped. A similar observation has also been reported by Zhao, Andersson, and Gillissen 26. In the streamwise vorticity contours, high-vorticity zones can roughly be seen to occur with opposite signs (blue patches and red/yellow patches) arranged near each other. In the context of trains of hairpin vortices, the streamwise counter-rotating high-magnitude fluid zones occur at the tail region of the larger stable structures. This is more prominent in the x-z plane, where high-magnitude vorticity zones are observed to exist in patches. The low-magnitude rotating zones (green and sky-blue) are observed to be of larger size in all three planes of projection.
The vorticity contours in the x-y plane (Figs. 19, 22 and 23) arrange themselves in a plane obliquely inclined to the mean-flow direction, qualitatively very similar to what has been observed for the streamwise velocity fluctuation contours. The concentration field of the heavy inertial particles shows no correlation with the streamwise vorticity field, similar to the behaviour observed for the streamwise velocity field. The discontinuous modulation of turbulence lowers the magnitude of the streamwise vorticity by about one order, as observed in figure 23, and hence the larger slowly rotating vortical contours are not observed. Thus, the qualitative study of the contours of streamwise velocity and vorticity shows some elementary signatures of the breakdown of coherent structures due to the discontinuous turbulence disruption caused by the high-inertia particles. The possibility that the fluctuations in the streamwise velocity and vorticity are generated by particle phase fluctuations is discussed later.

IX. ROLE OF INTER-PARTICLE COLLISIONS ON TURBULENCE MODULATION

The discontinuous disruption of turbulence observed in the particle-laden turbulent shear flow is found to be a result of the drastic reduction of the shear production of turbulence with increasing particle number density in the system. The contour plots reveal the breakdown of the streamwise velocity streaks after the complete collapse of the turbulence. All of these observations reveal that the interaction between the particle and fluid phases through the reverse feedback force is the crucial factor for the turbulence attenuation. However, it is not intuitive whether the presence of inter-particle collisions, or their nature, would bring about any change in the character of the turbulence modification. In this section two different cases are studied: the turbulence modulation when the inter-particle and wall-particle collisions are inelastic, and the case where the inter-particle collisions are switched off but the particle-wall collisions remain active.

A. Effect of inelastic collisions on turbulence modulation

The effect of inelastic collisions on the turbulence is studied by setting the coefficient of restitution (e) to 0.9 for the inter-particle and wall-particle collisions. The fluid phase velocity statistics, namely the mean velocity and the second moments of the fluctuating velocity, are studied along with the contours of the streamwise velocity and vorticity fluctuations of the fluid phase.

Fluid velocity statistics

The variation of the mean fluid velocity and the fluid turbulent K.E. with volume fraction is shown in figure 24. The mean fluid velocity follows two different mean velocity profiles. The mean velocity profiles of the fluid phase for suspensions with volume fractions greater than 8.3125 × 10^{-4} are comparatively flatter, as shown in figure 24(a). In this regime the turbulent kinetic energy is decreased by two orders of magnitude, figure 24(b). The second moments of the fluid velocity are shown in figure 25. It is evident that for volume fractions greater than 8.3125 × 10^{-4}, the magnitudes of all the second moments decrease drastically (by about two orders of magnitude). The magnitudes of the cross-stream and spanwise mean square velocities and the fluid Reynolds stress decrease monotonically with increasing volume loading up to a critical volume fraction φ_cr = 8.3125 × 10^{-4}. The drastic decrease in the fluid velocity fluctuations, and hence the turbulent kinetic energy, above the critical volume fraction is also observed in the presence of elastic inter-particle collisions, as discussed in section III.
Between the elastic and slightly inelastic collisional cases, the differences are observed in the decreased magnitudes of the fluid velocity fluctuations and in a marginally increased critical volume fraction due to the inelastic collisions.

Fluid-phase streamwise velocity and vorticity contours and modulation in turbulence

The analysis of the contours of streamwise velocity and vorticity follows section VIII. Figures 26 and 27 show the difference in the fluid streamwise velocity fluctuation fields before and after the discontinuous transition. It is evident that the discontinuous transition reduces the strength of the velocity fluctuation field by roughly one order of magnitude and breaks down the elongated streamwise velocity streaks, qualitatively similar to what has been observed for perfectly elastic collisions. The fluid streamwise vorticity fields before and after the discontinuous transition are captured in figures 28 and 29. Similar to the ideal collisional case, the streamwise vorticity field strength is decreased by one order of magnitude, along with a shortening of the slowly rotating vorticity zones. From the above discussion it is evident that the nature of the turbulence modulation occurring due to the presence of inertial particles undergoing inelastic collisions with each other and with the walls is drastic and discontinuous, very similar to the case of elastically colliding particles. However, inelastic collisions are found to marginally increase the critical volume loading. The reason for this is explored in section IX C.

B. Turbulence modulation in the absence of inter-particle collisions

In the previous subsection, the effect of slight inelasticity in the inter-particle collisions was found to increase the critical volume fraction for the discontinuous turbulence disruption. The effect of inter-particle collisions can best be studied if the collisional effect is isolated from the particle-fluid interaction. In this case the particles are allowed to move freely through each other without any mechanical contact, but undergo elastic collisional rebounds from the walls. Similar to the previous cases, the fluid phase velocity statistics, namely the mean velocity and the second moments of the velocity fluctuations, are studied along with the contours of the streamwise velocity and vorticity fluctuations of the fluid phase.

Fluid velocity statistics

Figure 30 shows the variation of the mean fluid velocity and the fluid turbulent K.E. with volume fraction. The mean fluid velocity follows two different mean velocity profiles. The fluid phase for suspensions with volume fractions greater than 8.75 × 10^{-4} follows a nearly linear mean velocity profile, similar to the laminar Couette flow profile, as shown in figure 30(a). Unlike the other cases, the fluctuating kinetic energy does not show a trend of drastic decrease; rather, its qualitative behaviour changes beyond φ = 8.75 × 10^{-4} (Fig. 30(b)). This is aligned with the change in trend observed for the streamwise velocity second moment in figure 31(a). It is evident from figures 31(b) and (c) that for volume fractions greater than 8.75 × 10^{-4}, the magnitudes of the cross-stream and spanwise second moments decrease drastically (by about two orders of magnitude). The magnitudes of the cross-stream and spanwise mean square velocities and the fluid Reynolds stress decrease monotonically with increasing volume loading up to the critical volume fraction φ_cr = 8.75 × 10^{-4}. The role of the absence of inter-particle collisions is most prominent in the case of the streamwise mean-square fluid velocity.
For suspensions with volume fractions higher than φ_cr, u'^2_x is not observed to decrease drastically. Rather, the profile changes its shape. For the unladen fluid, the profile of u'^2_x shows peaks in the buffer layer. However, in the case of particle-laden flows in the absence of inter-particle collisions, the u'^2_x profile shows its maximum value at the centre and monotonically decreases to zero at the walls. The particle phase statistics reported in appendix XI A, figure 41(a), show that, after the collapse of the fluid turbulence, the qualitative behaviours of the streamwise velocity fluctuations of the particle phase and the fluid phase follow a similar trend, with the magnitude of the particle phase being higher. Moreover, in the presence of inelastic (e = 0.9) inter-particle collisions, the initial temporal decay of the Lagrangian velocity correlation reveals a very interesting fact. Due to the higher inertia of the particles, the particle velocity autocorrelation function R_xx is expected to decay slowly with respect to that of the fluid phase, as is observed in figure 32 at the critical volume fraction. However, after the transition, the decay of R_xx for the two phases is found to be similar, which signifies that the turbulence in the fluid phase is induced by the particle phase. Therefore, the present correlation technique may help to make a distinction between sheared turbulence and particle-driven fluid fluctuations.

Fluid-phase streamwise velocity and vorticity contours and modulation in turbulence

The contours of the fluid phase streamwise velocity fluctuations are shown in figures 33 and 34. The streamwise low-speed velocity streaks observed in figure 33 before the transition are qualitatively very similar in nature to the velocity field observed in the presence of inter-particle collisions (Figs. 20 and 26). The transition in turbulence adds some interesting features, not seen in the presence of inter-particle collisions, to the streamwise velocity fluctuation field. The magnitude of the streamwise velocity fluctuations is not seen to decrease, unlike in the previous cases. Most importantly, the velocity field gets arranged in layers of different magnitudes (Fig. 34). In the x-z plane, the contours are found to be arranged parallel to each other, spanning the entire box length. These layers, as seen in the y-z plane, do not span the entire height; rather, they are arranged in core-shell like structures, with the highest fluctuations occurring in the central part of the channel. The near-wall low-intensity fluctuations are seen in the x-z and y-z planes as well. Hence, in the absence of inter-particle collisions, the transition in turbulence brings about a sharp change in the qualitative behaviour of the streamwise velocity fluctuations but not in their magnitude. Additionally, it can be inferred that inter-particle collisions play a very important role in breaking up the long streaky structures of the streamwise velocity fluctuation field. The fluid streamwise vorticity fields before and after the discontinuous transition are captured in figures 35 and 36. The streamwise vorticity field strength is decreased by one order of magnitude, along with a shortening of the slowly rotating vorticity zones. These traits are qualitatively similar to the observations of the streamwise vorticity field in the presence of inter-particle collisions as well. The particle concentration field and the streamwise vorticity field are not found to be correlated, although the particles prefer to concentrate near the wall, where the streamwise velocity fluctuation is very low.
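As a concrete note on the diagnostic invoked above, the Lagrangian autocorrelation R_xx can be estimated from stored velocity-fluctuation histories with a sketch of the following form; averaging over particles and time origins is assumed, and all names are illustrative rather than the authors' implementation.

```python
# Hypothetical sketch: streamwise Lagrangian autocorrelation R_xx(tau)
# from particle (or fluid-tracer) velocity-fluctuation histories.
import numpy as np

def r_xx(vx, max_lag):
    """vx: (n_particles, n_times) streamwise velocity fluctuations."""
    var = np.mean(vx * vx)
    return np.array([np.mean(vx[:, :vx.shape[1] - k] * vx[:, k:]) / var
                     for k in range(max_lag)])
```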
Overall, it is observed that the absence of inter-particle collisions increases the critical volume fraction for the transition in turbulence.

C. Role of inter-particle collisions on the critical volume fraction

The effect of the inter-particle collisions on two important terms is shown in Fig. 37: (a) n_p u_x(u_x − v_x), which is proportional to the mean K.E. dissipation due to the particle reverse drag, and (b) the shear production of turbulence. We have considered a volume fraction which is less than φ_cr for all of the cases. Figure 37(a) reveals that the mean K.E. dissipation due to the particle reverse drag varies marginally with the nature of the collisions. This dissipation is lowest in the absence of inter-particle collisions and highest for perfectly elastic collisions. The discontinuous decay of turbulence, discussed previously in this article, happens as a result of a simultaneous catastrophic reduction of the shear production of turbulence. Figure 37(b) shows that the shear production term is highest in the absence of inter-particle collisions and lowest in the presence of perfectly elastic collisions. The marginal increase in the turbulence production term, along with the marginal decrease in the dissipation of mean K.E. due to the particle reverse drag, correlates well with the delay of the drastic collapse of turbulence, i.e., with the increase in the critical volume fraction. This explains why the critical volume fraction is highest for the system without inter-particle collisions and lowest for the system with ideal elastic inter-particle collisions. Richter and Sullivan 21 presented a very elegant relation between the particle phase stress τ_particle and the feedback force F. Their formulation was based on the derivation of Ref. 27, where the dispersed phase was modeled using a continuum approximation. In the momentum balance equation for the dispersed phase under the continuum approximation (eq. 9), c represents the local instantaneous particle mass, v_i represents the particle phase velocity and F_i denotes the reverse force per unit volume. Performing the Reynolds averaging procedure on eq. 9 and taking the streamwise component yields eq. 10, where v_x and v_y refer to the particle phase velocity fluctuations. Eq. 10 can be written in concentration-weighted average form (eq. 11), which upon integration along the wall-normal direction yields eq. 12, where ⟨·⟩_c denotes the concentration-weighted average and τ_particle is the particle phase stress. Figure 38 represents the effect of the nature of the collisions on the variation of the spatially averaged particle stress ⟨c v_x v_y⟩_y as a function of the volume fraction φ. Equation 12 shows that a higher particle stress is due to a higher value of the reverse force integrated over the half-width δ. Hence it is observed from the figure that the magnitude of the particle stress, and thus the value of ∫₀^δ −F_x(y) dy, is lowest when the inter-particle collisions are switched off. This causes an increase of φ_cr and a delay of the turbulence transition in the absence of collisions. The φ_cr values for the cases are shown by the vertical lines: the dashed line represents φ_cr in the absence of inter-particle collisions, the thick line represents the case of inelastic collisions and the dashed-dotted line represents the critical volume fraction under perfectly elastic collisions. This figure also reveals that above φ_cr the spatially averaged particle stress decreases rapidly. On the contrary, the presence of inter-particle collisions maintains the increasing trend of the ⟨c v_x v_y⟩_y term with φ even after the turbulence transition.

FIG. 38: Effect of the nature of the collisions on the spatially averaged particle stress at various volume fractions.
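For reference, a plausible reduced form of equations (9)-(12), written for a statistically stationary, fully developed flow with streamwise homogeneity and gravity neglected, is sketched below; the precise form and sign conventions in Refs. 21 and 27 may differ, so this is an assumption rather than a reproduction of the paper's equations.

```latex
% Plausible reduced streamwise balance for the dispersed phase and its
% wall-normal integral (stationary, fully developed, gravity neglected):
\frac{d}{dy}\,\overline{c\,v_x v_y} \;=\; -\,\overline{F}_x(y)
\quad\Longrightarrow\quad
\langle c\rangle\,\langle v_x v_y\rangle_c\Big|_{y}
\;=\; \int_0^{y}\bigl[-\,\overline{F}_x(y')\bigr]\,dy' ,
```

so that the spatially averaged particle stress is controlled by the integral of −F̄_x over the half-width δ, as invoked in the discussion of figure 38.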
X. CONCLUSION

This article presents a detailed description of the fluid phase dynamics of a particle-laden turbulent sheared suspension with volume fractions in the range φ = 1.75 × 10^{-4} to 1.05 × 10^{-3}, higher than the cases discussed in our previous work 11. DNS with two-way coupling is used to study the turbulence modulation by the high-inertia particles (St ∼ 367). Unlike turbulent channel flow, turbulent Couette flow is driven by the mean shear generated by the differential motion of the walls, and hence there is no mean imposed pressure gradient. The total shear stress remains constant across the channel width. In particle-laden turbulent shear flow the coherent structures and the fluid-particle interaction all differ from the channel geometry, and hence the turbulence modulation by high-inertia particles is expected to be different. However, the sheared turbulent suspension is found to show a discontinuous collapse of turbulence, very similar to what has been observed in a vertical channel 1. A detailed analysis of the momentum, mean K.E. and turbulent K.E. budgets at different volume fractions reveals that the catastrophic decrease in the shear production of turbulence plays the major role in the discontinuous transition. Moreover, the step-wise particle injection and removal study confirms that the catastrophic decrease is due to the presence of the particles only and does not show any hysteresis. The transition is also studied through an analysis of the streamwise velocity and vorticity fluctuation fields. The streaky structures of the low-speed streamwise velocity fluctuations are found to be broken down into smaller structures of very small magnitude at higher particle loading. The particle concentration field is uncorrelated with the fluid velocity and vorticity contours, as expected due to the very high particle inertia. The next part of the article focused on exploring the role of inter-particle collisions in the turbulence transition. In this regard, the ideal elastically colliding particle-laden turbulent suspension is compared with a suspension in which the inter-particle and wall-particle collisions are slightly inelastic. The introduction of slight inelasticity in the inter-particle and wall-particle collisions is found to increase the critical volume fraction marginally, with the qualitative nature of the streamwise velocity and vorticity fluctuation fields remaining similar to the elastic ideal collisional case. The explicit role of the inter-particle collisions is studied by simulating a hypothetical case where only the inter-particle collisions are switched off. Here the transition does manifest itself through a drastic reduction of the cross-stream and spanwise mean square velocities and the Reynolds stress. On the other hand, the transition shows only a change in the trend of the streamwise mean square velocity, which becomes similar to that of the particle phase instead of the unladen fluid phase. This change is found to be an effect of particle-induced fluid turbulence, where the time scale of the streamwise Lagrangian velocity autocorrelation becomes similar to that of the particle phase. The streamwise velocity fluctuation field shows a unique behaviour, where velocity fluctuations of different magnitudes arrange themselves in large parallel structures in the x-z plane, which gives a core-shell kind of appearance in the y-z plane, where the high-magnitude zones are present only in the central part.
The particle concentration field and the spanwise velocity and vorticity fields are not found to be correlated, although the particles prefer to concentrate near the wall, where the streamwise velocity fluctuation is very low. The absence of inter-particle collisions increases the critical volume fraction for the transition in turbulence, as it shows marginally higher shear production of turbulence at the same volume fraction than the cases where the inter-particle collisions are switched on. XI. APPENDIX A. Particle phase statistics in absence of inter-particle collisions The hypothetical study of switching off the inter-particle collisions, as discussed in section IX B, brings about a few interesting and different qualitative behaviours of the particle phase statistics, especially after the transition in turbulence. After the transition, unlike any of the cases discussed before, the particle mean velocity does not show any slip at the walls, along with a very high mean velocity gradient near the walls (Fig. 39(a)). The particles tend to accumulate more in the near-wall region for suspensions above the critical volume fraction, although very little variation of particle concentration is observed before the transition (Fig. 39(b)). Due to the absence of inter-particle collisions, the redistribution of particle x momentum to the y and z directions is drastically reduced. Hence we observe that the spanwise and cross-stream particle mean square velocities and the covariance ⟨v_x v_y⟩ drastically decrease. This decrease is about two orders of magnitude for ⟨v_y²⟩ and ⟨v_z²⟩ and one order of magnitude for ⟨v_x v_y⟩ with respect to the ideal collisional condition shown earlier. The most interesting trend is observed for ⟨v_x²⟩. An increase in ⟨v_x²⟩ values with a completely different profile, i.e., zero at the walls and maximum at the centre, occurs after the transition in turbulence as a result of shear-induced particle migration. Hence the different trends are found to be insensitive to small changes in volume fraction as well. The following analysis suggests that the streamwise fluctuation of the particle phase in the absence of particle-particle collisions originates from the wall-normal migration of the particles under the sheared velocity profile. In this mechanism, the magnitude of the induced streamwise velocity fluctuation can be estimated as v_x ≈ τ_v (∂U/∂y) v_y. For φ = 9.625 × 10⁻⁴, at y = δ, ⟨v_y²⟩ = 1.468 × 10⁻⁶; an average mean velocity gradient ∂U/∂y ≈ 1.0 and τ_v = 367 yield v_x ≈ 0.448, or ⟨v_x²⟩ ≈ 0.19. This is of the same order of magnitude as observed from Fig. 40(b), which is around 0.14. Thus the particle-phase statistics, along with the fluid phase statistics, play a very important role in understanding two-way coupled turbulent suspensions in the absence of collisions. FIG. 41: Effect of particle volume fraction on (a) span-wise mean square velocity, (b) particle phase shear stress in the absence of inter-particle collisions.
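The order-of-magnitude estimate above is easy to verify. A minimal Python check, using only the values quoted in the text (the variable names are ours):

import math

# A particle migrating across the mean shear for about one response time
# tau_v picks up a streamwise fluctuation v_x ~ tau_v * (dU/dy) * v_y',
# where v_y' is the rms wall-normal particle velocity.
tau_v = 367.0          # particle response time (quoted St ~ 367)
dU_dy = 1.0            # average mean velocity gradient quoted in the text
vy_sq = 1.468e-6       # wall-normal mean-square particle velocity at y = delta

vx = tau_v * dU_dy * math.sqrt(vy_sq)
print(f"v_x   ~ {vx:.3f}")     # ~0.445, matching the quoted 0.448
print(f"v_x^2 ~ {vx**2:.2f}")  # ~0.20, same order as ~0.14 read off Fig. 40(b)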
2022-08-09T15:10:01.429Z
2022-04-01T00:00:00.000
{ "year": 2022, "sha1": "7559eeb7a6b3c0d1ee8b5951b99fe2dd0ad8babb", "oa_license": null, "oa_url": "https://aip.scitation.org/doi/pdf/10.1063/5.0097173", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "73f23a27333965f35ef774ea5998c0c696d90dfe", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [] }
29368173
pes2o/s2orc
v3-fos-license
Spin polarised scanning tunneling probe for helical Luttinger liquids We propose a three-terminal spin-polarized STM setup for probing the helical nature of the Luttinger liquid edge state that appears in the quantum spin Hall system. We show that the three-terminal tunneling conductance strongly depends on the angle ($\theta$) between the magnetization direction of the tip and the local orientation of the electron spin on the edge, while the two-terminal conductance is independent of this angle. We demonstrate that chiral injection of an electron into the helical Luttinger liquid (which occurs when $\theta$ is zero or $\pi$) is associated with fractionalization of the spin of the injected electron in addition to the fractionalization of its charge. We also point out a spin current amplification effect induced by the spin fractionalization. Introduction :- A new class of insulators has recently emerged, called quantum spin Hall insulators, which have gapless edge states due to the topological properties of the band structure [1]. For a two-dimensional insulator, a pair of one-dimensional counter-propagating modes appears on the edges [1,2]; these are transformed into helical Luttinger liquids (HLL) by inter-mode Coulomb interactions [3]. Various aspects of this state [4][5][6][7][8][9] have been studied. The central point about the HLL is the fact that the spin orientation of the edge electrons, which is dictated by the bulk physics, is correlated with the direction of motion of the electron, i.e., opposite spin modes counter-propagate. The existence of such edge channels has already been detected experimentally in a multiterminal Hall bar setup [10]. But although this experiment does confirm the existence of counter-propagating one-dimensional (1-D) modes at the edge, it is not a direct observation of the spin degree of freedom. A central motivation of this letter is to suggest a setup wherein the structure of the spin degree of freedom on the edge can be directly probed. Motivated by the spin valve (SV) effect, the first idea to probe the spin degree of freedom would be to replace one of the ferromagnetic leads in a magnetic tunnel junction by the HLL and measure the magneto-resistance as a function of the relative orientation of the HLL spin and the magnetization direction of the ferromagnetic lead. However, the angle-dependent tunnel resistance for the SV depends directly on the degree of polarization of the two leads. For the HLL, although the edge modes have a specific spin orientation locally, they have no net polarization, and hence the tunnel resistance would be independent of the spin polarization of the ferromagnetic lead. In this letter, we show that switching to a three-terminal geometry involving a magnetized scanning tunneling microscope (STM) tip facilitates the detection of the spin orientation of the edge electron by inducing a finite three-terminal magneto-resistance. For a normal LL, it is not possible to inject an electron with a well-defined momentum (left or right movers) at a localized point in the wire, and hence extended wires were used as injectors in Ref. 11 to achieve chiral injection. But for the HLL, since the direction of motion is correlated with the spin projection, chiral injection (i.e., injecting only left movers or right movers) is possible even at a localized point in the wire. One just needs to tune the direction of polarization of the STM parallel (anti-parallel) to the polarization direction of the edge.
Once this is achieved, injection of an up-spin (down-spin) electron is equivalent to injecting right (left) movers. Hence the HLL has a natural advantage over a normal LL for chiral injection. As was experimentally demonstrated in Ref. 11, chiral injection of electrons can lead to an asymmetry in the currents measured on the two sides of the injection region, which is further modified by the LL interaction. In this letter, we show that in a similar setup, the left-right current asymmetry in the wire, when voltage biased with respect to the STM, also has a strong θ dependence due to the interaction-induced scattering of electrons between the right-moving (spin up) and left-moving (spin down) edges. For purely chiral injection (θ = 0, π), we find that the fraction of the total tunneling current measured at the left and right of the injection region is asymmetric and is given by the splitting factors (left) A_c1 = (1 ∓ K)/2 and (right) A_c2 = (1 ± K)/2 (where K is the LL parameter and the top and bottom signs are for θ = 0 and π respectively), in agreement with the results in Ref. 12; this is a manifestation of charge fractionalization of the injected electron. Observing the asymmetry with a spin-polarized STM as a local injector would be an indisputable sign of the helical nature of the edge states, since for the usual LL, no such current asymmetry would be expected for local injection. FIG. 1: The direction of orientation of the electron spin in the HLL is along the Z axis. The angle between the direction of orientation of the spin of the electrons in the edge and the majority spin in the STM tip is θ, and they are assumed to lie in the X-Z plane. The Y-axis points out of the plane of the paper. Here x and x′ represent the intrinsic one-dimensional coordinates of the STM tip and the wire. The most subtle outcome of our analysis is the fractionalization of the spin of the injected electron. In contrast to charge fractionalization, the spin gets fractionalized such that one of the fractionalized components turns out to be larger than the injected spin. The asymmetric fractions of the total injected electron spin current are given by A_s1 = (1 ∓ K)/4K and A_s2 = (1 ± K)/4K (upper and lower signs for θ = 0 and θ = π respectively) and are a manifestation of the fractionalization of the injected electron spin in the HLL. Note that for K < 1 (repulsive electrons) A_s2 > 1/2, thus resulting in an effective magnification of the injected spin current at the right lead. Geometry :- We propose a three-terminal junction as shown in Fig. 1. Three-terminal setups have also been used to study tunneling into a quantum wire in the Fabry-Perot regime [13]. The spins of the electrons in the edge states are polarized in some direction depending on details of the spin-orbit interaction in the bulk. We use a coordinate system which has its Z-axis along the direction of orientation of the spin of the edge electrons, and the plane containing the polarization directions of the edge electron and the tip electron is assumed to be the X-Z plane (see Fig. 1). Note that here we have assumed that the edge is smooth and is along a straight line, so that there is a well-defined quantization direction for the electron spin living on the edge. The Hamiltonian for the HLL takes the standard bosonized form [5], written in terms of the fields Φ = (φ_R↑ + φ_L↓)/2 and Θ = (φ_R↑ − φ_L↓)/2, where the φ_R↑/L↓ are related to the up- and down-spin electron operators in the edge by the standard bosonization formula. ζ and K are the short-distance cut-off and the Luttinger parameter respectively.
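The display for the HLL Hamiltonian is not reproduced in this excerpt. A sketch of a standard form consistent with the definitions just given is shown below in LaTeX; the overall prefactor and the placement of K are convention-dependent and may differ from the authors' equation:

% Sketch of a standard bosonized HLL Hamiltonian (conventions vary):
H_0 = \frac{v}{2\pi}\int \mathrm{d}x \left[ \frac{1}{K}\,(\partial_x \Phi)^2 + K\,(\partial_x \Theta)^2 \right],
\qquad
\Phi = \tfrac{1}{2}(\phi_{R\uparrow}+\phi_{L\downarrow}), \quad
\Theta = \tfrac{1}{2}(\phi_{R\uparrow}-\phi_{L\downarrow})
% with the standard bosonization identity for the edge electrons:
\psi_{R\uparrow/L\downarrow}(x) \sim \frac{1}{\sqrt{2\pi\zeta}}\, e^{\pm i\,\phi_{R\uparrow/L\downarrow}(x)}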
Unlike the standard LL, here the spin orientation is correlated with the direction of motion. We drop Klein factors as they are irrelevant for our computations. The Hamiltonian for the STM is assumed to be that of a free electron in 1-D. The tunneling Hamiltonian between the tip and the helical edge at the position x = 0, x′ = 0 couples the operators ψ_iα and χ_α, where i = R, L denotes right and left movers, α denotes the spin index, and ψ_iα and χ_α denote the electron destruction operators in the HLL and the STM respectively. A voltage bias in the tunneling operator can be introduced simply by replacing χ_α(x) → χ_α(x) e^{−iV t/ℏ}. We will, henceforth, drop the indices i, j denoting the direction of motion. Since the tunneling conserves spin, using a fully polarized STM with the polarization direction tuned along the positive or negative direction of the Z-axis will naturally allow for chiral injection, i.e., injecting only right (↑) or left (↓) movers. In the absence of interactions in the HLL, the chirally injected electron will cause both charge current and spin current to flow only to the right or to the left lead, hence leading to a left-right asymmetry. In the presence of interactions in the HLL, due to Coulomb scattering between the right and left movers, the chirally injected charge and spin degrees of freedom of the electron get fractionalized and move in both directions; however, in general, the left-right asymmetry still survives. Now, let us consider the fully polarized STM tip with the polarization direction making an arbitrary angle θ with respect to the spin of the HLL electron. In the quantization basis of the HLL spins, the tip spinor can be written as χ_rot = e^{−iθ σ·Ŷ/2} χ_T, where χ_T is the tip spinor in a basis where the spin quantization axis is along the STM polarization direction, i.e., χ_T = (χ_↑, 0). So χ_rot↑ = cos(θ/2) χ_↑ and χ_rot↓ = sin(θ/2) χ_↑. In other words, the electron in the tip has both ↑ and ↓ spin components, but the effective tunnel amplitudes are asymmetric (except when θ = π/2), and hence the current asymmetry survives. As a function of the rotation angle θ, the chiral injection goes from being a pure right mover at θ = 0 to a pure left mover at θ = π. Charge current :- The tunneling Hamiltonian can now be rewritten in terms of χ_↑ (Eq. 3), with amplitudes t_↑ = t cos(θ/2) and t_↓ = t sin(θ/2) that can be tuned by tuning θ. The Bogoliubov fields φ_R/L, which move unhindered in the right and left directions (henceforth we call them the right chiral and left chiral fields), are obtained by a rotation of the fields Φ and Θ. Note that the total electron density on the HLL wire can be expressed in terms of gradients of the chiral fields, ρ(x) = (1/2π) ∂_x(φ_R + φ_L), thus defining the chiral right (left) densities and the corresponding number operators as in Eq. 5. Next we define the operator corresponding to the chiral decomposition of the total charge current as I_tα = dN_α/dt = −i[N_α, H_t], where we have set ℏ = 1 and the electron charge e = 1, and α = R/L. Using the standard commutation relations of the chiral fields, [φ_↑/↓(x), φ_↑/↓(x′)] = ±iπ sgn(x − x′), the chiral currents can be found (Eq. 6). I_t(θ) = I_tL(θ) + I_tR(θ) is the total tunneling charge current operator for an arbitrary value of θ, and the expectation values of the current operators in linear response follow from standard perturbation theory. Since the HLL Hamiltonian is left-right symmetric in the absence of the tip and the tip is fully polarised, the value is equal for θ = 0 and θ = π and given by I_t(θ = 0) = I_t(θ = π) = I_0.
Using the well-known correlation functions of the LL at finite temperature T, we find an expression for I_0 (Eq. 8), in which Λ is an ultra-violet cutoff and ν is the Luttinger tunneling exponent, given by ν = −1 + (K + K⁻¹)/2. Here we have assumed that T ≫ T_L, T_V, where T_L is the temperature equivalent of the length of the wire, defined by k_B T_L = ℏv/L, and T_V = eV/k_B is the temperature equivalent of the bias voltage. Using these values, we now obtain the currents heading to the right and left ends of the wire. Note that even though the left and right chiral currents, which will be measured at the right and left contacts, depend on θ, the total tunneling current I_t(θ) = I_tL(θ) + I_tR(θ) is independent of θ. Thus we show that, unlike the two-terminal tunnel current, the three-terminal current is clearly not independent of θ. This is one of the key results of this letter. Spin currents :- The isolated HLL, even in equilibrium, has a persistent spin current because of the correlation of the direction of spin with the direction of motion, but no charge current. However, here we would like to compute the excess spin current that is caused by the inflow of electrons from the STM tip into the edge mode. The tunneling-induced magnetization of the edge state can be defined as S = ∫dx s(x), where s(x) is the local spin density. Hence the spin current can be defined as dS/dt = −i[S, H_t]. Now using bosonization, it is straightforward to evaluate the X, Y, and Z components of the spin current operator (Eq. 10). Note that Ṡ_X and Ṡ_Z are expressible in terms of the current operator, while Ṡ_Y is expressible only in terms of the tunnel Hamiltonian given in Eq. 3. The difference is related to the fact that only the X and Z components of the spin are relevant, as the injected electron spin from the STM has no component along the Y direction. Hence Ṡ_Y is expected to be zero, and indeed the expectation value of Ṡ_Y is easily seen to be zero, since H_t is left-right symmetric. Now using Eqs. 6, 9 and 10, we obtain expressions (within linear response) for the spin currents towards the left and right contacts (Eq. 11). Hence, for arbitrary values of θ, the spin currents collected at the right and the left contacts are asymmetric. Now using Eqs. 9 and 11, it is easy to check that the total injected spin current points exactly along the magnetization direction of the STM, as expected. Charge and spin fractionalization :- Recently, the issue of charge fractionalization has been addressed both theoretically and experimentally in Refs. 11, 12 and 13. The fractionalization of a chirally injected electron charge into the HLL at a point (x = 0) can be understood by considering the commutator of the chiral densities with the electron creation operator. It implies that the creation of a single right-moving electron at x = 0 simultaneously creates an excitation of charge (1 ± K)/2 in the right- and left-going chiral densities, thus leading to fractionalization of the electron. (A similar equation, with an overall sign change, works for the left movers.) Note that the splitting of the total tunneling current into its chiral components (see Eq. 6) is exactly consistent with the splitting of the electron charge. Hence measuring the chiral currents can provide information about the charge fractionalization, as demonstrated in Ref. 11. Similarly, to study spin fractionalization, we bosonize the Z-component of the spin density s_Z(x); this defines s_Z,R/L(x) = ±(1/2K) ρ_R/L(x).
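The commutator displays themselves are not reproduced in this excerpt. The LaTeX sketch below is a plausible reconstruction, with normalizations chosen to reproduce the splitting factors quoted in the text; it should not be read as the paper's verbatim equations:

% Charge: creating a right-moving electron at x = 0 deposits (1 +/- K)/2
% into the right/left chiral densities (cf. the factors quoted above):
[\tilde{\rho}_{R/L}(x),\, \psi^{\dagger}_{R\uparrow}(0)] = \frac{1 \pm K}{2}\,\delta(x)\,\psi^{\dagger}_{R\uparrow}(0)
% Spin: with s_{Z,R/L}(x) = \pm (1/2K)\,\tilde{\rho}_{R/L}(x), the same event deposits
[s_{Z,R}(x),\, \psi^{\dagger}_{R\uparrow}(0)] = +\frac{1+K}{4K}\,\delta(x)\,\psi^{\dagger}_{R\uparrow}(0),
\qquad
[s_{Z,L}(x),\, \psi^{\dagger}_{R\uparrow}(0)] = -\frac{1-K}{4K}\,\delta(x)\,\psi^{\dagger}_{R\uparrow}(0)
% i.e., spin (1 +/- K)/2K in units of the electronic spin quantum \hbar/2,
% matching the factors discussed in the next paragraph.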
Now let us consider the analogous commutator for the chiral spin densities. It implies that the creation of a single right-moving electron at x = 0 simultaneously creates spin excitations of spin (1 ± K)/2K (in units of electronic spin quanta) in the right- and left-going chiral spin densities, thus leading to a K-dependent fractionalization of the spin of the injected electron. Now let us consider the Z-component of the spin current operator given in Eq. 10. This operator can be chirally decomposed as in Eq. 15. For chiral injection (i.e., θ = 0, π), we note that the splitting of the total tunneling spin current (I_0/2) into its chiral components is given precisely by (1 ± K)/2K, which is exactly consistent with the splitting of the electron spin evaluated from the commutator. Intriguingly, one of the splitting fractions, (1 + K)/2K, is larger than unity for K < 1 (i.e., for repulsive electrons). Hence in the three-terminal geometry one obtains an interaction (K ≠ 1) induced amplification of the injected spin current. Discussion :- Regarding the application of our work to realistic systems, we first point out that our work is directly applicable to edge states in graphene with a small spin-orbit coupling [14] and to other genuine quantum spin Hall insulators like the model considered in Ref. 15 and Bi [16]. However, for HgTe/CdTe quantum wells, where the spin projections actually refer to a pseudospin related to the two block-diagonal parts of the effective Hamiltonian written in the |E1, m_J = +1/2⟩, |H1, m_J = +3/2⟩, |E1, m_J = −1/2⟩ and |H1, m_J = −3/2⟩ basis [17], we need to modify our computations. Fig. 1 is still applicable, with the Z-axis now referring to the crystal growth axis of the quantum well. But the ψ_↑/↓ states that we have considered in Eq. 3 are no longer right or left movers even before interactions have been introduced. We need to introduce the right- and left-moving fields η_↑/↓, given by ψ_↑ = a η_↑ + b η_↓ and ψ_↓ = a′ η_↑ + b′ η_↓, where a, b, a′, b′ are material-dependent parameters which denote how the pseudospin states are related to the real spin of the electron. Hence, the tunneling Hamiltonian in Eq. 3 can be rewritten accordingly. We get pure right-moving or left-moving currents at tan θ/2 = −a′/a or at tan θ/2 = −b′/b. Note that the angle at which the left-moving current disappears is not exactly opposite to the angle at which the right-moving current vanishes, since the real spins of the left movers and right movers need not be equal and opposite. With interactions, it is the η_↑/↓ fields which are bosonised, and the rest of the formulation goes through provided that the non-interacting reference angles (i.e., the coefficients a, b, a′, b′) are known. But determining both a, b, a′, b′ and K is a non-trivial problem. However, if the experiment could be carried out at different temperatures at fixed θ, then, since the current I_0 (defined in Eq. 8) depends only on K and not on θ, it may be possible in principle (albeit difficult in practice) to extract the value of K from the power-law dependence of the current. Moreover, the edge states could be known (from other experiments) to be in the weakly interacting regime (K ≈ 1). In these cases, this setup can be used to extract the values of the coefficients a, b, a′, b′. Conclusion :- To summarise, in this letter, we have proposed a three-terminal polarised STM set-up as a probe for the HLL. We suggest that the spin-polarized tip can facilitate local chiral injection.
This leads to current asymmetries, with specific θ dependence, whose measurement can lead to undisputed confirmation of the helical nature of the edge state. Chiral injection of the electron into the HLL is also shown to be directly related to the physics of fractionalization of the injected electron spin in addition to the fractionalization of its charge. We also point out that spin fractionalization leads to a spin current amplification effect in the three terminal geometry. SD would like to thank C. Brüne, Y. Gefen, M. Zahid Hasan, M. König, A. Mirlin, Y. Oreg, G. Refael, K. Sengupta, S. Simon, M. R. Wegewijs and A. Yacoby for stimulating discussions. Both of us would like to thank the referee for useful suggestions.
2011-05-17T12:06:23.000Z
2010-06-11T00:00:00.000
{ "year": 2010, "sha1": "70dbf065393a8624bc5e3aa4b34e1990e4ea97ff", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1006.2239", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "70dbf065393a8624bc5e3aa4b34e1990e4ea97ff", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
244879698
pes2o/s2orc
v3-fos-license
Associations of Monocytes and the Monocyte/High-Density Lipoprotein Ratio With Extracranial and Intracranial Atherosclerotic Stenosis Background: Although the monocyte/high-density lipoprotein ratio (MHR) has been shown to be a potential inflammatory marker of cardiovascular and cerebrovascular diseases, there are few studies on its relationships with the degree of intracranial and extracranial atherosclerotic stenosis and the stenosis distribution. Methods: In total, 271 patients were admitted for digital subtraction angiography (DSA) examination and were classified into a non-stenosis group and a stenosis group. (1) The two groups were compared and the arteries were categorized according to the degree of intracranial or extracranial atherosclerotic stenosis (if ≥ two branches were stenotic, the artery with the most severe stenosis was used). (2) Clinical baseline data and laboratory indexes of patients grouped according to stenosis location (intracranial vs. extracranial) were collected. Results: (1) MHR × 10² [odds ratio (OR) = 1.119, p < 0.001], age (OR = 1.057, p = 0.007), and lymphocyte count (OR = 0.273, p = 0.002) significantly affected the presence of cerebral atherosclerotic stenosis, with an area under the receiver operating characteristic (ROC) curve (AUC) of 0.82 for the MHR and an optimal diagnostic value of 0.486. Analyses of the mild, moderate, and severe stenosis groups showed that MHR × 10² (OR = 1.07, p < 0.001) significantly affected the severity of stenosis. (2) In the analysis of stenosis at different sites, smoking (OR = 3.86, p = 0.023) and a reduced lymphocyte level (OR = 0.202, p = 0.001) were associated with a remarkably greater rate of extracranial artery stenosis. With increasing age, the rate of extracranial artery stenosis rose sharply. With increasing MHR level, the stenosis rate of each stenosis group was markedly greater than that of the non-stenosis group. Conclusion: The MHR has a predictive value for the diagnosis of extracranial and intracranial atherosclerotic stenosis and is correlated with the degree and distribution of stenosis. Trial Registration: Clinical Medical Research Center Project of Qinghai Province (2017-SF-L1). Qinghai Provincial Health Commission Project (Grant #2020-wjzdx-29). BACKGROUND Atherosclerosis is a common chronic illness characterized by endovascular atheroma or fibrous plaque. Its pathophysiological changes, namely, arterial wall hardening, decreased elasticity, and lumen stenosis or occlusion, are important risk factors for the occurrence and development of ischemic cerebrovascular diseases and mortality (1,2). Inflammatory factors play an essential role in lipid metabolic disorders, and their importance in thrombosis, plaque rupture, and stenosis in atherosclerosis is being increasingly reported (3). Studies on the correlations of the monocyte/high-density lipoprotein ratio (MHR) [determined by dividing the absolute monocyte count by the high-density lipoprotein cholesterol (HDL-C) level] with coronary stenosis and myocardial infarction have shown that the MHR is a potential inflammatory marker of cerebrovascular and cardiovascular diseases (4,5). Currently, there are few reports on cerebrovascular diseases, and most are relevant to the occurrence and prognosis of ischemic stroke (6,7). Digital subtraction angiography (DSA) is regarded as the gold standard for diagnosing intracranial and extracranial arterial stenosis.
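For concreteness, the index itself is simple to compute. A minimal Python sketch (the field names are ours, and the 0.486 cutoff is the optimal diagnostic value reported in this study's ROC analysis, not a general clinical threshold):

def mhr(monocytes_10e9_per_L: float, hdl_c_mmol_per_L: float) -> float:
    """Monocyte/HDL-C ratio: absolute monocyte count divided by HDL-C level."""
    return monocytes_10e9_per_L / hdl_c_mmol_per_L

MHR_CUTOFF = 0.486  # optimal diagnostic value from this study's ROC curve (AUC = 0.82)

def flag_stenosis_risk(monocytes: float, hdl_c: float) -> bool:
    """True if the patient's MHR exceeds the study's diagnostic cutoff."""
    return mhr(monocytes, hdl_c) > MHR_CUTOFF

print(flag_stenosis_risk(monocytes=0.55, hdl_c=1.0))  # illustrative values -> True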
However, studies using DSA to investigate the relationship between intracranial or extracranial arterial stenosis and the MHR are rarely reported. This study investigated the relationships between the MHR and intracranial and extracranial arterial stenosis and related risk factors, aiming to provide a reliable theoretical foundation to guide the treatment and prevention of intracranial and extracranial arterial stenosis. Study Population From May 2017 to May 2020, a total of 216 inpatients with intracranial and extracranial atherosclerotic stenosis confirmed by cerebrovascular DSA examination at the Qinghai Provincial People's Hospital were consecutively enrolled. There were 55 hospitalized patients without intracranial and extracranial atherosclerotic stenosis. DSA examination included the aortic arch, subclavian artery, vertebral artery, common carotid artery, and internal carotid artery. The common carotid artery, extracranial internal carotid artery (cervical segment, petrous segment, lacerum segment, and so on), V1-V3 segments of the vertebral artery, subclavian artery, and external carotid artery were classified as extracranial arteries. Intracranial arteries included the intracranial internal carotid artery (ophthalmic segment and communicating segment), A1-A2 segments of the anterior cerebral artery, basilar artery, M1-M2 segments of the middle cerebral artery, P1-P2 segments of the posterior cerebral artery, and V4 segment of the vertebral artery. The exclusion criteria were as follows: acute cardiovascular disease, signs of acute infection, immunosuppressive therapy, tumor, hematological system disorder, connective tissue disease, severe liver and kidney function impairment, moyamoya disease, and arteriovenous malformation. The Qinghai Provincial People's Hospital Ethics Committee approved this study. Blood Analysis Methods Basic clinical data, such as gender, age, ethnicity, hypertension, drinking, smoking, diabetes mellitus, and cerebral infarction, were collected from patients meeting the inclusion criteria. Additionally, laboratory measures of monocytes, HDL-C, and the MHR were obtained within 24 h after admission (with monocytes and HDL-C measured from the same initial blood sample). (1) First, the stenosis and non-stenosis groups were compared. Then, the DSA examination results were assessed based on the diagnostic criteria developed for the Warfarin-Aspirin Symptomatic Intracranial Disease Study (8) to evaluate the degree of intracranial and extracranial atherosclerotic stenosis, which was calculated as follows: Degree of stenosis (%) = (1 − diameter at the narrowest point of the narrow segment / diameter of the proximal normal vessel) × 100%. According to the degree of arterial stenosis, the patients were divided into a mild stenosis group (stenosis degree of 29% or less), a moderate stenosis group (stenosis degree of 30-69%), and a severe stenosis group (stenosis degree of 70-99%). The aim was to study the factors influencing the degree of atherosclerotic stenosis and to analyze the predictive value of the MHR for cerebral atherosclerotic stenosis. (2) Based on the location of extracranial and intracranial atherosclerotic stenosis, the patients were divided into four groups: a non-stenosis group, an intracranial atherosclerosis only group (ICAS group), an extracranial atherosclerosis only group (ECAS group), and an intracranial and extracranial atherosclerosis group (I-ECAS group).
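The WASID-style degree formula and the grouping just described can be made concrete in a short Python sketch (variable names are ours):

def stenosis_degree(d_narrowest_mm: float, d_proximal_normal_mm: float) -> float:
    """Degree of stenosis (%) = (1 - d_narrowest / d_proximal_normal) * 100."""
    return (1.0 - d_narrowest_mm / d_proximal_normal_mm) * 100.0

def stenosis_group(degree_pct: float) -> str:
    """Grouping used in this study: mild <=29%, moderate 30-69%, severe 70-99%."""
    if degree_pct <= 29.0:
        return "mild"
    if degree_pct <= 69.0:
        return "moderate"
    return "severe"

deg = stenosis_degree(d_narrowest_mm=1.2, d_proximal_normal_mm=4.0)
print(f"{deg:.0f}% -> {stenosis_group(deg)}")  # 70% -> severe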
The non-stenosis group served as a control group and was compared with the other three groups to analyze influencing factors. Statistical Analyses The SPSS software version 26.0 (Chicago, Illinois, USA) was employed for the analysis of the data. The chi-squared test was used for count data. The experimental data were examined for normality with the Shapiro-Wilk test; those with normal distributions were expressed as the mean ± SD and analyzed with one-way ANOVA. The baseline characteristics of the non-stenosis group and the stenosis group were compared with the Mann-Whitney U-test. Data distributed non-normally were expressed as medians (lower quartile-upper quartile) and analyzed with non-parametric tests. The predictive power of the MHR for the occurrence of cerebral atherosclerotic stenosis was analyzed using the receiver operating characteristic (ROC) curve, and the optimal threshold was determined. Finally, variables with statistical significance (p < 0.05) in the univariate analyses were entered into the logistic regression model. Analysis of Logistic Regression and Receiver Operating Characteristic Curve for Cerebral Artery Stenosis Age, neutrophil count, white blood cell (WBC) count, the MHR, C-reactive protein (CRP), the proportion of males, smoking, drinking, hypertension, and diabetes mellitus were greater in the stenosis group (n = 216) than in the non-stenosis group (n = 55), and the lymphocyte count was lower in the stenosis group; these differences were remarkable (p < 0.05). In the univariate and multivariable logistic regression analyses, age was an independent variable found to be positively associated with the probability of stenosis [p = 0.007 < 0.05, odds ratio (OR) = 1.057 > 1]. A higher MHR × 10² value was associated with a greater probability of stenosis (p < 0.001, OR = 1.119 > 1), and a higher lymphocyte count was associated with a lower probability of stenosis (p = 0.002 < 0.05, OR = 0.273 < 1), as shown in Table 1. The ROC curve analysis of the MHR and cerebral atherosclerotic stenosis yielded an area under the ROC curve (AUC) of 0.82, and the optimal diagnostic value was 0.486; the results are plotted in Figure 1. Patients with mild (n = 72), moderate (n = 35), and severe (n = 60) stenosis were selected for analysis, and the group differences in WBC count, neutrophil count, CRP, apolipoprotein A, and the MHR were significant (p < 0.05). The variables found to be significant in the univariate ordered logistic regression analysis were included in the multivariate ordered logistic regression analysis, with the severity of stenosis as the dependent variable. The results showed that the MHR alone significantly influenced the severity of stenosis (p < 0.001, OR = 1.07 > 1), i.e., the greater the MHR value, the greater the stenosis severity, as shown in Table 2. Analysis of the Factors Correlated With the Distribution of Atherosclerotic Stenosis in Intracranial and Extracranial Arteries There were remarkable differences in sex, smoking, drinking, diabetes mellitus, hypertension, WBC count, neutrophil count, lymphocyte count, CRP, low-density lipoprotein (LDL), apolipoprotein A, and the MHR among the atherosclerosis-free, ICAS, ECAS, and I-ECAS groups (p < 0.05), as shown in Table 3.
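An "optimal diagnostic value" such as 0.486 is typically read off a ROC curve by maximizing Youden's J = sensitivity + specificity − 1 over candidate thresholds. A Python sketch of that procedure (the data here are synthetic and only illustrative; the study itself used SPSS 26.0):

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y = np.r_[np.zeros(55), np.ones(216)]                     # non-stenosis vs stenosis labels
mhr = np.r_[rng.normal(0.35, 0.10, 55), rng.normal(0.60, 0.15, 216)]  # synthetic MHR values

fpr, tpr, thresholds = roc_curve(y, mhr)
j = tpr - fpr                                             # Youden's J statistic
best = thresholds[np.argmax(j)]
print(f"AUC = {roc_auc_score(y, mhr):.2f}, optimal cutoff = {best:.3f}")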
Logistic Regression Analysis of Atherosclerotic Stenosis Distribution in Intracranial and Extracranial Arteries Following the univariate ordered logistic regression analysis, the multivariate ordered logistic regression analysis was conducted, which excluded CRP and LDL but included platelets. The results suggested that age was significantly and positively associated with the probability of simple extracranial stenosis (p = 0.003 < 0.05, OR = 1.066 > 1) and the probability of combined intracranial and extracranial atherosclerotic stenosis (p < 0.001, OR = 1.102 > 1). Smoking (p = 0.023 < 0.05, OR = 3.86 > 1) significantly increased the incidence of simple extracranial atherosclerotic stenosis. The higher the lymphocyte value, the lower the probability of developing simple extracranial atherosclerotic stenosis (p = 0.001 < 0.05, OR = 0.202 < 1). A higher MHR × 10² value was associated with higher probabilities of simple intracranial atherosclerotic stenosis (p < 0.001, OR = 1.12 > 1), simple extracranial atherosclerotic stenosis (p < 0.001, OR = 1.121 > 1), and combined intracranial and extracranial atherosclerotic stenosis (p < 0.001, OR = 1.147 > 1), as shown in Table 4. DISCUSSION Atherosclerosis is a common chronic inflammatory disease. Inflammation is an important pathophysiological mechanism of atherosclerotic thrombosis, plaque rupture, and stenosis or occlusion. Monocytes are immune cells; when the vascular endothelium is damaged, the expression of adhesion molecules on the surface of these cells increases. Upon stimulation by cytokines, these cells transform into macrophages. Phagocytosis of lipids occurs, followed by the formation of foam cells under scavenger receptor mediation; these changes mark the initial phase of atherosclerosis and the transition from a stable to an unstable state. The lipid core of atherosclerotic lesions contains not only lipid deposits but also a variety of immune cells derived from monocytes and macrophages, as well as T cells, mast cells, and dendritic cells, which play major roles in the proliferation and progression of atherosclerosis (9,10). Monocytes can aggravate inflammation and promote the development and instability of plaques, local thrombosis, and a series of responses, thus aggravating vascular stenosis. Dyslipidemia is another significant risk factor for atherosclerosis. The main function of HDL-C is the reverse transport of total cholesterol from the tissues toward the liver and out of the body. HDL-C can reduce thrombosis risk via platelet stabilization and decrease leukocyte adhesion to stable plaques. HDL-C can also prevent LDL oxidation and exhibits antithrombotic and anti-inflammatory properties, thereby playing a protective role (11,12). Studies have revealed great prospects for HDL-C infusion in the treatment of atherosclerosis (13,14). Monocytes are closely related to HDL-C. Abnormal levels of blood lipids, especially elevated cholesterol, can stimulate the production of monocytes in the circulation. Furthermore, HDL-C can reduce the monocyte inflammatory response (15). The ability of monocytes to phagocytose lipid particles is enhanced in atherosclerotic stenosis, making blood lipids more likely to be deposited at the stenosis (16). Therefore, it is speculated that the MHR has more advantages than monocytes and HDL-C alone as an inflammatory marker. This study found that the MHR is an independent risk factor for the occurrence of cerebral atherosclerosis.
ROC curve analysis showed that the AUC of the MHR was 0.82 and the optimal diagnostic value was 0.486, showing that the MHR can be used as a good predictor of the occurrence of intracranial and extracranial atherosclerotic stenosis. In addition, age has been proven to be one of the most obvious independent risk factors for the incidence of intracranial and extracranial atherosclerosis (17,18), which is consistent with the results of this study. The MHR is linked with cerebral atherosclerotic stenosis. However, there are few studies on the correlation between the MHR and the occurrence or degree of extracranial and intracranial atherosclerotic stenosis. From the analysis of the mild, moderate, and severe stenosis groups, this study concluded that the MHR significantly affects the degree of stenosis. Chen (16) found that monocytes are closely related to the degree of peripheral atherosclerotic stenosis. A population-level study of arterial atherosclerotic ischemic stroke in southern China found an association between HDL and severe cervicocerebral atherosclerotic stenosis (CCAS) (19). An elevated MHR level reflects increased degrees of inflammation and oxidative stress and an increased severity of coronary artery stenosis (4,20). Domestic and international studies have found ethnic differences in the frequencies of extracranial and intracranial atherosclerotic stenosis. In Europe and the United States, extracranial artery stenosis is dominant, while in Asia, intracranial arterial stenosis is more common (21,22). However, in this study, the rate of extracranial artery stenosis was slightly greater than that of intracranial stenosis, which is consistent with the increasing prevalence of extracranial artery stenosis in Chinese people revealed by epidemiological surveys in recent years (23). The higher rate of extracranial than intracranial artery stenosis in this study may be due to the following factors: (1) Regional, dietary, and lifestyle differences. In high-altitude areas, the large temperature difference between day and night and the cold climate can limit the availability of fruits and vegetables. In addition, the dietary habits of the population include a high intake of meat, which can increase blood lipid levels and the AIP (the atherogenic index of plasma). Moreover, long-term exposure to a hypoxic environment changes the blood microcirculation, anatomy, and physiology (24). (2) Aging: with aging, the proportion of stenosis cases involving intracranial arteries decreases, while the proportion involving extracranial arteries increases (25). China's aging society may exacerbate this phenomenon. Conclusions vary with respect to the factors that influence the intracranial vs. extracranial distribution of atherosclerotic stenosis. This study concluded that male sex and smoking are independent risk factors for extracranial atherosclerosis alone, which is consistent with previous large-sample data studies (22,26). Men are more prone to intracranial and extracranial atherosclerosis than women, which reflects the protective effects of estrogen on the cardiovascular and cerebrovascular systems, such as its direct effect on the vascular wall and its beneficial effects on lipid composition.
Estrogen receptor alpha 36 (ERα36) and G protein-coupled receptor 30 (GPR30)/G protein-coupled estrogen receptor (GPER1) signaling has been found to play an anti-inflammatory role in monocyte-/macrophage-related inflammatory processes (27). Recent studies have shown that estrogen can activate the GPER signaling pathway, which results in decreased SR-BI expression in endothelial cells and thus significantly reduces the transport of LDL-C (28). Furthermore, estrogen can inhibit hepatic lipase activity, improve the level of circulating HDL-C, reduce blood cholesterol and LDL-C, and directly interact with HDL-C to inhibit the oxidation of LDL-C, thus preventing atherosclerosis (29). However, diabetes mellitus was not found to be associated with extracranial or intracranial atherosclerotic stenosis, which is in contrast to previous results indicating that diabetes is a risk factor for intracranial artery stenosis (24,26,30). This result may be due to the following: (1) Chronic hypoxic acclimatization at high altitude increases the dependence of the body on glucose and enhances glucose utilization. (2) With the improvement in the standards of living of the residents, the incidence of diabetes mellitus has been rising rapidly. Diabetes mellitus has been shown to increase the incidence and burden of vascular risk factors and is common in both intracranial and extracranial atherosclerotic stenosis (31,32). This study concluded that age is an independent risk factor for intracranial and extracranial atherosclerosis, and previous work has shown that the incidence of cerebral artery stenosis rises significantly with age (25). It is generally believed that the occurrence of extracranial artery stenosis is more strongly correlated with age than that of intracranial arterial stenosis (22,30). However, a postmortem report (33) showed that the frequency of intracranial arterial stenosis also increased with age. In addition, the results of this study suggest that lymphocytes might have protective effects against intracranial arterial stenosis. Recent studies have found that the number of circulating lymphocytes is significantly reduced during the progression of atherosclerotic lesions, which may be related to weakened adaptive immunity and healing effects in the atherosclerotic process (3,34). The number of lymphocytes is highly related to the presence of extracranial artery stenosis. The lack of elastic fibers in intracranial vessels, the dense internal elastic layer, and the increase in antioxidant enzyme activity with age provide good barrier effects. Intracranial atherosclerotic stenosis appears later than extracranial atherosclerotic stenosis, and lymphocyte values are reduced in intracranial arterial stenosis (17). This study found that an elevated MHR value was related to significantly raised risks of simple intracranial arterial stenosis, combined extracranial and intracranial arterial stenosis, and simple extracranial arterial stenosis. The identification of the MHR as a common independent correlated factor in the ICAS, ECAS, and I-ECAS groups confirmed the MHR to be closely related to cerebral atherosclerotic stenosis. Although the "gold standard" of cerebrovascular DSA examination was used in this study to diagnose intracranial and extracranial atherosclerotic stenosis, this method is invasive, risky, and costly; thus, its use is mainly limited to the subset of patients with cerebral infarction who require surgery.
For the patients in this study, DSA was found to be reliable for determining the stenosis rate, with good precision and other advantages. However, patients with mild or no symptoms who did not opt for cerebrovascular DSA examination could not be included in this study. Thus, the total sample size should be increased in future studies to verify the present results. Moreover, this study was limited to patients in plateau regions. In addition, data on the long-term (6 months or longer) clinical outcomes of patients are crucial to enhance the use of the MHR. Thus, follow-up clinical control studies should be conducted at multiple centers and in multiple regions to confirm the findings. CONCLUSION In conclusion, as a risk factor for extracranial and intracranial atherosclerotic stenosis, the MHR has predictive value and is highly related to the severity and location of stenosis. This study also expounds on the role of inflammation in the development of extracranial and intracranial atherosclerotic stenosis, providing a theoretical basis for targeted interventions. Such interventions would reduce the incidence of ischemic cerebrovascular disease caused by cerebral atherosclerotic stenosis and provide better health services to the residents of high-altitude areas. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Ethics Committee of Qinghai Provincial People's Hospital. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS ZL, SW, and QF performed the study design, interpretation of the results, and statistical analyses. YL participated by analyzing and resolving difficulties with the analytic strategies and contributed to the discussion. QF performed the final review and is the corresponding author. All authors read and approved the final manuscript. FUNDING This study was supported by grants from the Clinical Medical Research Center Project of Qinghai Province (Grant #2017-SF-L1) and the Qinghai Provincial Health Commission Project (Grant #2020-wjzdx-29). We confirm that all aspects of the work in this manuscript involving human patients were carried out with the ethical approval of all the relevant agencies.
2021-12-05T16:10:58.242Z
2021-12-03T00:00:00.000
{ "year": 2021, "sha1": "78936a23c76be9a133c4e1dec93861a00fa47d0a", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2021.756496/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "353f3f025030ef2f972368e8ba873764c6e50e0d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
234202448
pes2o/s2orc
v3-fos-license
A Study of Negative Polarity Items in Chinese Existential Sentences Negation is crucial to semantics. Negative polarity items (NPIs) play an important role in negation. There are a few studies on NPIs in Chinese, but so far no research addresses NPIs in Chinese existential sentences (ESs). In Chinese ESs the existential verb "you" (have) combines with the single negative marker "mei" (not), unlike in other types of sentences, where "meiyou" (not have) as a whole functions as the negative marker. We wonder whether this combination of a single negative marker and an existential verb affects the licensing conditions of NPIs. This is why we study negative polarity items in Chinese ESs. We investigate the variety of negative polarity items which can be allowed in Chinese ESs, and their licensing conditions. It is found that four types of negative polarity items can occur in Chinese ESs, i.e., negative polarity adjectives, negative polarity adverbs, negative polarity wh-words, and the negative polarity 'one' phrase as minimizer. In this paper, we focus on the last two types of negative polarity items that can occur in Chinese ESs. The linguistic facts show that negative polarity wh-words (except 'duoshao') and the negative polarity 'one' phrase as minimizer in Chinese ESs can be licensed by negative sentences, yes-no interrogative sentences, A-not-A interrogative sentences, and the antecedent clause of a conditional. We claim that negative polarity wh-words (except 'duoshao') and the negative polarity 'one' phrase as minimizer in Chinese ESs are strong NPIs. Previous studies have found that NPIs can be licensed by negative sentences in Chinese. Our new contribution to the field is the finding that NPIs in Chinese ESs can also be licensed by yes-no interrogatives, A-not-A interrogatives, and the antecedent clause of a conditional, apart from negative sentences. This finding is to some extent applicable to NPIs in other Chinese constructions. Introduction Negative polarity refers to the grammatical property of a word or phrase, such as ever or any in English, that may normally be used only in a semantically or syntactically negative or interrogative context [1]. Words or expressions that either require or shun the presence of a negative element in their context are referred to as negative or positive polarity items (henceforth NPIs and PPIs), respectively. Often-cited examples of NPIs in English are any and yet, while some and already are instances of PPIs. Many languages, perhaps all, have NPIs and PPIs, and their distribution has been the topic of a rapidly growing literature since the seminal work of Klima [2][3]. Negative polarity items also occur in Chinese existential sentences, for example: (1) Chouti li meiyou renhe dongxi. Drawer in not-have NEG any thing 'There isn't anything in the drawer.' (2) Qiang shang meiyou renhe dongxi. Wall on not-have NEG any thing 'There is not anything on the wall.' (3) Zhe li shenme xiansuo ye meiyou. Here what news either not-have NEG 'There is not any news here.' (4) Zhe li yewan meiyou na er ke qu. Here night not-have NEG where may go 'There is no place to go at night here.' (5) Ta lian shang meiyou yisi xiaorong. He face on not-have NEG one CLF smile 'There is not a trace of smile on his face.' (6) Zhe ge shichang meiyou yidian er huoli. This CLF market not-have NEG one CLF vitality 'There is not a little vitality in this market.'
Examples (1) to (6) are existential sentences, in which 'renhe' in examples (1) and (2), 'shenme' in example (3), 'na er' in example (4), 'yisi' in example (5), and 'yidian er' in example (6) are negative polarity items; they modify the existential entities and occur in negative sentences, enhancing the negative effect. Huang [4] argues that existential you-sentences in Chinese can be represented by the general form (NP)…V…NP…(XP), where the first NP is optional, either left empty or filled by a locative NP functioning as a subject, V is filled by the existential verb 'you' (have), the second NP is the existent entity, and XP is also optional and can be filled by either a clause or a phrase predicating over the existent entity. Huang [4] shows that existential you-sentences in Chinese also exhibit the Definiteness Effect (DE). According to Hu and Pan [5], the basic function of Chinese existential you-sentences is to introduce new information into the discourse; the new information can be either a new entity or a new relation, and the so-called Definiteness Effect (DE) is only a by-product of the discourse function of the existential construction. Gu [6] has proposed a non-movement analysis for the subject of the two types of locative existential construction in Chinese. She suggests that in both types of locative existential construction, the locative subject is base-generated in that position, which is theta-marked by the verb. Tsai [7] explores three types of existential quantification in Chinese, you, you-de, and you-(yi)-xie, arguing that while presentational you counts as a sentential unselective binder, partitive you and specific plural you are to be treated as determiners. Negation plays an important role in existential quantification. Negative polarity items generally occur in negative environments, where they can enhance the negative effect. The core research issues about NPIs are the proper delimitation of their distribution and the underlying causes of that distribution. As for the issue of the distribution of NPIs, there has been a debate as to whether it should be explored from a syntactic, semantic, or pragmatic perspective. The studies of NPIs run through the history of generative grammar [1]. As for the study of negation, the early corpus work was mainly focused on English and has gradually been extended to some other languages over the past 30 years. The empirical evidence across languages indicates that polarity is not only determined by syntactic categories but also by semantic factors, involving the essence of grammatical structures and language knowledge. The traditional study of NPI licensing dates back to Klima [1]. The issue of NPI licensing is also explored by Progovac [8][9][10], Postal [11], Szabolcsi [12], Dikken [13], and others, and involves a syntactic relation between a proper (semi-)negative element and an NPI. Klima [1] proposes a treatment whereby the presence of a morphosyntactic feature [affective] acts as the trigger of a negative polarity item. Among the environments marked as [affective] are: the scope of negation, such as never, nothing, no, none, etc.; complements to negative predicates such as unpleasant, unlikely, odd, impossible, etc.; comparative clauses; questions; and the scope of negative quantifiers and adverbs such as few/little, rarely, barely, hardly, etc. Progovac [10], for instance, argues that NPI licensing shows significant similarities with syntactic binding. She believes that an NPI is essentially an anaphor that must be A'-bound in its governing category.
English 'any' is seen as subject to Principle A of Binding Theory but is licensed by a superordinate as well as a clause-mate negation, as it raises at logical form (LF). According to Szabolcsi [12], NPI 'any' and its licenser are the joint spell-out of an underlying negative determiner [D NEG [SOME]]. Dikken [13] believes that at least some NPIs require syntactic agreement with a negative head, in terms of minimalist feature checking. The studies on negative polarity items tend to circle around similar examples and items, and cover various languages besides English, mainly Dutch, Greek, Italian, Russian, Chinese, Japanese, Korean, and Hindi. Scholars try to relate the distribution of NPIs to their meaning. Ladusaw [14] has proposed to eliminate the feature [affective] by using a semantic account of the licensing of polarity items. He points out that many of the contexts in which polarity items are acceptable have the property of downward monotonicity or implication reversal. Normally, expressions may be replaced by more general ones salva veritate; for example, 'John is a sophomore.' entails 'John is a university student.', given that a sophomore is a second-year university student. For the negative counterparts of these sentences, the direction of the entailment is reversed: 'John is not a university student.' entails 'John is not a sophomore.' but not vice versa. In propositional logic this can be represented as p→q, ¬q→¬p. Here, the proposition p is 'John is a sophomore', the proposition q is 'John is a university student', and p entails q. The negation of q entails the negation of p, because 'sophomore' is a hyponym of 'university student'. However, the reverse direction does not hold: 'John is not a sophomore.' does not necessarily entail 'John is not a university student.'; maybe he is a freshman. Zwarts [15], Nam [16], and van der Wouden [17] have argued that the typology of NPIs corresponds to a typology of 'monotone decreasing' operators, which can be summarized as follows: a. Weak NPIs are licensed by decreasing functions, like 'any' and 'ever' in English; b. Strong NPIs are licensed by anti-additive functions, like 'yet' in English; c. Strongest NPIs are licensed by anti-morphic functions, like 'a bit' in English. According to this typology, different downward-entailing operators license different NPIs, and there is a subset relation among the three types of licensing contexts: anti-additive operators are a proper subset of the monotone decreasing operators, and anti-morphic operators are a proper subset of the anti-additive operators. However, negative polarity items in Chinese ESs have been overlooked. Many issues concerning NPIs in Chinese ESs are worth exploring, such as the variety, the distribution, and the licensing conditions of NPIs in Chinese ESs, and how they interact on the three levels of syntax, semantics, and pragmatics. Since the first two types of negative polarity items that can occur in Chinese ESs, negative polarity adjectives and negative polarity adverbs, have been investigated in the first paper, we will focus on the third and fourth types of negative polarity items in this paper, i.e., negative polarity wh-words and the negative polarity 'one' phrase as minimizer. Huang's definition of the Chinese existential sentence is framed from the perspective of formal linguistics. We provide our own definition below, before we address the two types of NPIs in Chinese ESs.
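Before turning to that definition, the three classes of licensers behind the weak/strong/strongest typology can be stated set-theoretically. These are the standard textbook formulations from the monotonicity literature, not definitions taken from this paper:

% For a function f between Boolean (set) domains:
\text{downward entailing (monotone decreasing):}\quad X \subseteq Y \;\Rightarrow\; f(Y) \subseteq f(X)
\text{anti-additive:}\quad f(X \cup Y) = f(X) \cap f(Y)
\text{anti-morphic:}\quad \text{anti-additive, and in addition } f(X \cap Y) = f(X) \cup f(Y)
% Each class is properly contained in the previous one:
\text{anti-morphic} \subsetneq \text{anti-additive} \subsetneq \text{downward entailing}

As standard illustrations, sentential negation ('not') is anti-morphic, 'nobody'-type quantifiers are anti-additive but not anti-morphic, and 'few' is merely downward entailing; this is what grounds the claim that different operators license different strengths of NPI.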
An existential sentence is a sentence pattern which expresses the existence of something or somebody in some place; the pattern can be summarized as "some place has something or someone". Another similar sentence pattern is the (dis)appearing sentence (yinxian ju). A (dis)appearing sentence is a sentence pattern which expresses that someone or something appears or disappears in some place; the pattern can be summarized as "some place appears or disappears someone or something". In its broad sense, the term existential sentence covers both existential sentences and (dis)appearing sentences; in its narrow sense, (dis)appearing sentences are excluded. In this paper we use existential sentences in the narrow sense. We wonder whether the combination of a single negative marker 'mei' (not) and the existential verb 'you' (have) affects the licensing conditions of NPIs in Chinese ESs. In other types of sentences, 'meiyou' (not) as a whole functions as a compound negative syntactic marker. There are three research questions in this paper: 1) Does the combination of a single negative marker and the existential verb affect the licensing conditions of NPIs in Chinese ESs? 2) What types of NPIs can be allowed in Chinese ESs? 3) What are their licensing conditions respectively? This paper consists of four sections. In section one, we briefly introduce the literature on the studies of Chinese ESs and the research on NPIs abroad. In section two, we discuss negative polarity wh-words in Chinese ESs in detail. In section three, the negative polarity 'one' phrase as minimizer in Chinese ESs is explored in detail. The last section is the conclusion. Negative Polarity Wh-words in Chinese Existential Sentences Wh-words in Chinese have an interrogative function, and they can also be used as negative polarity items. Chinese wh-words such as 'shenme' (what), 'shei' (who), 'na'er' (where), etc., may sometimes be interpreted as non-interrogative existential indefinites, meaning 'something', 'somebody', 'somewhere', etc. These existential wh-words typically occur in negative sentences such as 'wo mei chi shenme' (I didn't eat anything), 'wo mei zhao shei' (I didn't look for anybody), 'wo mei qu na'er' (I didn't go anywhere), etc. However, they cannot occur in affirmative sentences in these senses: *'wo chi le shenme' (I ate something), *'wo zhao le shei' (I looked for somebody), *'wo qu le na'er' (I went somewhere), etc. Some scholars treat non-interrogative existential wh-words as polarity items [18][19][20][21]. The purpose of this section is to investigate the distribution and licensing of existential wh-words in Chinese ESs where such existential wh-words are used as negative polarity items. Chinese speakers tend to use wh-words, such as 'shenme' (what), 'shui' (who), 'na'er' (where), etc., to express emphasis on negation. Chinese wh-words may express thorough negation and function as NPIs in Chinese ESs, for example: (7) Bingxiang li meiyou shenme dongxi. Refrigerator in not-have NEG what thing 'There is not anything in the refrigerator.' In example (7), 'shenme' is not used as an interrogative wh-word but as an NPI, which is licensed by the negative marker 'mei(you)'. In this context, NPI 'shenme' has an insignificance reading, used to modify the existential entity 'dongxi'. NPI 'shenme' can also be understood in the sense of a none reading, for example: (8) Bingxiang li shenme dongxi ye/dou meiyou. Refrigerator in what thing either/all not-have NEG 'There is nothing in the refrigerator.'
In example (8), NPI 'shenme' is used together with the focus adverb 'ye' or 'dou', which changes the word order: the object of the existential verb moves from the right to the left. In this context, NPI 'shenme' is interpreted with a none reading, meaning 'nothing'. NPIs 'shei' and 'na'er' may also occur in Chinese ESs, for instance: In example (9), NPI 'shei' is licensed by the negative marker 'mei(you)' and is understood in the sense of 'nobody'. In example (10), NPI 'na'er' is likewise licensed by the negative marker 'mei(you)' and is understood in the sense of 'no place'. Unlike example (4), when the object of the existential verb in example (9) is focalized, NPI 'shei' requires the collocation of the focus adverb 'ye' or 'dou', and the negative marker 'mei(you)' has to be changed into 'bu', as shown in example (11). The same is true of NPI 'na'er' in example (10), as illustrated by example (12). NPI 'shenme' in example (13) is licensed by the interrogative sentence and is used as an object of the existential verb 'you' (have) with the meaning of 'something'. Using the method of Higginbotham [22], Cai [23] shows that yes-no questions in Chinese can be a licensing context for NPI 'renhe', and that yes-no questions and A-not-A questions both have the semantic characterization of anti-additivity. Anti-additive environments are a proper subset of downward-entailing contexts, and anti-morphic environments are a proper subset of anti-additive contexts. Negative sentences and interrogatives can license similar items because both provide downward-entailing contexts, which affirmative sentences lack. NPI 'shei' in example (14) is also licensed by the interrogative sentence and is used as an object of the existential verb 'you' with the meaning of 'somebody'. NPI 'na'er' in example (15) is also licensed by the interrogative sentence and is used as an object of the existential verb 'you' with the meaning of 'some place'. Negative polarity wh-words 'shenme', 'shei', and 'na'er' also appear in A-not-A interrogative sentences, for instance: NPI 'shenme' in example (16) is licensed by the A-not-A interrogative sentence and is used as a modifier of the existential noun 'xiansuo', meaning 'any'. NPI 'shei' in example (17) is also licensed by the A-not-A interrogative sentence, but is used as an object of the existential verb 'you', meaning 'somebody'. In example (18), NPI 'na'er' is also licensed by the A-not-A interrogative sentence and is used as an object of the existential verb 'you', meaning 'some place'. NPIs 'shenme', 'shei', and 'na'er' also occur in the antecedent clause of a conditional, for example: Example (19) is a conditional, and NPI 'shenme' appears in its antecedent clause; it is used as an object of the existential verb 'you' and is licensed by the antecedent clause of the conditional, meaning 'something'. In example (20), NPI 'shei' is used as an object of the existential verb 'you' and is likewise licensed by the antecedent clause of a conditional, meaning 'somebody'. Similarly, in example (21), NPI 'na'er' is also licensed by the antecedent clause of a conditional, meaning 'some place'. Apart from these three negative polarity wh-words, there is another wh-word, 'duoshao' (how many), which can also be used non-interrogatively. For instance, in 'wo meiyou duoshao jihui' (I don't have much chance), 'duoshao' means 'much' in a negative sentence.
It cannot appear in an affirmative sentence in this sense: *'wo you duoshao jihui' (I have much chance). NPI 'duoshao' occurs in negative existential sentences, expressing indefinite quantity, for instance:

(22) Zheli meiyou duoshao you jiazhi de ziliao.
here not-have NEG how-many have value DE materials
'There are not many valuable materials here.'

In example (22), the negative polarity wh-word 'duoshao' is licensed by the negative marker 'mei(you)'. NPI 'duoshao' is used as a modifier of the existential noun 'ziliao' (materials); in this context it expresses indefinite quantity, with an insignificance reading, meaning 'not many'. However, NPI 'duoshao' cannot occur in yes-no questions, A-not-A interrogative sentences, or even the antecedent clause of a conditional. Example (23) is not acceptable: NPI 'duoshao' is not licensed by the yes-no interrogative sentence. Example (24) is not acceptable either: NPI 'duoshao' is not licensed by the A-not-A interrogative sentence. In example (25), NPI 'duoshao' appears in the antecedent clause of a conditional, but the whole sentence is not accepted; its unacceptability shows that NPI 'duoshao' is not licensed by the antecedent clause of a conditional. To summarize, negative polarity wh-words in Chinese ESs such as 'shenme', 'shei', 'na'er', etc. are licensed by the negative marker 'mei(you)'. They can also be licensed by yes-no interrogative sentences, A-not-A interrogative sentences, and the antecedent clauses of a conditional; in such contexts they are used as NPIs. The negative polarity wh-word 'duoshao', however, is licensed only in negative existential sentences; in other contexts, such as yes-no interrogative sentences, A-not-A interrogative sentences, and the antecedent clauses of a conditional, it is not licensed.

Negative Polarity 'One' Phrase as Minimizer in Chinese Existential Sentences

In addition, there exists another kind of negative polarity determiner structure in Chinese ESs: the 'one' phrase as minimizer. There are a number of unit words (classifiers) in Chinese which can collocate with the basic numeral 'yi' (one) to express minimal quantity, so we call this kind of structure the 'one' phrase as minimizer, e.g. 'yidian'er', 'yige', 'yitai', 'yisi', 'yihao', 'yijin', 'yiliang', 'yichi', 'yicun', 'yizhi', 'yigen', 'yibu', etc. They are all expressions composed of 'yi' and various classifiers, used together with a noun to minimize the quantity of the existential entity in ESs. Let us look at the distribution and licensing of the frequently used 'one' phrase as minimizer 'yidian'er' (a little) in Chinese ESs: In example (26), NPI 'yidian'er' (a little) is used as a modifier of the existential noun 'jinzhan' (progress) and is licensed by the negative marker 'mei(you)'. In example (27), NPI 'yidian'er' is also used as a modifier of the existential noun, but it is licensed by a yes-no interrogative sentence rather than by a negative marker. In example (28), NPI 'yidian'er' is again used as a modifier of the existential noun, but it is licensed by an A-not-A interrogative sentence. In example (29), used as a modifier of the existential noun, NPI 'yidian'er' is licensed by the antecedent clause of a conditional.
The second frequently used 'one' phrase as minimizer in Chinese ESs is 'yige' (one + classifier), for instance: In example (30), NPI 'yige' (one + classifier) is used as a modifier of the existential noun 'hugong' (nursing worker) and is licensed by the negative marker 'mei(you)'. In example (31), NPI 'yige' remains a modifier of the existential noun but is licensed by the yes-no interrogative sentence, which clearly expresses negative meaning. In example (32), NPI 'yige' is also used as a modifier of the existential noun but is licensed by the A-not-A interrogative sentence; though example (32) has A-not-A interrogative form, it is in general employed to express negative meaning. In example (33), NPI 'yige' is still used as a modifier of the existential noun but is licensed by the antecedent clause of a conditional, which also conveys a negative implicature. The third frequently used 'one' phrase as minimizer in Chinese ESs is 'yisi' (one + classifier), for instance: In example (34), NPI 'yisi' (one + classifier) is used as a modifier of the existential noun 'xiaorong' (smile), meaning 'a trace of', and is licensed by the negative marker 'mei(you)'. In example (35), NPI 'yisi' is also employed as a modifier of the existential noun, but it is licensed by the yes-no interrogative sentence; though this sentence has the form of a question, it expresses negative meaning. In example (36), used as a modifier of the existential noun, NPI 'yisi' is licensed by the A-not-A interrogative sentence, which generally expresses negative meaning. In example (37), NPI 'yisi' is still used as a modifier of the existential noun, but it is licensed by the antecedent clause of a conditional; in general, such a conditional conveys a negative implicature. The fourth 'one' phrase as minimizer in Chinese ESs that we investigate is 'yitai' (one + classifier), for instance: In example (38), NPI 'yitai' (one + classifier) is used as a modifier of the existential noun 'jiqi' (machine) and is licensed by the negative marker 'mei(you)'. In example (39), NPI 'yitai' is also employed as a modifier of the existential noun, but it is licensed by the yes-no interrogative sentence; though this sentence has the form of a question, it actually expresses negative meaning. In example (40), used as a modifier of the existential noun, NPI 'yitai' is licensed by the A-not-A interrogative sentence, which also generally expresses negative meaning. In example (41), NPI 'yitai' is still used as a modifier of the existential noun, but it is licensed by the antecedent clause of a conditional, generally employed to convey negative meaning. By using the focus adverb 'ye' or 'dou', the existential noun and the modifier realized by NPIs such as 'yidian'er', 'yige', 'yisi', and 'yitai' can be focalized by moving from a postverbal to a preverbal position, further reinforcing the negative effect of the existential sentence. To sum up, NPIs 'yidian'er', 'yige', 'yisi', and 'yitai' in Chinese ESs often occur as modifiers of the existential noun in negative sentences, expressing minimal quantity or total negation. 'Yi' (one) is the basic numeral in Chinese and functions as a minimal quantity limiter which restrains the quantitative scope of the whole expression; 'yi' remains the prerequisite of the 'one' phrase minimizer.
It must collocate with a minimal-quantity classifier, such as 'si' (thread), 'hao' (millimeter), 'dian' (point), 'ge' (classifier for concrete objects), 'tai' (classifier for machines), etc. It can never collocate with a maximal-quantity classifier, such as 'dui' (pile), 'che' (truck), 'changku' (warehouse), etc. The following noun may be abstract or concrete; whatever it is, the basic numeral 'yi' is the prerequisite and the classifier ought to be a minimal-quantity one. Semantic congruity and grammatical collocation require the combination of 'yi' and a minimal-quantity classifier, which gives rise to the 'one' phrase minimizer, expressing a very strong negative effect. By using the focus adverb 'ye' or 'dou', the existential noun and the modifier realized by NPIs can be focalized by changing the word order, moving from the right of the negative marker 'mei(you)' to its left, thus strengthening the negation. Apart from negative sentences, the four NPIs can also be licensed by yes-no interrogative sentences, A-not-A interrogative sentences and the antecedent clauses of a conditional.

Conclusions

Two types of NPIs in Chinese ESs, namely negative polarity wh-words and the negative polarity 'one' phrase as minimizer, have been investigated. Counting the first two types of NPIs allowed in Chinese ESs, i.e. negative polarity adjectives and negative polarity adverbs, which were studied in the first paper, there are four types of NPIs that can occur in Chinese ESs. Here we summarize the third and fourth types of NPIs in Chinese ESs, in particular their licensing conditions. The combination of a single negative marker and the existential verb does not affect the licensing conditions of NPIs in Chinese ESs: the single negative marker 'mei' functions in the same way as the compound negative marker 'meiyou' does in other types of sentences. The third type of NPI allowed in Chinese ESs is the non-interrogative wh-word, such as 'shenme', 'shei', 'na'er', 'duoshao', etc. Negative polarity wh-words in Chinese ESs are mainly employed as objects, or as modifiers of an object, after the existential verb. Negative polarity wh-words in Chinese ESs are licensed by negative sentences, yes-no interrogative sentences, A-not-A interrogative sentences, and the antecedent clauses of a conditional. Among them, the wh-word NPI 'duoshao' is licensed only by negative sentences; it is not licensed by yes-no interrogative sentences, A-not-A interrogative sentences, or the antecedent clauses of a conditional. Therefore, NPIs 'shenme', 'shei' and 'na'er' in Chinese ESs are strong negative polarity items, while NPI 'duoshao' is a weak one. In some cases, wh-word NPIs and the existential nouns can be focalized by using the focus adverb 'ye' or 'dou', whereby the existential noun phrase moves from after the existential verb to before it, increasing the salience of the existential noun phrase.
Synthesis of Mesoporous Material from Chrysotile-Derived Silica

Mesoporous MCM-41-type molecular sieves were synthesized using calcined and leached chrysotile and cetyltrimethylammonium bromide as the silica source and structure-directing agent, respectively. Powder X-ray diffraction (XRD), N2 isothermal adsorption-desorption, scanning electron microscopy (SEM) and thermogravimetric analysis (TGA) were used to characterize the samples. The calcined and leached chrysotile can be employed as an inexpensive silica source for the formation of low-order MCM-41 mesoporous materials.

Introduction

Amiant and asbestos are generic names for fibrous minerals; serpentine group minerals are also included in this category, of which chrysotile is a member. The discovery of the world's third largest deposit of chrysotile at the beginning of the 1960s in the north of the Brazilian state of Goiás changed Brazilian participation in the world minerals market; with several applications in the modern world, amiant played a key role in the social and economic development of Brazil. Low-cost roofs and water tanks are manufactured from amiant, allowing people easier access to housing and basic sanitation. In addition to supplying internal market demand, Brazil exports amiant fibers to Latin American, Asian and African countries. Some studies reported that chrysotile fiber production in 2004 was as high as 250,000 tons [1], making increases in the value of this inexpensive and abundant material an attractive prospect. In chemical terms, chrysotile is a hydrated lamellar silicate with a 1:1 structure, as shown in Figure 1. The chemical formula is Mg6(Si4O10)(OH)8, though there can be some Fe3+ substitution into Mg2+ sites. Because of lamellar mismatch arising from different bond angles, chrysotile forms a wrapped fibrous structure with a silicate layer located inside a brucite layer [2]. Some papers report the possibility of obtaining amorphous silica from chrysotile; this is accomplished by subjecting chrysotile fibers to an acid treatment, allowing the removal of the outer brucite (magnesium) layer [3]. The use of chrysotile for the synthesis of porous materials is thus far relatively unexplored. A few studies report the synthesis of microporous materials such as ZSM-5 and NaA zeolites using natural chrysotile as a silica source [4][5][6]. Additionally, some studies report the use of silica from chrysotile in the production of nanofibers [7] and nanowires [8]. However, no studies on the synthesis of mesoporous materials such as MCM-41 using silica from natural chrysotile have been reported in the literature. In 1992, a major barrier was broken in nanostructured materials synthesis: the discovery of a novel family of materials called M41S, which exhibited the properties of mesoporous molecular sieves, spurred an increase in scientific research in this field [9]. MCM-41 belongs to this family, and new applications and synthetic routes using alternative raw materials are reported often. The most relevant factors contributing to the interest in these materials are their high thermal stability, pore sizes ranging from 2 to 20 nm, specific areas of ~700 m2·g−1, morphological features and easy synthesis. The majority of the applications for this material have been registered by the Mobil Oil Co.
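Since the ideal chrysotile formula above fixes the silica content, a back-of-the-envelope calculation gives the maximum SiO2 yield obtainable by fully leaching the magnesium layer. The Python sketch below is our own illustration using standard atomic masses; it is not a computation from the paper:

```python
# Theoretical SiO2 yield from ideal chrysotile, Mg6(Si4O10)(OH)8.
# Standard atomic masses in g/mol, rounded to three decimals.
MASS = {"Mg": 24.305, "Si": 28.086, "O": 15.999, "H": 1.008}

def formula_mass(counts):
    """Molar mass of a composition given as {element: count}."""
    return sum(MASS[el] * n for el, n in counts.items())

# Mg6(Si4O10)(OH)8 -> Mg6 Si4 O18 H8 (10 lattice O + 8 hydroxyl O)
chrysotile = {"Mg": 6, "Si": 4, "O": 18, "H": 8}
silica = {"Si": 1, "O": 2}  # SiO2

m_chry = formula_mass(chrysotile)   # ~554 g/mol per formula unit
m_sio2 = 4 * formula_mass(silica)   # 4 SiO2 units per formula unit

print(f"chrysotile molar mass: {m_chry:.1f} g/mol")
print(f"max SiO2 weight fraction after full Mg leaching: {m_sio2 / m_chry:.1%}")
# -> roughly 43%, i.e. under half of the fiber mass is recoverable as silica.
```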
[10] and are related to the cracking and hydrocracking of hydrocarbons. Nevertheless, this material may have major applications in the fields of catalysis and adsorption, as well as more general applications related to the inability of bulky molecules to enter the channels of microporous crystalline materials [11]. MCM-41 synthesis employs three ingredients: a solvent, which is usually basic; a siliceous source material, which can be any one of a number of alternative siliceous materials, such as rice husks [12], kaolin [13], and coal fly ash [14]; and the structure-directing agent, a surfactant. The purpose of this study is to synthesize nanostructured materials, such as MCM-41, from chrysotile-derived silica for possible future applications in adsorption and catalysis.

Obtaining Silica from Chrysotile

Natural chrysotile from Goiás (Brazil) was heat treated with a heating ramp of 100˚C per hour up to 600˚C and then held at this temperature for 3 hours. After cooling, the samples were treated with aqueous HCl (6 mol·L−1) under reflux for 48 hours at 100˚C. The material was filtered and washed with deionized water until a pH of 7 was reached, and then dried at 100˚C overnight. The synthesis procedure was the same as that described by Dr. Avelino Corma (Villalba, 1997) [15]. First, a reference sample was synthesized using commercial silica (Aerosil 200, Degussa); these samples were called SiMCM-41. Amorphous chrysotile-derived silica was then used in an identical synthesis; these samples were called MSCris (Chrysotile Molecular Sieve). Solution A was added to solution B; 4.52 g of silica was then added and the mixture stirred for 1 hour. The resultant gel was transferred to a Teflon-lined autoclave and heated under autogenous pressure at 135˚C for 24 hours without stirring. The material was filtered, washed with deionized water and dried at 100˚C overnight. The obtained material was calcined at 550˚C for 4 hours under nitrogen and synthetic air flow.

Characterization

The obtained materials were characterized by several techniques, namely X-ray powder diffraction (XRD), scanning electron microscopy (SEM), nitrogen adsorption by the BET method, and thermogravimetric analysis (TGA). The X-ray diffraction patterns were recorded on a Siemens D5000 diffractometer using a Ni filter and Cu-Kα radiation under a 30 kV accelerating voltage and 15 mA current. The nitrogen adsorption-desorption isotherms were measured using a Quantachrome Nova 1000 series instrument; before analysis, the samples were degassed at 300˚C for 12 hours in a vacuum furnace. The surface area was calculated using the BET equation [16], and the model of Barrett, Joyner and Halenda (BJH) [17] was used for the mesopore size distributions. For scanning electron microscopy (SEM), samples were coated with a gold film and analyzed with a JEOL JSM-5800 under a 20 kV acceleration voltage. The thermogravimetric analysis (TGA) was performed on a Shimadzu TGA-50H with heating/cooling rates of 10˚C·min−1 under synthetic air flow.

Results and Discussion

Figure 2 shows the X-ray diffractograms of natural chrysotile (a) and its calcined and leached form (b). The characteristic diffraction peaks of chrysotile were observed at d = 8.08 and 4.03 Å, corresponding to the (002) and (004) planes, respectively.
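For reference, the surface area analysis cited above rests on the linearized BET isotherm. The display below states the standard form, added here for the reader's convenience rather than reproduced from the source:

```latex
\frac{1}{v\left[(P_0/P) - 1\right]}
  = \frac{c-1}{v_m c}\,\frac{P}{P_0} + \frac{1}{v_m c},
\qquad
S_{\mathrm{BET}} = \frac{v_m N_A \sigma}{V_{\mathrm{mol}}\, m},
```

where v is the volume of gas adsorbed at relative pressure P/P0, vm the monolayer volume, c the BET constant, NA Avogadro's number, σ the N2 molecular cross-section (≈0.162 nm²), Vmol the molar volume of the adsorbate gas, and m the sample mass. A linear fit of the left-hand side against P/P0 in the usual ~0.05-0.30 range yields vm, and hence the specific surface area, from the slope and intercept.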
These results are consistent with those reported in the literature [18]. A small amount of brucite is present in the structure, indicated by a peak at 2θ = 18.60˚ corresponding to the (001) reflection. For the calcined and leached chrysotile (b), a broad, low-intensity peak is observed from 2θ = 15˚ to 30˚, an indicator of the amorphous nature of this material; the absence of the chrysotile peaks indicates that the calcination and acid treatment were effective. Figure 3 presents the XRD patterns of the samples synthesized with calcined and leached chrysotile (MSCris) and with commercial silica (SiMCM-41). The MSCris diffractogram has only one peak, at 2θ = 1.97˚, corresponding to the (100) plane, which is characteristic of a hexagonal pore system. The absence of the (110) and (200) reflections indicates that the material has a disordered unidirectional structure, as previously described in the literature [19]. In the SiMCM-41 reference, the three characteristic peaks of MCM-41 are observed. The material furthermore shows good thermal stability, because no structural degradation was observed when the samples were subjected to calcination temperatures above 500˚C. The nitrogen adsorption isotherms used for the BET specific area analysis of the MSCris and SiMCM-41 samples are shown in Figure 4. The high specific areas (698 and 1090 m2·g−1 for the MSCris and SiMCM-41 samples, respectively) and the type IV isotherms without microporous features demonstrate the mesoporous character of the MSCris and SiMCM-41 materials. Pore size distributions calculated by the BJH method show peak values near 32.7 and 31.6 Å for the MSCris and SiMCM-41 samples, respectively; both samples possess a mesoporous structural order. As shown by the similarity of the pore diameter distributions, the synthesized MCM-41 particles are not significantly affected by the source of the silica but are instead largely governed by the length of the alkyl chain of the surfactant molecules, as previously reported in the literature [20]. An SEM micrograph of calcined and leached chrysotile is shown in Figure 5, indicating the loss of its fibrous structure, as previously reported by Petkowicz [4]. The SEM micrograph of MSCris in Figure 6(a) shows clusters with irregular morphology and sizes greater than 50 μm; the fibrous form of some particles indicates the presence of incompletely dissolved chrysotile, suggesting a possibly incomplete thermal and chemical treatment. The SEM micrograph of the SiMCM-41 reference sample shown in Figure 6(b) shows some 50 μm particles together with a large quantity of 7 μm particles, a better size distribution than that of the MSCris sample. This result can be explained by the greater reactivity of the commercial Aerosil 200 silica compared with the chrysotile-derived silica. Figure 7 shows the thermogravimetric analysis (TGA) results of MSCris and SiMCM-41. According to the literature [21], MCM-41 has three mass-loss features. Between 25 and 150˚C, desorption of physically adsorbed water occurs, resulting in mass losses of approximately 5% in both samples. Between 150 and 400˚C, decomposition of the hexagonally arranged surfactant occurs. Above 400˚C, mass loss occurs due to dehydroxylation of the silanol groups present on the network. The two synthesized samples, MSCris and SiMCM-41, have similar mass-loss profiles.
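The hexagonal cell geometry implied by the (100) reflection can be checked with Bragg's law. The sketch below assumes Cu-Kα radiation (λ ≈ 1.5406 Å, consistent with the diffractometer described above) and uses the reported 2θ = 1.97˚ and the 32.7 Å BJH pore diameter; it is an illustrative calculation, not part of the original analysis:

```python
import math

LAMBDA = 1.5406  # Cu-K-alpha wavelength in angstroms (assumed)

def d_spacing(two_theta_deg, wavelength=LAMBDA):
    """Bragg's law with n = 1: d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# MSCris (100) reflection reported at 2-theta = 1.97 degrees
d100 = d_spacing(1.97)
# For a 2D hexagonal pore lattice, a0 = 2 * d100 / sqrt(3)
a0 = 2.0 * d100 / math.sqrt(3.0)
# Rough pore-wall thickness: lattice parameter minus BJH pore diameter
wall = a0 - 32.7

print(f"d100 = {d100:.1f} A")           # ~44.8 A
print(f"a0   = {a0:.1f} A")             # ~51.7 A
print(f"wall = {wall:.1f} A (approx)")  # ~19 A
```

On these assumptions the d100 spacing comes out near 44.8 Å and the hexagonal lattice parameter near 51.7 Å, so the implied wall thickness (lattice parameter minus pore diameter) is roughly 19 Å.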
Conclusion

The synthesis of a nanostructured material from chrysotile-derived silica was carried out successfully. The specific surface area analysis indicates that the material is mesoporous, and the nitrogen adsorption-desorption isotherms present the type IV regions characteristic of mesoporous materials. The observed pore size distribution also fits that of nanoscale mesoporous materials. Thermogravimetric analysis shows that the MSCris sample has a mass-loss profile similar to that of the SiMCM-41 reference sample.
CONTEXTUAL APPROACHES IN KAIWA LEARNING (SPEAKING) JAPANESE LANGUAGE

Speaking is a productive language skill and an important activity in daily life as meaningful interaction between humans, yet it is still not optimal in the Japanese language study program. Against this background, the purpose of this study is to support the goal of speaking skills. The curriculum of the Japanese Language Education study program, the competency-based curriculum developed in 2009, provides the courses Kaiwa 1 (会話1) Conversation 1, Kaiwa 2 (会話2) Conversation 2, Kaiwa 3 (会話3) Conversation 3, Kaiwa 4 (会話4) Conversation 4, Kaiwa Enshuu (会話演習) Conversation Deepening, and one special course for Japanese speech skills, Nihongo Supiichi (日本語スピーチ) Japanese Language Speech. In line with that idea, an important strategy in this learning is to teach students to connect each concept with reality, rather than emphasizing how much knowledge must be remembered and memorized. This research used the CTL approach; contextual learning can be applied in any curriculum and any subject or field of study, with any class regardless of its circumstances. In line with the view of the Ministry of National Education, there are seven (7) main principles that must be developed by teachers in the CTL approach. The contextual approach in Kaiwa learning is also expected to facilitate the achievement of the learning goals, namely improving students' Japanese speaking abilities and providing many opportunities to practice talking with friends while making students actively involved in the learning process. Therefore, it is recommended that in Kaiwa learning (the name of the courses for speaking skills in Japanese, Kaiwa I-IV) this contextual approach be used, even maximized. The researcher realizes that this learning model is not the only one that is suitable and relevant for teaching Japanese speaking skills courses, but it can be used as enrichment material.

Introduction

In language activities, a person communicates verbally more than in other ways. By talking, someone tries to express all feelings and thoughts to others orally. In other words, speaking activities are expressions of all the language skills a person has, through a combination of various actions, to achieve a goal. The intended goal is the creation of mutual understanding between speaker and listener, which is the purpose of communication itself. Speaking skills are among the productive language skills and are important in everyday life; therefore, speaking skills, especially for prospective teachers, are absolutely necessary. However, speaking is still not optimal in the Japanese language study program: verbal communication in Japanese between students on campus is very limited, and when students meet lecturers or guests from the Japanese-speaking community, their spoken Japanese is still very weak. Worse, when meeting native Japanese speakers, many are silent and hardly want to talk. Daily communication, both on campus during study hours and off campus, tends to use the everyday language or mother tongue. The Japanese Language Education Program has many textbooks used by lecturers.
From the textbooks collected by the study program and the lecturers' personal collections, materials or teaching materials are prepared by the lecturers of these courses. The textbook currently used as a reference by the study program is Minna no Nihongo (みんなの日本語) from 3A Corporation, Tokyo, Japan, which has just entered its fourth year of replacing the previous textbook, Nihongo Shoho (日本語初歩). Included are the books for speaking skills, Renshuu C / Kaiwa Irasuto Shiito (練習C・会話イラストシート). The Minna no Nihongo series consists of Minna no Nihongo I and Minna no Nihongo II, complete with the main book for grammar teaching, translation books with explanations in Indonesian, books for writing Japanese letters and workbooks; books for listening skills, reading skills and speaking skills; audio CDs; and other supporting books, namely instructions for use for lecturers (instructors). In addition, on their own initiative, lecturers of speaking skills use other textbooks available in the study program library. Speaking skills include, among others: (d) skills in communicating the relationships between main ideas and supporting ideas, old information and new information, generalizations and examples; (e) skills in using facial expression, kinesthetics, body language and other nonverbal language together with verbal language; (f) the skill of developing and using various speaking strategies, such as emphasizing key words, paraphrasing, providing context for interpreting word meanings, asking for help, and appropriately assessing how well the interlocutor understands what is being said. Still within the framework of language skills, especially speaking skills, Canale and Swain suggest that speaking also requires mastery of four language competencies, namely grammatical competence, discourse competence, sociolinguistic competence, and strategic competence. Grammatical competence is knowledge of the grammar, related to the lexicon (vocabulary mastery) and to morphological, syntactic, semantic and phonological rules. Discourse competence is the ability to combine the ideas or thoughts to be delivered in the right order according to the communicative objectives to be achieved. Sociolinguistic competence is the ability to use grammar in the right context, in a way that can be understood by native speakers and does not deviate from the socio-cultural rules of the community. Strategic competence covers the verbal and non-verbal communication strategies that can be used to compensate for breakdowns in communication due to performance variables or inadequate competence. Building on communicative competence as initiated by Canale and Swain, Littlewood argued that there are four major domains of speaking skill that a person must possess: (1) the skill of using the linguistic system to the level of being able to use it spontaneously and flexibly; (2) the skill of distinguishing the forms mastered as part of linguistic competence from their communicative functions, in other words, placing the linguistic system within the communication system; (3) the skill of using language to communicate meaning in real situations as effectively as possible; (4) awareness of the social meaning of each language form, so as to use acceptable forms. From the four domains above, it can be concluded that language learners need to be given the freedom to express themselves and to develop appropriate strategies according to language functions.
As Brown argues, a function is a goal that we achieve with language, for example stating, asking, responding, greeting, saying goodbye, and so on. However, functions cannot be fulfilled without language forms such as morphemes, words, grammar rules, discourse and other organizational competencies. It can thus be concluded that form is the outward manifestation of language, while function is the purpose those forms serve. No less important in speaking skills are accuracy and fluency: accuracy relates to language forms, while fluency relates to language functions. Speaking in Japanese is called hanasu (話す), which, following Houjou, is communication between humans to convey their intentions to each other, whether from one person to another, from one person to many people, or vice versa. This notion is of course different from kaiwa (会話) in this study, which Matsuura glosses as 'conversation', as in the phrases 会話力 (the ability to speak orally) and 日本語の会話の練習をする (to practice conversing in Japanese). Both of these Japanese words, hanasu and kaiwa, denote speaking activities which, in principle, according to Tarigan, need to pay attention to several factors: (1) pronunciation, (2) intonation and accent, (3) accuracy of pronunciation reflecting understanding of the language used, (4) the use of appropriate structures, (5) naturalness and fluency in the use of the language, and (6) expressions appropriate to the content of the conversation. The speaking skills intended in this article are oral (verbal) skills in a second language, Japanese. Speaking is among the competencies that must be possessed for performance in language skills, besides listening (chookai), reading (dokkai) and writing (sakubun). As with other foreign languages, one of the competencies that must be achieved in Japanese language learning in the Japanese Language Education Program of FBS UNIMA is speaking competence. To achieve this competence there are three courses related to efforts to improve speaking skills, namely Kaiwa (会話) I-IV, Kaiwa Enshuu (会話演習) and Nihongo Supiichi (日本語スピーチ).

Contextual Approach in Language Teaching

a. General understanding

The basic concept of contextual learning is an approach in which the teacher's teaching activities connect the learning material with real-world situations, and learning activities motivate students to connect and apply their knowledge to everyday life as members of a family and of society. In the context of learning, several terms have similar meanings: learning approaches, learning strategies, learning methods, learning techniques, and learning models. A learning approach can be interpreted as our starting point or point of view on the learning process. Viewed from the approach, learning is divided into two types: (1) a student-centered approach and (2) a teacher-centered approach. 'Contextual learning' is the rendering of Contextual Teaching and Learning (CTL). The term contextual refers to relationship, context, atmosphere or situation, so CTL can be understood as learning that relates to a particular situation.
The philosophical foundation of CTL is constructivism, a philosophy of learning which emphasizes that learning is not just memorizing: learners must construct knowledge. Contextual Teaching and Learning is a holistic learning concept that links learning with the real world; it helps the teacher relate the material studied in class to students' real-world situations. Students are also helped to connect the knowledge they have by applying it in the context of their lives with their peers, their families, the environment and the surrounding community. In line with the above formulation, Mulyasa argues that, as a strategy for implementing the Competency-Based Curriculum, Contextual Teaching and Learning is one of the five learning models considered to suit the requirements of the CBC, alongside Role Playing, Participatory Teaching and Learning, Mastery Learning and Modular Instruction. Contextual learning is a recent learning concept developed to produce learners who are able to think creatively and act decisively and intelligently. This learning process is motivated by constructivist theory, which requires students to build knowledge little by little, the results of which are expanded through their own context without being handed over directly by the teacher/lecturer (knowledge transfer). According to Zahorik, there are five elements that must be considered in contextual learning: (1) activating existing knowledge, (2) acquiring new knowledge, (3) understanding knowledge, (4) practicing knowledge (applying knowledge), and (5) reflecting on knowledge. Viewed from the approach, contextual learning is a student-centered approach and has the following characteristics: (1) students are actively involved in learning; (2) in learning, students collaborate to establish knowledge; (3) learning is associated with real life so that problems can be simulated; (4) behavior is built on self-awareness; (5) skills are developed on the basis of understanding; (6) the reward for good behavior is satisfaction. According to Muslich, learning with a contextual approach has seven characteristics: (1) learning is carried out in an authentic context, i.e. directed at achieving skills in a real-life context (learning in a real-life setting); (2) learning provides opportunities for students to do meaningful learning; (3) learning is carried out by giving students meaningful experiences (learning by doing); (4) learning is carried out through group work, discussion, and mutual correction between friends (learning in a group); (5) learning provides an opportunity to create togetherness and deep mutual understanding (learning to know each other deeply); (6) learning is carried out actively, creatively, productively and with an emphasis on cooperation (learning to ask, to inquire, to work together); (7) learning is carried out in a pleasant situation (learning as an enjoyable activity). Contextual learning requires cooperation between all parties; this trains students to respect each other's differences in order to achieve the learning goals, so cooperative learning is one of the characteristics of learning foreign languages, including Japanese.
In contextual learning, the task of the teacher, as formulated by Mulyasa, is to facilitate students' learning by providing various facilities and adequate learning resources, and by arranging the environment and learning strategies conducive to learning activities, paying attention to five elements: (1) learning must attend to the knowledge students already possess; (2) learning starts from the whole (global) toward its specific parts (from general to specific); (3) learning must emphasize understanding, by forming provisional concepts, sharing with others, and revising and developing those concepts; (4) learning emphasizes direct practice; (5) there is reflection on learning strategies and on developing the knowledge learned. Contextual learning provides opportunities for learners to relate the material to real, everyday life. As Johnson says, the partnership allows students to apply academic lessons to the workplace, where lessons are linked to everyday tasks and experiences; it enables learning by doing. The main message of learning by doing is that learning connected with everyday life produces meaning, so that it is absorbed and mastered by learners in the form of knowledge and skills.

b. Characteristics of Contextual Learning

According to the Ministry of National Education, contextual learning (CTL) has the following characteristics: (1) cooperation; (2) mutual support; (3) fun, not boring; (4) passionate learning; (5) integrated learning; (6) use of various sources; (7) active students; (8) sharing with friends; (9) critical students, creative teachers; (10) classroom walls and hallways filled with student work, maps, pictures, articles, humor, etc.; (11) reports to parents consisting not only of report cards but of student work, lab reports, essays and so on. According to Nurhadi, contextual learning has eight components as characteristics of CTL: 1) Making meaningful connections: encouraging students to find connections between the material being studied and real-world situations. 2) Doing significant work: encouraging students to do work that is meaningful, purposeful, useful and has tangible products or results. 3) Self-regulated learning: students organize themselves in learning tailored to their abilities, talents and interests, developing their own potential. 4) Collaborating: students collaborate in learning, with other friends, with teachers, and with other people. 5) Critical and creative thinking: students are encouraged to think at higher levels, e.g. to analyze, solve problems, make decisions, use logic, and form arguments based on facts. 6) Nurturing the individual: learning must be able to shape students' personalities well and motivate them to be tolerant of other friends. 7) Reaching high standards: students are encouraged to study diligently and persistently to achieve better standards. 8) Using authentic assessment: assessment of students is not done only at the end of learning activities but also while the learning process takes place, and the assessment should assess what ought to be assessed. Contextual learning is a system consisting of related and interdependent components.
The components in question will function properly if the three principles of contextual learning are observed: interdependence, differentiation, and self-regulation. First is the principle of interdependence: an institution is a living system consisting of students, teachers/lecturers, principals/deans, administrative employees, parents/guardians, alumni and the community, who stand in a network of relationships that shapes the learning environment. Students, as one part of it, are thereby enabled to establish meaningful relationships with the other parts. Second is the principle of differentiation: in the context of the learning environment, schools, faculties and universities are learning environments consisting of diverse and different entities, each loaded with its own uniqueness. The principle of differentiation encourages mutual cooperation and respect for diversity and uniqueness; synergy between entities encourages endless creativity, creating an orchestra-like harmony of life. Third is the principle of self-regulation, where everything is governed, maintained and realized by oneself. When students connect the material to the context of their personal circumstances or the social environment in which they live, they are involved in self-regulation activities.

c. Principles of Contextual Learning

CTL, or contextual learning, can be applied in any curriculum and any subject or field of study, with any class regardless of its circumstances. In line with the view of the Ministry of National Education, there are seven (7) main principles that must be developed by teachers in the CTL approach, namely:

1) Constructivism

Constructivism is the philosophical foundation of CTL. Knowledge is acquired by humans little by little, and its results are expanded through a limited context. Knowledge is not a set of facts, concepts or rules ready to be taken and remembered; rather, humans must build that knowledge and give it meaning through real experience, not ready-made or instant transfer. Students become accustomed to solving problems, finding something useful for themselves, and generating ideas and innovations. Constructivism is thus an activity developing the idea that learning is more meaningful when the learner works, discovers and builds his or her own knowledge and even new skills. Assuming that the teacher cannot construct all knowledge for the students, the essence of constructivist theory is that students must find complex information, transform it into other situations and, where possible, make that information their own. An important strategy in this learning is to teach students to connect each concept with reality rather than emphasizing how much knowledge must be remembered and memorized. Research reveals that rote mastery of theory has a positive impact in the short term but does not contribute well in the long run; in other words, rote theoretical knowledge easily drops out of memory if it is not supported by real experience.
Thus the basic principles that must be attended to when implementing constructivism are: (a) prioritizing the process over the learning outcomes; (b) prioritizing meaningful and relevant information over verbalistic information; (c) the widest opportunity for learners to find and apply their own ideas; (d) students' freedom to apply their own strategies; (e) students' knowledge growing and developing through their own experience; (f) understanding becoming stronger and continuing to develop when it is always tested against new experience. For teachers, both school teachers and lecturers, developing this constructivist stage has implications, especially for the ability to guide students to derive meaning from each concept learned. That is why every teacher must have sufficient knowledge to provide illustrations and to use learning resources and learning media that can stimulate students to actively seek, act, and find for themselves the links between the concepts learned and their experience.

2) Inquiry

Finding is a core activity in CTL. The effort to find confirms that the required knowledge, skills and other abilities are not the result of remembering a set of facts but the result of finding them oneself. Finding is a learning activity that conditions the environment so that students, individually or in groups, can observe, investigate and analyze in order to find, or to draw conclusions about, the topic or subject matter at hand in accordance with their respective experience. In terms of emotional satisfaction, moreover, a result one finds oneself yields higher satisfaction than a result one is given. In inquiry learning activities, teachers must always design activities that refer to finding, whatever the material being taught, because growing students' creative habits of finding their own learning experience has implications for the strategies the teacher develops. The inquiry cycle is: (1) observation, (2) questioning, (3) hypothesis (submitting a guess), (4) data gathering, and (5) conclusion. The basic principles for implementing inquiry in learning are: (a) information is retained longer when learners find it themselves; (b) information is stronger when supported by evidence or data found by the learners themselves; (c) the inquiry cycle usually passes through the stages of observation, asking questions, submitting conjectures, collecting data and drawing conclusions.

3) Questioning

Asking is equally important as a main characteristic of CTL learning. The ability to ask and to answer questions must be possessed, because a person's knowledge always starts with curiosity, and curiosity expresses itself in questions. Asking develops the idea that learning is more meaningful when students work, discover and build their own new knowledge and skills; it is the main learning strategy in CTL. In learning, asking is an activity by which teachers encourage, guide and assess students' thinking skills, because, as in the previous stages, developing the ability and the desire to ask questions is very important. In the implementation of CTL, questions must be used as tools or approaches to explore information or learning resources that relate to real life; in other words, the teacher is asked to find out about the relationships between students in relation to real life.
The usefulness of questioning includes: (a) more effective extraction of information, both academic and administrative; (b) more effective confirmation of what students already know; (c) increased student response; (d) better knowledge of students' curiosity; (e) focused student attention; (f) refreshed prior knowledge. In all learning activities, questioning can be applied between student and student, teacher and students, students and teacher, or between students and other people brought into class, and so on. This activity is also carried out while discussing, studying and working in groups, while observing, and in other activities.

4) Learning Community

The learning community is an activity that accustoms students to collaborating and drawing on their classmates as learning resources. A learning community is a learning activity that creates an atmosphere of 'learning together', with mutual discussion and mutual help, thereby developing positive interdependence. Learning community activities suggest that learning results are obtained from working with others through shared experiences. In other words, the strong encourage the weak, the quick help the slow, those who know teach those who do not, and whoever has an idea immediately proposes it; student groups can therefore vary greatly in form and membership, and may even involve friends from higher classes or bring in outside experts to collaborate. The application of the learning community in classroom learning depends largely on the communication model the teacher develops: the ability and professionalism of a teacher to develop multi-way communication (interaction), i.e. a model that opens communication not only between teacher and students, and vice versa, but broadly between students and other students. The basic principles to be developed in applying a learning community are: (a) learning outcomes are obtained from cooperation or sharing with other parties; (b) the sharing in question is two-way or even multi-directional; (c) each party is aware that their knowledge, experience and skills will benefit others; (d) those considered learning resources are all who are involved in the learning community. In practice, the learning community within CTL can be extended to other learning communities outside the classroom, and is realized in learning through: (a) the formation of small groups; (b) the formation of large groups; (c) bringing experts into the class (native speakers, doctors, clergy, athletes and so on); (d) working with parallel or equivalent classes; (e) working groups with the class above, or with the community.

5) Modelling

Modelling is one solution to the limitation of treating the teacher as the only learning resource. The development of science and technology, the complexity of the life problems faced, and the increasingly developed and diverse demands of students make a teacher with perfect abilities very difficult, indeed impossible, to find. Therefore the modelling stage can be used as an alternative in developing learning.
Modelling is a learning activity that presents models or examples commonly used as reference material, through the appearance of figures, demonstrations of activities, displays of work, ways of operating things, role play and so on. In learning something, there is a model that can be imitated; in speaking, for example, one imitates the saying or pronunciation of words or sentences in Japanese. Thus the teacher simply teaches how to learn by presenting the model, for instance by bringing native Japanese speakers into the class. The basic principles that need attention in applying modelling in learning are: (a) knowledge and skills are acquired more firmly when there is a model that can be imitated; (b) the model or example can come directly from the original source or from an expert; (c) the model or example can take the form of a way of saying something, a way of operating something, a sample of work, or a model performance.

6) Reflection

Reflection is a way of thinking about what has just happened or just been learned; in other words, reflection is thinking back on anything that has happened in the past. Reflection, or feedback, is a learning activity that reflects on the activities that have been carried out, for example through questions and answers about the difficulties faced and how to solve them, reconstructing the steps of activities carried out, impressions and hopes expressed during activities, drawing conclusions about activities, and so on. Students settle what they have just learned as a new knowledge structure, which enriches or revises previous knowledge; the key is how that knowledge settles in the students' minds. Students record what they have learned and how to apply the new ideas. During reflection, students get the opportunity to digest, weigh, compare, appreciate, and hold a discussion with themselves (learning to be). Through the CTL model, learning experiences not only occur while a student is in the classroom; far more important is how to carry the learning experience out of class, namely when the student is required to respond to and solve the real problems faced day to day. This concerns the ability to apply knowledge, attitudes and skills to the real world; the world of work is easily actualized when learning experiences have been internalized in students, and this is the core of applying the element of reflection at every learning opportunity. The moments the teacher provides for reflection may take the form of: (a) a direct statement of what the student gained at the time; (b) notes in the student's book; (c) students' impressions of and suggestions about the learning; (d) discussion; (e) student work. The basic principles to be considered are: (a) contemplation of new knowledge obtained as an enrichment of previous knowledge; (b) contemplation as a response to the events, activities or knowledge acquired; (c) contemplation in the form of brief notes, discussion or work.

7) Authentic Assessment

Assessment is the final stage of contextual learning. The assessment of learning has a decisive function in obtaining information on the quality of the processes and outcomes of learning when implementing CTL. Authentic assessment is a process of collecting various data and information that together can give a picture of students' learning progress.
A picture of students' learning development is important for the teacher, in order to ensure that students experience the learning process correctly, so that students' difficulties can be identified early from the data. The basic principles to be observed when applying authentic assessment in learning are: (a) assessment is not for judging students; (b) assessment must be comprehensive and balanced between process and results; (c) authentic assessment allows students to conduct self-assessment and peer assessment; (d) assessment is carried out with various evaluation tools on an ongoing basis as an integral part of the learning process; (e) assessment results can be used by various parties, including students, parents/guardians, and the school, to diagnose learning difficulties, provide feedback and determine student achievement. The characteristics of authentic assessment are: (a) carried out during and after the learning process; (b) usable formatively and summatively; (c) continuous; (d) integrated; (e) usable as feedback. The objects of assessment may include: (a) project activities and reports; (b) homework; (c) quizzes; (d) work products; (e) student presentations or performances; (f) demonstrations; (g) reports; (h) written test results; (i) written work. In general, there is no fundamental difference between the scenario of a conventional learning program and that of CTL learning; what distinguishes them is the emphasis. The conventional model emphasizes the description of the objectives (results) to be achieved (clear and operational), while CTL learning places more emphasis on the scenario (process) of learning, i.e. the stages carried out by teachers and students in the effort to achieve the expected learning goals. Therefore, a teacher's or lecturer's contextual learning program should: (1) state the main learning activities, i.e. a statement of what students do that combines competencies, main material, and indicators of learning achievement; (2) clearly formulate the general goals of learning; (3) describe in detail the media and learning resources to be used to support the expected learning activities; (4) formulate a step-by-step scenario of the activities students must do in the learning process; (5) formulate and conduct an assessment system focused on the abilities students actually possess, both during the learning process and after they finish learning.

Results and Discussion

The seven main components of contextual learning can be applied in learning Japanese, specifically speaking skills (kaiwa), as follows:

1) Constructivism

Constructivism in speaking learning (kaiwa 会話) can take place when students themselves learn to pronounce words, phrases, expressions or sentences properly and correctly, with appropriate intonation. Learning to speak usually begins with imitation (modelling), repeated several times through cassette recordings, CDs or live speech, immediately followed by demonstration. Usually the demonstration is carried out first by students considered capable and quick, then followed by the others, and all students are encouraged to be involved in the speaking activities. Students are also divided into groups and asked to demonstrate the practice of speaking in Japanese. Such activities give students the opportunity to build their own knowledge according to their respective abilities.
2) Questioning
In learning to speak, there is ample opportunity for students to ask questions when they have difficulty expressing what they mean in the target language. Questioning while the learning is taking place should be distinguished from questioning at the end of the learning activities.
3) Finding (Inquiry)
In speaking learning, inquiry can be carried out by giving students group assignments to hold conversations related to the theme or topic of the lesson. This method is very appropriate because all students are actively involved and try to find as much information as possible, including new words, conversation situations, and so on.
4) Learning Community
A learning community in speaking learning (kaiwa 会話) can be formed when students, within their own groups or with other groups, share information by using the target language, Japanese. Students who are considered capable become tutors for their classmates.
5) Modeling
In all language learning, modeling is used very often. Modeling is followed by stages of imitation. In speaking skills learning especially, students are usually given examples of reciting words, phrases, expressions, and sentences that build a conversation situation; a cassette is played or a CD is shown, and then the imitation stage follows.
6) Reflection
In learning speaking skills, reflection can be done when students watch or listen to the model and then make special notes on new things they consider important. It is necessary to distinguish this reflection during the process from reflection at the end of the learning activity.
7) Authentic Assessment
The main purpose of authentic assessment is to know the progress or development of student learning. An assessment sheet is needed to observe each student's ability, and a performance sheet to determine their level of seriousness. Students who experience difficulties can then be given repetition, while students who are considered capable may act as tutors, providing guidance and assistance to their classmates.
Closing
The contextual approach in Kaiwa learning, as Japanese speaking skills instruction, includes the use of various learning media such as video, audio, an LCD, vocabulary lists, conversation materials taken from other Japanese language books or made by the lecturers according to the topic or subject matter, and media created by students. This teaching material model also includes assessment techniques or tests in both oral and written form. The teaching material should also contain texts and exercises for individuals, groups, or pairs, and be equipped with guidance for lecturers, while students are given the teaching material as a guide for learning as well as for strengthening skills according to the learning objectives. The contextual approach in Kaiwa learning is also expected to facilitate the achievement of the learning goals of Japanese speaking skills, namely to improve students' speaking abilities and to provide many opportunities to practice talking with friends while keeping students actively involved in the learning process. Therefore, it is recommended that this contextual approach be used, and even maximized, in Kaiwa I-IV (the courses for speaking skills in Japanese). The researcher realizes that this learning model is not the only one suitable and relevant for teaching Japanese speaking skills courses, but it can be used as enrichment material.
Reduced Acrolein Detoxification in akr1a1a Zebrafish Mutants Causes Impaired Insulin Receptor Signaling and Microvascular Alterations
Abstract
Increased acrolein (ACR), a toxic metabolite derived from energy consumption, is associated with diabetes and its complications. However, the molecular mechanisms are mostly unknown, and no suitable animal model with increased internal ACR has so far been available for in vivo studies. Several enzyme systems are responsible for acrolein detoxification, such as Aldehyde Dehydrogenase (ALDH), Aldo-Keto Reductase (AKR), and Glutathione S-Transferase (GST). To evaluate the function of ACR in glucose homeostasis and diabetes, akr1a1a−/− zebrafish mutants were generated using CRISPR/Cas9 technology. Accumulation of endogenous acrolein was confirmed in akr1a1a−/− larvae and in the livers of adults. Moreover, a series of experiments was performed regarding organic alterations, glucose homeostasis, the transcriptome, and metabolomics in Tg(fli1:EGFP) zebrafish. Akr1a1a−/− larvae displayed impaired glucose homeostasis and angiogenic retina hyaloid vasculature, caused by a reduced acrolein detoxification ability and an increased internal ACR concentration. The effects of acrolein on the hyaloid vasculature could be reversed by treatment with the acrolein scavenger l-carnosine. In adult akr1a1a−/− mutants, impaired glucose tolerance accompanied by angiogenic retina vessels and glomerular basement membrane thickening was observed, consistent with an early pathological appearance of diabetic retinopathy and nephropathy. Thus, the data strongly suggest impaired ACR detoxification and an elevated ACR concentration as biomarkers of, and inducers for, diabetes and diabetic complications.
Introduction
Diabetes is a worldwide disease characterized by a high blood glucose level over a prolonged period of time. More than 463 million people have been diagnosed with diabetes so far, and the number is estimated to rise to 700 million by 2045. [1] Without appropriate and timely treatment, diabetes can cause severe microvascular complications in the eye and kidney, namely diabetic retinopathy and nephropathy, which have become the leading causes of vision loss and renal failure in working-age people. [2] Hence, early diagnosis of and intervention in diabetes are essential and urgently needed for improving long-term prognosis. Increasing evidence indicates that reactive carbonyl species (RCS) are positively correlated with diabetes and insulin resistance. [3-8] Among them, methylglyoxal (MG) is well studied and regarded as a main toxic factor. [9,10] In addition to MG, acrolein (ACR) has also drawn considerable attention over the past decades due to its reactive capacity. [11,12] ACR originates, among other sources, from myeloperoxidase-mediated degradation of threonine and amine oxidase-mediated degradation of spermine. How accumulated internal ACR arises and results in diabetes and the relevant complications requires further elucidation. Therefore, this study aimed to establish an animal model with increased internal levels of ACR and to clarify the subsequent effects of ACR on glucose metabolism and organic alterations in zebrafish. Our data indicate impaired ACR detoxification and accumulated ACR as inducers of insulin resistance and of the pathological progression of diabetic retinopathy and nephropathy.
Expression of akr1a1a in Zebrafish and Generation of akr1a1a Knockout Zebrafish
In zebrafish, two homologs of Akr1a1 exist, namely Akr1a1a and Akr1a1b.
The alignment comparing the amino acid sequences of Akr1a1 and Akr1a1a in humans, mice, and zebrafish showed that zebrafish Akr1a1a shares 60% and 58% similarity with Akr1a1 in humans and mice, respectively. Meanwhile, Akr1a1a and Akr1a1 possess the same active site and binding site in all three species, suggesting Akr1a1a as a potential candidate to study ACR detoxification in zebrafish (Figure 1A). Akr1a1a mRNA expression was determined by RT-qPCR in larvae and adult organs of zebrafish. The results showed ubiquitous akr1a1a expression throughout the embryonic and larval stages, with the highest expression at 2 dpf (Figure 1B). Moreover, akr1a1a expression was observed mostly in the liver, more than two-hundred-fold higher than in the reference organ (heart), and to a lesser extent in the brain (eightfold), kidney (threefold), and eye (threefold) (Figure 1C,D). Altogether, these data suggest that Akr1a1a is widely distributed in the early developmental stages of zebrafish larvae and in adult organs, and that it may play an essential role in embryonic development and in physiological liver function. In order to explore the function of Akr1a1a in zebrafish and its role in ACR metabolism, an akr1a1a−/− mutant line was generated by CRISPR/Cas9 technology as a first step. [30] Briefly, a gRNA was designed targeting exon 2 of akr1a1a, and a deletion-insertion of four nucleotides in the Tg(fli1:EGFP) reporter line was identified and utilized for further studies (Figure 1E). The general morphology of larvae at 5 dpf did not show any noticeable alterations in mutants compared to the wild types (Figure 1F). To evaluate whether the akr1a1a mutations resulted in a non-functional Akr1a1a protein after translation, a Western blot was performed and showed a complete loss of the Akr1a1a protein in mutants (Figure 1G). The percentage of akr1a1a−/− zebrafish growing into adulthood was about 23.43%, compared with 24.57% for wild-type and 52% for heterozygous zebrafish, which is consistent with Mendel's law of inheritance and suggests that the permanent loss of akr1a1a does not affect the survival of zebrafish (Figure 1H). In addition, AKR activity was measured using DL-glyceraldehyde as substrate and showed a significant decline in akr1a1a−/− larvae (Figure 1I). All the above data prove the successful generation of akr1a1a knockout mutants.
Figure 1 (caption): A) The amino acid alignment showed a high similarity between the different species at the active site (red frame) and binding site (green); first line: zebrafish AKR1a1a; second line: human AKR1a1; third line: mouse AKR1a1. B) akr1a1a mRNA expression in wild-type zebrafish larvae showed a significant upregulation at 2 dpf. C,D) akr1a1a mRNA was expressed mostly in the liver of wild-type adult zebrafish (heart as reference organ). Expression of genes was determined by RT-qPCR and normalized to b2m. Larval stages: n = 3 clutches with 30 larvae; adult organs: n = 3 with one organ per sample. E) The Akr1a1a-CRISPR target site was designed in exon 2 of the akr1a1a gene, and a CRISPR/Cas9-induced deletion-insertion of four nucleotides was selected for further akr1a1a mutant line generation and maintenance. Genotypes were analyzed via sequencing chromatograms of the PCR-amplified akr1a1a region containing the akr1a1a target site. The chromatogram shows the akr1a1a wild-type and the homozygous four-nucleotide deletion-insertion sequencing results. F) Microscopic images showed unaltered morphology of akr1a1a−/− larvae in comparison with akr1a1a+/+ larvae at 5 dpf. Black scale bar: 300 μm. G) Western blot for Akr1a1a expression in adult liver showed the loss of the Akr1a1a protein in mutants. β-actin served as loading control. n = 3; each lane represents one liver sample from the corresponding adult fish. H) Adult fish numbers among the different genotypes were in line with Mendelian inheritance in the first generation of F2: akr1a1a+/+ = 43, akr1a1a+/− = 91, akr1a1a−/− = 41. I) akr1a1a−/− zebrafish showed decreased enzyme activity (DL-glyceraldehyde served as substrate) measured by spectrophotometric analysis of zebrafish lysates at 96 hpf; n = 6-11 clutches, each clutch containing 50 larvae. For statistical analysis, one-way ANOVA followed by Tukey's multiple comparison test and Student's t-test were applied; *p < 0.05, **p < 0.01. RT-qPCR, real-time quantitative polymerase chain reaction; dpf, days post fertilization; b2m, β2-microglobulin; PAM, protospacer-adjacent motif.
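The consistency of the adult genotype counts with the expected 1:2:1 Mendelian ratio can be sanity-checked with a chi-square goodness-of-fit test. The following is a minimal sketch in Python; it only reuses the counts quoted above and is our illustration, not an analysis from the paper.

```python
# Chi-square goodness-of-fit test of the adult genotype counts (Figure 1H)
# against the expected 1:2:1 Mendelian ratio.
from scipy.stats import chisquare

observed = [43, 91, 41]                              # +/+, +/-, -/- adult fish
total = sum(observed)                                # 175 fish in total
expected = [total * f for f in (0.25, 0.50, 0.25)]   # 1:2:1 ratio

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.3f}, p = {p:.3f}")  # a large p means no deviation from 1:2:1
```

With these counts the test yields a p-value far above 0.05, matching the paper's conclusion that the loss of akr1a1a does not affect survival.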
Zebrafish represent a valuable animal model to study alterations in the eyes and kidneys. [31,32] The hyaloid vasculature was analyzed first by using the Tg(fli1:EGFP) reporter line. [33] Zebrafish larvae were collected at 5 dpf, and images were captured with a confocal microscope. Increased numbers of branches and sprouts were identified in the hyaloid vasculature of akr1a1a−/− larvae compared to akr1a1a+/+ larvae at 5 dpf (Figure 2A-D). Meanwhile, just as in larvae, adult akr1a1a−/− zebrafish also displayed more branches and sprouts in the retinal vessels in contrast to akr1a1a+/+ adults (Figure 2E-G). Moreover, although no apparent alterations in the kidneys were found with PAS staining under the light microscope (Figure S1A,B, Supporting Information), a thickening of the glomerular basement membrane (GBM) was observed using electron microscopy in akr1a1a−/− adults (Figure 2H,I). Taken together, these results revealed that the loss of Akr1a1a leads to alterations of the hyaloid vasculature in zebrafish larvae, which persist into adulthood; in addition, the loss of Akr1a1a resulted in a thickening of the GBM in adults. Thus, akr1a1a mutants show incipient hallmarks of diabetic retinopathy and diabetic nephropathy.
Insulin Receptor Signaling Pathway Was Down-Regulated in akr1a1a−/− Larvae
To investigate the potential mechanisms behind the alterations in akr1a1a−/− mutants, gene-expression patterns were analyzed by whole-genome RNA-Seq in akr1a1a+/+ and akr1a1a−/− larvae at 5 dpf (Figure 3A). Principal component analysis (PCA) of the components of each sample showed that the akr1a1a+/+ and akr1a1a−/− plots are completely separated along the PC1 axis (Figure S2A, Supporting Information). Quality control showed comparable properties between akr1a1a mutants and wild types (Figure S2B, Supporting Information). Then, gene set enrichment analysis (GSEA) was performed to better understand the physiological signaling pathways altered by the loss of Akr1a1a. Among a series of altered biological pathways, the insulin receptor and downstream signaling pathways, including but not limited to MAPK, signal transduction by protein phosphorylation, and the transmembrane receptor protein tyrosine kinase signaling pathway, were significantly down-regulated in akr1a1a mutants (Figure 3B-F), suggesting that the loss of Akr1a1a induces impaired insulin signaling transduction in zebrafish larvae at 5 dpf. Additionally, a metabolomics assay was also performed in wild-type and homozygous larvae and livers.
Among the amino acids, thiols, adenosines, and fatty acids, lysine, putrescine, and C20:3n6 were significantly increased in mutant larvae. C18:3n6 and C20:3n6 were significantly increased, while cholesterol was significantly decreased, in the mutants' livers, which implies that the impaired insulin signaling transduction in akr1a1a−/− larvae gives rise to alterations in protein and fatty acid synthesis and metabolism (Figures S3,S4, Supporting Information).
Akr1a1a−/− Mutants Displayed Impaired Glucose Homeostasis and Accumulated Internal ACR
Although impaired insulin signaling transduction had been verified in akr1a1a−/− mutants, whether glucose homeostasis is altered as a consequence was still unknown. In order to address this question, body glucose and blood glucose measurements were performed in larvae and adults, respectively. Interestingly, akr1a1a−/− larvae exhibited a 50% higher glucose level than akr1a1a+/+ larvae at 5 dpf (Figure 4A). Additionally, akr1a1a−/− adults displayed postprandial hyperglycemia in both males and females, while the overnight-fasting blood glucose remained unaltered (Figure 4B,C), proving that the permanent loss of Akr1a1a can induce impaired glucose homeostasis in both zebrafish larvae and adults. To explore the potential leading cause of the impaired glucose homeostasis in akr1a1a−/− mutants, preproinsulin (insa) and insulin receptor (insra, insrb) mRNA expression levels were analyzed by RT-qPCR. Since akr1a1a is expressed mostly in the liver (Figure 1D), larvae and liver were chosen as target organs. Meanwhile, potential substrates of Akr1a1a, including ACR, glyoxal, and methylglyoxal, were measured accordingly. Intriguingly, insa, the main insulin-encoding gene involved in glucose homeostasis regulation in zebrafish, was unaltered in akr1a1a−/− larvae, but insra and insrb mRNA were significantly reduced in akr1a1a−/− larvae at 5 dpf (Figure 4D-F). Moreover, reduced insrb mRNA expression, and a tendency toward decreased insra mRNA expression, were also found in adult akr1a1a−/− livers, suggesting reduced insra/insrb expression as the cause of the impaired glucose homeostasis and the altered insulin signaling transduction in akr1a1a mutants (Figure 4G,H). Importantly, accumulated protein-bound ACR, but not glyoxal or methylglyoxal, was identified in larvae and livers of akr1a1a−/− mutants (Figure 4I-L). In addition, decreased AKR activity with ACR as substrate was observed in akr1a1a−/− larvae, proving Akr1a1a to be a main metabolizing enzyme for ACR in zebrafish (Figure 4M). Furthermore, the decline of phosphorylated p70S6K and of pyruvic acid confirmed the impaired glucose homeostasis and hinted at the existence of insulin resistance in akr1a1a−/− livers (Figure 4N,O). However, the expression of important glycolytic enzymes such as phosphofructokinase (pfk) and hexokinase (hk1) was not altered, while pyruvate kinase (pk) was increased in akr1a1a−/− livers (Figure S5A, Supporting Information). In addition to the expression analysis, enzyme activity was also determined for phosphofructokinase (PFK), pyruvate kinase (PK), and glucokinase (GK). The results indicated that PFK activity, but not PK or GK activity, decreased significantly in the livers of mutants, suggesting that glycolysis is also partially influenced by the Akr1a1a knockout (Figure 4P; Figure S5B,C, Supporting Information). Last, in contrast to previous studies regarding Akr1a1b, [26] the complete loss of Akr1a1a did not up-regulate S-nitrosylated proteins (Figure S6, Supporting Information).
At the same time, akr1a1b−/− mutants displayed unaltered insra/insrb mRNA expression, an unaltered internal ACR level, and normal hyaloid vasculature (Figures S7,S8, Supporting Information). All this evidence suggests that zebrafish Akr1a1a is not the functional homologue of zebrafish Akr1a1b and mouse Akr1a1 in regulating S-nitrosylation. [34] Meanwhile, Glo1 and ALDH enzyme activities were also determined, since these two enzyme systems exert functions similar to the AKR enzyme system in detoxifying RCS metabolites and AGE precursors. However, Glo1 and ALDH enzyme activities remained unchanged in akr1a1a−/− larvae compared to akr1a1a+/+ larvae at 4 dpf, suggesting that the Glo1 and ALDH enzyme systems are not able to detoxify the accumulated ACR in zebrafish (Figure S9A,B, Supporting Information).
Figure 3 (caption): A) 30 larvae per clutch, 6 clutches of akr1a1a+/+ and akr1a1a−/− zebrafish larvae at 5 dpf were used for RNA isolation. B) Bubble plot showing the significantly down-regulated biological pathways in akr1a1a−/− zebrafish larvae at 5 dpf via KEGG and GOBP analysis. Heatmaps show that the relative mRNA expression in C) the insulin receptor signaling pathway, D) MAPK, E) signal transduction by protein phosphorylation, and F) the transmembrane receptor protein tyrosine kinase signaling pathway was significantly down-regulated in akr1a1a−/− zebrafish larvae. Higher and lower expression is displayed in red and blue, respectively. GSEA, gene set enrichment analysis; dpf, days post fertilization; KEGG, Kyoto Encyclopedia of Genes and Genomes; GOBP, Gene Ontology biological processes.
ACR Caused Angiogenic Alterations in Hyaloid Vasculature and Impaired Glucose Homeostasis via Reducing insra/insrb mRNA Expression
To test the hypothesis that ACR is the missing link between the declined insra/insrb mRNA expression and the loss of Akr1a1a, wild-type larvae were incubated with exogenous ACR from 1 dpf to 5 dpf. 10 μM was selected as the working concentration, since zebrafish exhibited a stable survival rate and normal morphology after treatment (Figure S12, Supporting Information). The morphology of the hyaloid vasculature was analyzed afterward. Interestingly, more branches were observed in the hyaloid vasculature after the ACR treatment (Figure 6A-C). Further experiments with the ACR intervention showed that treated larvae had a normal expression level of insa mRNA (Figure 6D), significantly elevated whole-body glucose (Figure 6E), but lower insra/insrb mRNA expression (Figure 6F,G) in contrast to untreated larvae, which recapitulates the findings in akr1a1a−/− larvae (Figure 2B). Additionally, in order to understand how ACR exerts its functions at the transcriptome level, we performed whole-genome RNA-Seq on larvae with or without ACR treatment. Importantly, the insulin receptor and downstream signaling pathways, including MAPK, signal transduction by protein phosphorylation, and the transmembrane receptor protein tyrosine kinase signaling pathway, were also down-regulated (Figure 7A-E), resembling the earlier findings in akr1a1a−/− larvae (Figure 3). Furthermore, since AKT/PKB is one of the crucial downstream proteins and is phosphorylated upon insulin-induced signaling transduction, the phosphorylation of AKT/PKB was determined in both akr1a1a+/+ and akr1a1a−/− larvae and under ACR treatment. The results showed that AKT/PKB expression is increased but phosphorylated AKT/PKB is significantly decreased in akr1a1a−/− larvae compared to akr1a1a+/+ controls.
In contrast to akr1a1a+/+ control larvae, the ACR-treated larvae displayed unchanged total AKT/PKB but a decreasing tendency for phosphorylated AKT/PKB (Figure 7G,H). All the above data indicate that ACR directly leads to impaired insulin receptor signaling and disrupts glucose homeostasis. Moreover, the accumulated, non-detoxified internal ACR after the loss of akr1a1a is responsible for the impaired glucose homeostasis and the altered hyaloid vasculature in akr1a1a−/− mutants (Figures 2,6).
Angiogenic Alterations in Hyaloid Vasculature Caused by ACR Can Be Rescued by l-Carnosine and PK11195
ACR has been reported as a toxic, reactive aldehyde that can cause alterations in various tissues, but whether the alterations in the hyaloid vasculature result from the reduced insra/insrb expression or from ACR directly remained unknown. Therefore, the RCS scavenger l-carnosine, which forms carnosine-ACR Michael adducts, [35] and the hypoglycemic drug PK11195 were selected for a co-incubation experiment with ACR on wild-type larvae. Surprisingly, both l-carnosine and PK11195 could reverse the alterations in the hyaloid vasculature, suggesting that ACR leads to alterations in the hyaloid vasculature via altered glucose homeostasis rather than through its own direct toxic effects (Figure 8). Moreover, the same rescue experiments were also performed in akr1a1a−/− larvae, in which normalized hyaloid vasculature was observed after the treatment with l-carnosine and PK11195 (Figure S13, Supporting Information). To sum up, the successful reversal of the hyaloid phenotypes by applying a hypoglycemic drug and an ACR scavenger implies that these drugs are potential candidates for treating ACR-induced vascular alterations.
Discussion
In this study, we established an Akr1a1a knockout zebrafish model with increased endogenous ACR concentrations for the first time. The study further proved that the impaired ACR detoxification and the increased internal ACR concentration due to the loss of Akr1a1a induce insulin resistance and impaired glucose homeostasis, subsequently causing abnormal angiogenesis in the hyaloid vasculature in larvae and resulting in angiogenic retina vessels and GBM thickening in adults. Type 2 diabetes (T2DM) is considered the primary subtype of diabetes, accounting for the vast majority of cases and characterized by insulin resistance. [2,36] Accordingly, finding the initial trigger leading to insulin resistance becomes more and more essential. More importantly, a better understanding of the drivers of the progression of insulin resistance may help diagnose and prevent T2DM early and provide new insight into promising therapeutic approaches. Up to now, several risk factors, such as the accumulation of ectopic lipid metabolites, the activation of the unfolded protein response pathway, and innate immune pathways, have been identified as potential pathological pathways of insulin resistance. [37,38] Nevertheless, the molecules by which such pathways cause the onset of insulin resistance remain unclear. To date, MG, a hazardous reactive metabolite, has been identified as the main precursor of advanced glycation end products (AGEs), which are strongly linked to the development of insulin resistance and microvascular complications, [9,39-41] but only a limited number of studies have examined whether ACR is involved in the onset of insulin resistance and diabetic complications. Our work in zebrafish firmly supports that elevated endogenous ACR leads to hyperglycemia, identifying a new metabolic intermediate related to diabetes.
In zebrafish, we found that excess ACR due to the loss of the Akr1a1a enzyme system inhibited the expression of insulin receptors a and b, disrupted insulin signaling transduction, and finally resulted in impaired glucose homeostasis and diabetic organ damage. Besides, this study offers a novel animal model with high internal ACR and makes it possible and convenient to study the physiological functions of ACR in vivo. ACR is recognized as a side product of disordered lipid peroxidation and appears in association with diabetic retinopathy and nephropathy. [11,14,42] This was also observed in our results, which exhibited angiogenic retina vessels and a thickening GBM in akr1a1a mutants, consistent with the early pathological alterations in diabetic retinopathy and nephropathy. [43,44] This raised the question of whether ACR is the marker or the maker of hyperglycemia and diabetic complications. To address this question, the known hypoglycemic drug PK11195 and the ACR scavenger l-carnosine were utilized to determine whether neutralizing ACR or decreasing the glucose level can impede the ACR-driven effects. Intriguingly, the ACR-induced hyaloid angiogenic alterations could be reversed by either anti-hyperglycemic or anti-ACR treatment. The data afford several important implications. First, they suggest Akr1a1a as a principal enzyme for detoxifying ACR in zebrafish and indicate that the AKR family is involved in glucose metabolism and diabetes. Second, they imply that the accumulated ACR is the upstream metabolite triggering organic alterations by adjusting glucose homeostasis. Last, in addition to being an accompaniment of diabetic complications, ACR was identified in this study, at least in zebrafish, as an effective inducer of insulin resistance and diabetic complications, providing a new therapeutic target for the treatment of diabetic complications.
Figure 7 (caption): Down-regulated insulin receptor signaling pathways in akr1a1a+/+ zebrafish larvae upon ACR treatment. A) Bubble plot showing the significantly down-regulated biological pathways between akr1a1a+/+ and ACR-treated akr1a1a+/+ zebrafish larvae at 5 dpf via KEGG and GOBP analysis. Heatmaps show that the relative mRNA expression in B) insulin receptor signaling pathways, C) MAPK, D) signal transduction by protein phosphorylation, and E) the transmembrane receptor protein tyrosine kinase signaling pathway was significantly down-regulated in akr1a1a+/+ zebrafish larvae after ACR treatment. Higher and lower expression is displayed in red and blue, respectively. F) Concise mechanistic flow chart showing the consequence of defective ACR detoxification after Akr1a1a loss. G) Representative Western blot showing the total AKT and phosphorylated AKT levels in the different groups. H) Quantification of AKT phosphorylation and total AKT expression in akr1a1a+/+ and akr1a1a−/− zebrafish larvae treated with ACR. GSEA, gene set enrichment analysis; KEGG, Kyoto Encyclopedia of Genes and Genomes; GOBP, Gene Ontology biological processes. For statistical analysis, one-way ANOVA followed by Tukey's multiple comparisons test was applied. *p < 0.05. NS, not significant.
Figure 8 (caption, partial): ...showed significantly increased numbers of branches in akr1a1a+/+ larvae incubated with 10 μM ACR, rescued by carnosine (dissolved in egg water) and PK11195 (dissolved in DMSO) at 5 dpf; n = 11-18. For statistical analysis, one-way ANOVA followed by Tukey's multiple comparisons test was applied. **p < 0.01, ****p < 0.0001. NS, not significant. DMSO, dimethylsulfoxide; CAR, carnosine; PK, PK11195.
In clinical studies, elevated free-state ACR or ACR adducts have been identified as distinct features in patients with diabetes and diabetic complications.
It was found that the ACR-lysine adduct (FDP-lysine) accumulates in the urine of both type 1 and type 2 diabetic patients, and even more so in diabetic patients with microalbuminuria. [18,19] Additionally, in end-stage renal disease, the FDP-lysine level of patients with type 2 diabetes was significantly higher than in the non-DM group, [21] which implies that FDP-lysine has a strong connection with diabetic nephropathy. Furthermore, it was suggested that the FDP-lysine level could be utilized as a biomarker for the severity of diabetic retinopathy. [22] However, the crucial role of ACR in the onset of T2DM requires further investigation. Based on our study, it would be a promising strategy to dynamically monitor the endogenous ACR concentration in the clinic in order to screen people in a pre-diabetic state, diagnose patients with insulin resistance early, and improve long-term prognosis. Lastly, this study also affords an alternative strategy for diabetes treatment. General therapy applying hypoglycemic agents and insulin sensitizers requires patients to take medication for their whole life to achieve ideal and stable glucose control. [45,46] Whether it would be practical to treat diabetes and diabetic complications by using an ACR scavenger, such as carnosine, deserves further study. Although this study successfully illustrated that ACR elevation contributes to impaired glucose homeostasis, there are some limitations. First, although it has been confirmed that ACR causes reduced insra/insrb expression, the detailed mechanism remains unknown. Second, whether the impaired glycolysis appearing in akr1a1a−/− adults, caused by reduced PFK activity, results from insulin resistance or is partially caused by the accumulated internal ACR is still unresolved. Finally, clinical studies are essential to determine the sensitivity and specificity of ACR in predicting insulin resistance and the pre-diabetic state. In conclusion, this study provided clear evidence for the contribution of poor ACR detoxification and the subsequently increased ACR concentration to the development of impaired glucose homeostasis via insulin receptor signaling dysfunction in akr1a1a mutants, providing a novel direction for future research regarding diabetic pathophysiology and therapy.
Experimental Section
Zebrafish Husbandry and Zebrafish Lines: The zebrafish line Tg(fli1:EGFP) was raised and staged as described, under a standard husbandry environment. [33,47] Embryos/larvae were kept in E3 medium at 28.5°C with or without PTU (2.5 mL in 25 mL) to suppress pigmentation. Adult zebrafish were kept under a 13 h light/11 h dark cycle and fed with live shrimp in the morning and fish flake food in the afternoon. All experimental interventions on animals were approved by the local government authority, Regierungspräsidium Karlsruhe, and by the Medical Faculty Mannheim (license nos. G-98/15 and I-19/02) and were carried out in accordance with the approved guidelines. The age of adult male zebrafish was from 9 to 16 months. Both sexes were included only for the blood glucose measurements and RT-qPCR.
Mutant Generation: The CRISPR target site for akr1a1a was identified and selected using ZiFiT Targeter 4.1. The akr1a1a-CRISPR oligonucleotides were synthesized by Sigma-Aldrich. The oligonucleotides were cloned into the pT7-gRNA plasmid (Addgene). BamHI-HF (Biolabs) was used for linearization.
Cas9 mRNA was synthesized from the pT3TS plasmid (Addgene) after linearization with XbaI (Biolabs). Plasmids were purified with a PCR purification kit (Qiagen). CRISPR gRNA in vitro transcription was done with the T7 MEGAshortscript kit, and the mMESSAGE mMACHINE T3 kit was used for Cas9 mRNA (Invitrogen). Purification of RNA after TURBO DNase treatment was done with the miRNeasy Mini (gRNA) and RNeasy Mini (Cas9 mRNA) kits (Qiagen). Akr1a1a gRNA and Cas9 mRNA (150 pg nL−1) were mixed with KCl (0.1 M). The RNA mixture (1 nL) was injected directly into one-cell embryos. For genotyping, PCR products of genomic DNA were used for Sanger sequencing (Table S1, Supporting Information).
Morpholinos: Morpholinos including SB-insra-MO, SB-insrb-MO, and Control-MO (Table S2, Supporting Information) were designed and produced by GENE TOOLS, LLC. All morpholinos were diluted to 2 μg μL−1 with 0.1 M KCl. One nanoliter of morpholino was injected into the yolk sac of one-cell-stage embryos as previously described. [48] The morpholinos and the genotyping primers for the zebrafish insra/insrb morpholinos are listed in Table S3, Supporting Information.
Antibody Generation: For Akr1a1a antibody generation, a peptide (RLIESFNRNERFII-C) was designed, synthesized, and coupled to KLH (Keyhole Limpet Hemocyanin) by PSL GmbH, Heidelberg, Germany, and subsequently injected into guinea pigs for immunization, following standard procedures of the GPCF Unit Antibodies, DKFZ Heidelberg, Germany.
Preparation of Adult Zebrafish and Blood Glucose Measurement: Adult zebrafish were transferred into single boxes one day in advance and fasted overnight. Sixteen hours later, fish were either tested directly or fed with 0.5 g flake food for 1 h, followed by refreshing the water and a 1 h postprandial experiment. Afterward, fish were euthanized with 0.025% tricaine until the operculum movement disappeared entirely. Then blood was extracted from the caudal vessels, and blood glucose was measured with a glucometer. Later on, the fish were sacrificed and transferred onto an experimental platform covered with ice-cold PBS. Organs were isolated, weighed, snap frozen in liquid nitrogen, and stored at −80°C for metabolomics, RT-qPCR, and ELISA analyses. The age of these adult zebrafish was from 10 to 12 months. The whole fish head was transferred into 4% PFA/PBS for 24 h at 4°C for further retinal vasculature analysis.
Microscopy and Analysis of Vascular Alterations in Larvae and Adults: For imaging of the zebrafish retinal hyaloid vasculature, Tg(fli1:EGFP) larvae were anesthetized in 0.0003% tricaine at 120 hpf and fixed in 4% PFA/PBS overnight at 4°C. Fixed larvae were washed three times for 10 min each in double-distilled water (ddH2O) and incubated for 90 min at 37°C in 0.5% Trypsin/EDTA solution (25200-056, Gibco) buffered with 0.1 M TRIS (Nr. 4855.3, Roth) and adjusted to pH 7.8 with 1 M HCl solution. The larval hyaloid vasculature was dissected under a stereoscope and displayed in PBS for visualization according to Jung's protocol. [49] Confocal images for phenotype evaluation were acquired using a confocal microscope (DM6000 B) with a scanner (Leica TCS SP5 DS) utilizing a 20×/0.7 objective, 1024 × 1024 pixels, and 0.5 μm z-steps. Vascular cross points of blood vessels were regarded as "branches", and small new blood vessels were counted and addressed as "sprouts" within the circumference of the hyaloid per sample. For imaging of the zebrafish adult retinal vasculature, retina dissection and analysis were performed as recently described. [50]
In brief, PFA-fixed heads from adult zebrafish were transferred to an agarose platform covered with 1× PBS, and the eyes were removed from the head as a first step. The retina was detached from the eye and washed twice with 1× PBS. The washed retina was immersed in mounting medium and covered with a cover slide. Images were taken using the DM6000 B confocal microscope with the Leica TCS SP5 DS scanner. Parameters: 600 Hz, 1024 × 1024 pixels, and 1.5 μm z-steps. Quantification of branch points and sprouts was performed using GIMP and ImageJ in squares of 350 × 350 μm².
Analysis of Kidney Morphology: Kidneys for the EM study were fixed for at least 2 h at room temperature in 3% glutaraldehyde solution in 0.1 M cacodylate buffer, pH 7.4, cut into pieces of ≈1 mm³, washed in buffer, post-fixed for 1 h at 4°C in 1% aqueous osmium tetroxide, rinsed in water, dehydrated through graded ethanol solutions, transferred into propylene oxide, and embedded in epoxy resin (glycidether 100). Semithin and ultrathin sections were cut with an ultramicrotome (Reichert Ultracut E). Semithin sections of 1 μm were stained with methylene blue. Ultrathin sections of 60-80 nm were treated with uranyl acetate and lead citrate and examined with a JEM 1400 electron microscope equipped with a 2K TVIPS CCD camera (TemCam F216). For Periodic acid-Schiff staining, kidneys were fixed in 10% buffered formalin, routinely embedded in paraffin, and cut into 4 μm-thick sections. For quantification of the GBM on EM sections, up to 15 images were analyzed per genotype.
Pharmacological Treatment of Zebrafish Embryos/Larvae: Fertilized zebrafish embryos were transferred into 6-well plates, around 30 embryos per well, with 5 mL egg water. At 24 hpf, the chorion of the zebrafish embryos was removed using sharp tweezers, and 0.003% PTU was added to the egg water. For the ACR intervention and rescue experiments, treatments with 10 μM ACR (S-11030F1; CHEM SERVICE), 10 μM PK11195 (C0424; Sigma-Aldrich), and 10 mM l-carnosine (C9625; Sigma-Aldrich) were started at 24 hpf and continued until the end of the experiment. The medium was refreshed daily.
Whole-Body Glucose Determination in Zebrafish Larvae: Zebrafish larvae were collected at 5 dpf and snap frozen. Approximately 20-25 larvae per clutch were homogenized in glucose assay buffer with an ultrasonic homogenizer (90% intensity, 2 × 15 s). The glucose content was determined according to the manufacturer's instructions (Glucose Assay Kit, CBA086, Sigma-Aldrich).
ACR Determination in Zebrafish Larvae and Liver: Zebrafish larvae at 96 hpf and adult livers were collected and snap frozen. Approximately 40-50 larvae per clutch were homogenized in 1× PBS with an ultrasonic homogenizer (90% intensity, 2 × 15 s). Protein-bound ACR was determined according to the manufacturer's instructions (Acrolein ELISA Kit, MBS7213206, MyBioSource Inc).
Enzyme Activity Assay: At 96 hpf, around 50 zebrafish larvae per measurement were anesthetized with 0.003% tricaine and snap frozen. ALDH activity was assayed at 25°C in 75 mM Tris-HCl (pH 9.5) containing 10 mM DL-2-amino-1-propanol, 0.5 mM NADP, and 2 mM MG by measuring the rate of NADPH formation at 340 nm. Glo1 activity was determined spectrophotometrically by monitoring the change in absorbance at 235 nm caused by the formation of S-D-lactoylglutathione. AKR activity was determined by measuring the rate of NADPH consumption at 340 nm, pH 7.0, and 25°C. The assay mixture contained 100 mM potassium phosphate, 10 mM DL-glyceraldehyde or 5 mM ACR, and 0.1 mM NADPH.
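All of the photometric enzyme assays above reduce to converting an absorbance change per minute into an enzymatic rate via the Beer-Lambert law. Below is a minimal sketch of that conversion; the extinction coefficient of NAD(P)H at 340 nm (6220 M−1 cm−1), the 1 cm path length, and the example numbers are standard textbook assumptions, not values reported in this paper.

```python
# Convert a spectrophotometric rate (delta A340 per minute) into a specific
# enzyme activity via the Beer-Lambert law: A = epsilon * c * l.
EPSILON_NADPH = 6220.0   # M^-1 cm^-1, molar absorptivity of NAD(P)H at 340 nm
PATH_LENGTH = 1.0        # cm, assumed cuvette path length

def specific_activity(delta_a_per_min, assay_volume_ml, protein_mg):
    """Return activity in umol NAD(P)H converted per minute per mg protein (U/mg)."""
    rate_molar = delta_a_per_min / (EPSILON_NADPH * PATH_LENGTH)   # mol L^-1 min^-1
    umol_per_min = rate_molar * 1e6 * (assay_volume_ml / 1000.0)   # umol min^-1
    return umol_per_min / protein_mg

# Hypothetical example: delta A340 of 0.12/min in a 1 mL assay with 0.5 mg protein
print(f"{specific_activity(0.12, 1.0, 0.5):.4f} U/mg")
```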
Glucokinase activity was determined based upon the reduction of NAD through a coupled reaction with glucose-6-phosphate dehydrogenase and was measured spectrophotometrically by the increase in absorbance at 340 nm. [51] The phosphofructokinase activity assay was based upon the oxidation of NADH through coupled reactions with aldolase, triosephosphate isomerase, and glyceraldehyde-3-phosphate dehydrogenase. Activity was determined by measuring the decrease in absorbance at 340 nm. [52] The pyruvate kinase activity assay was based upon the oxidation of NADH through a coupled reaction with l-lactic dehydrogenase and was determined spectrophotometrically by the decrease in absorbance at 340 nm. [53]
Detection of S-Nitrosylation: S-nitrosylation was detected using the Biotin Switch Assay Kit (Abcam, ab236207) as described previously. [26]
MG and Glyoxal Measurement: At 96 hpf, around 50 zebrafish larvae per measurement were anesthetized with 0.003% tricaine and snap frozen. MG and glyoxal were measured as previously described. [48]
Metabolomic Analysis: Detection was done in cooperation with the Metabolomics Core Technology Platform of the Centre for Organismal Studies Heidelberg. Fifty zebrafish larvae at 96 hpf per measurement, or livers, were snap frozen in liquid nitrogen. Adenosine compounds, thiols, free amino acids, fatty acids, and primary metabolites were measured as previously described. [9]
Reverse-Transcription Quantitative Polymerase Chain Reaction Analysis (RT-qPCR): Total RNA was isolated from Tg(fli1:EGFP) zebrafish larvae/adult organs at different time points using the RNeasy Mini Kit following the manufacturer's protocol (Qiagen). cDNA synthesis and the qPCR reactions were carried out as previously described. [26] Primers are listed in Table S2, Supporting Information.
RNA-Seq Analysis: Total RNA was isolated from akr1a1a+/+, akr1a1a−/−, and ACR-treated akr1a1a+/+ larvae at 120 hpf. Library construction and sequencing were performed with BGISEQ-500 (Beijing Genomic Institution, www.bgi.com, BGI). Gene expression analyses were conducted by the Core-Lab for Microarray Analysis, Center for Medical Research (ZMF). Quality control and data analysis were performed as described previously. [48] The RNA-Seq datasets produced in this study are available at GEO (Gene Expression Omnibus, NIH) under the accession number GSE168786 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE168786).
Protein Sequence Alignment: The amino acid sequences of the Akr1a1a proteins from zebrafish (Q6AZW2_DANRE), human (AK1A1_HUMAN), and mouse (AK1A1_MOUSE) were retrieved from the UniProt database (http://www.uniprot.org/). For the comparison, the sequences were selected and aligned with UniProt's own alignment tool (http://www.uniprot.org/align/).
Software: For the generation of the zebrafish akr1a1a exon/intron region, amino acid, and other schematic diagrams, the websites Ensembl (https://www.ensembl.org/) and Biorender (https://biorender.com/) were used. Analysis of the retinal vasculature was carried out using LAS AF Lite software from Leica for taking screenshots, GIMP for image cutting, and ImageJ for quantification. The "GCMS solution" software (Shimadzu) was used for data processing of the GC/MS analysis.
Statistical Analysis: The sample size for all experiments was more than three independent biological replicates. Data are displayed as mean with standard deviation.
Statistical significance between groups was analyzed using a two-tailed Student's t-test or one-way ANOVA (followed by Tukey's multiple comparisons) in GraphPad Prism 6.01 or 8.3.0. p-values below 0.05 were considered significant: *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.
Material Requests: The anti-Akr1a1a antibody and the akr1a1a zebrafish mutant generated in this study are available from the corresponding author with a completed Materials Transfer Agreement.
Supporting Information: Supporting Information is available from the Wiley Online Library or from the author.
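The statistical workflow described above maps directly onto common Python tooling. The following sketch uses hypothetical data, not the paper's measurements, to illustrate the two-group t-test and the ANOVA-plus-Tukey procedure with SciPy and statsmodels.

```python
# Illustration of the statistical tests named above: Student's t-test for
# two groups, one-way ANOVA followed by Tukey's multiple comparisons for
# more than two groups. All numbers are synthetic.
import numpy as np
from scipy.stats import ttest_ind, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
wt = rng.normal(1.0, 0.1, 6)       # e.g., six wild-type clutches
ko = rng.normal(1.5, 0.1, 6)       # e.g., six knockout clutches
acr = rng.normal(1.4, 0.1, 6)      # e.g., six ACR-treated clutches

print(ttest_ind(wt, ko))           # two groups: t-test

values = np.concatenate([wt, ko, acr])
groups = ["wt"] * 6 + ["ko"] * 6 + ["acr"] * 6
print(f_oneway(wt, ko, acr))       # omnibus one-way ANOVA
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # Tukey post hoc
```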
Hierarchical Instance Mixing across Domains in Aerial Segmentation
Abstract
We investigate the task of unsupervised domain adaptation in aerial semantic segmentation and discover that the current state-of-the-art algorithms designed for autonomous driving based on domain mixing do not translate well to the aerial setting. This is due to two factors: (i) a large disparity in the extension of the semantic categories, which causes a domain imbalance in the mixed image, and (ii) a weaker structural consistency in aerial scenes than in driving scenes, since the same scene might be viewed from different perspectives and there is no well-defined and repeatable structure of the semantic elements in the images. Our solution to these problems is composed of: (i) a new mixing strategy for aerial segmentation across domains called Hierarchical Instance Mixing (HIMix), which extracts a set of connected components from each semantic mask and mixes them according to a semantic hierarchy, and (ii) a twin-head architecture in which two separate segmentation heads are fed with variations of the same images in a contrastive fashion to produce finer segmentation maps. We conduct extensive experiments on the LoveDA benchmark, where our solution outperforms the current state-of-the-art.
I. INTRODUCTION
Semantic segmentation aims to predict, for each individual pixel in an image, a semantic category from a predefined set of labels. Such a fine-grained understanding of images finds numerous applications in aerial robotics [1]-[9], where it has achieved remarkable results by leveraging deep learning models trained on open datasets with large quantities of labeled images. However, these results do not carry over when the models are deployed to operate on images that come from a distribution (target domain) different from the data experienced during training (source domain). The difficulty in adapting semantic segmentation models to different data distributions is not limited to the aerial setting, and it is tightly linked to the high cost of generating pixel-level annotations [10], which makes it unreasonable to supplement the training dataset with large quantities of labeled images from the target domain. A recent trend in the state-of-the-art addresses this challenge using domain mixing as an online augmentation to create artificial images with elements from both the source and the target domain, thus encouraging the model to learn domain-agnostic features [11]-[14]. In particular, both DACS [12] and DAFormer [13] rely on ClassMix [15] to dynamically create a binary mixing mask for a pair of source-target images by randomly selecting half of the classes from their semantic labels (the true label for the source, the predicted pseudo-label for the target).
Figure 1 (caption, partial): ...As a result, it generates erroneous images that are detrimental to Unsupervised Domain Adaptation training in the aerial scenario. Instead, our HIMix extracts instances from each semantic label and then composes the mixing mask after sorting the extracted instances based on their pixel count. This mitigates some artifacts (e.g., partial buildings) and improves the balance of the two domains.
Although this mixing strategy yields state-of-the-art results in driving scenes, it is less effective in an aerial context. We conjecture that this is largely caused by two factors:
Domain imbalance in mixed images. Segmentation-oriented aerial datasets are often characterized by categories with vastly different extensions (e.g., cars and forest).
While this may be dealt with using techniques such as multi-scale training in standard semantic segmentation [16], the disparity in raw pixel counts between classes may be detrimental to an effective domain adaptation through class mixing, as the composition may favor either domain (see Fig. 1, left).
Weak structural consistency. The scenes captured by a front-facing camera onboard a car have a consistent structure, with the street at the bottom, the sky at the top, sidewalks and buildings at the sides, and so on. This structure is preserved across domains as well, as in the classic Synthia [17] → CityScapes [10] setting. Thus, when copying objects from one image onto the other, they are likely to end up in a reasonable context. This is not true for aerial images, where there is no consistent semantic structure (see Fig. 1, left).
To solve both problems, we propose a new mixing strategy for aerial segmentation across domains called Hierarchical Instance Mixing (HIMix). HIMix extracts from each semantic mask a set of connected components, akin to instance labels. The intuition is that aerial tiles often present very large stretches of land, divided into instances (e.g., forested areas separated by a road). HIMix randomly selects from the individual instances a set of layers that will compose the binary mixing mask. This helps to mitigate the pixel imbalance between the source and target domains in the artificial image. Afterwards, HIMix composes these sampled layers by sorting them, based on the observation that there is a semantic hierarchy in aerial scenes (e.g., cars lie on roads and roads lie on stretches of land). We use the pixel count of the instances to determine their order in this hierarchy, placing smaller layers on top of larger ones. While not optimal in some contexts (e.g., buildings should not appear on top of water bodies), this ordering also reduces the bias towards the categories with larger surfaces in terms of pixels, as they are placed below the other layers of the mask (see Fig. 1, right).
Besides the mixing strategy itself, there is also the general problem that the effectiveness of the domain mixing is strongly dependent on the accuracy of the pseudo-labels generated on the target images during training. This is especially true when the combination itself requires layering individual entities from either domain into a more coherent label. A key factor for an effective domain adaptation using self-training is in fact the ability to produce consistent predictions, resilient to visual changes. For this reason, we propose as a second contribution a twin-head UDA architecture in which two separate segmentation heads are fed with contrastive variations of the same images to improve pseudo-label confidence and make the model more robust and less susceptible to perturbations across domains, inevitably driving the model towards augmentation-consistent representations.
We test our complete framework on the LoveDA benchmark [18], the only dataset designed for evaluating unsupervised domain adaptation in aerial segmentation, where we exceed the current state-of-the-art. We further provide a comprehensive ablation study to assess the impact of the proposed solutions. The code will be made available to the public to foster research in this field.
II. RELATED WORK
Concerning the application to aerial images, despite a processing pipeline comparable to other settings, there are peculiar challenges that demand specific solutions.
Firstly, aerial and satellite data often include multiple spectra besides the visible bands, which can be leveraged in different ways, such as including them as extra channels [9] or adopting multi-modal encoders [4]. Visual features represent another major difference: unlike other settings, aerial scenes often display a large number of entities on complex backgrounds, with wider spatial relationships. In this case, attention layers [28] or relation networks [29] are employed to better model long-distance similarities among pixels. Another distinctive trait of aerial imagery is the top-down point of view and the lack of the reference points that can be observed in natural images (e.g., the sky always on top). This can be exploited to produce rotation-invariant features using ad-hoc networks [30], [31], or through regularization [32]. Lastly, aerial images are characterized by disparities in class distributions, since they include both small objects (e.g., cars) and large stretches of land. This pixel imbalance can be addressed with sampling and class weighting [13], or with ad-hoc loss functions [33].
B. Domain Adaptation
Domain Adaptation (DA) is the task of training a model on one domain while adapting it to another. The main objective of domain adaptation is to close the domain shift between two dissimilar distributions, commonly referred to as the source and target domains. The initial DA techniques proposed in the literature attempt to minimize a measure of divergence across domains by utilizing a distance measure such as the MMD [34]-[36]. Another popular approach to DA in semantic segmentation is adversarial training [37]-[40], which involves playing a min-max game between the segmentation network and a discriminator. The latter is responsible for discriminating between domains, whereas the segmentation network attempts to trick it by making the features of the two distributions indistinguishable. Other approaches, such as [41]-[43], employ image-to-image translation algorithms to generate target images styled as source images or vice versa, while [44] identifies the batch normalization layer as a major bottleneck for domain adaptation. More recent methods like [45]-[47] use self-learning techniques to generate fine pseudo-labels on target data to fine-tune the model, whereas [12], [13] combine self-training with class mixing to reduce the low-quality pseudo-labels caused by the domain shift between the different distributions. These mixing algorithms are very effective on data with a consistent semantic organization of the scene, such as self-driving scenes [10], [48]. In these scenarios, naively copying half of the source image onto the target image increases the likelihood that the semantic elements will end up in a reasonable context. This is not the case with aerial imagery (see Fig. 1). HIMix not only mitigates this problem, but it also reduces the bias towards categories with larger surfaces.
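For reference, the ClassMix-style baseline discussed above can be sketched in a few lines; this is our simplified illustration, not the official implementation.

```python
# Simplified ClassMix-style mixing: sample half of the source classes and
# paste the corresponding source pixels onto the target image and label.
import numpy as np

def classmix(src_img, src_lbl, tgt_img, tgt_lbl, seed=0):
    rng = np.random.default_rng(seed)
    classes = np.unique(src_lbl)
    chosen = rng.choice(classes, size=max(1, len(classes) // 2), replace=False)
    m = np.isin(src_lbl, chosen)                       # binary mixing mask M
    mixed_img = np.where(m[..., None], src_img, tgt_img)  # (H, W, 3) images
    mixed_lbl = np.where(m, src_lbl, tgt_lbl)             # (H, W) labels
    return mixed_img, mixed_lbl
```

As the paper argues, this criterion ignores both instance structure and the semantic hierarchy, which is exactly what HIMix addresses in the next section.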
III. METHOD
A. Problem statement
We investigate the aerial semantic segmentation task in the context of unsupervised domain adaptation (UDA). Let us define as X the set of RGB images constituted by the set of pixels I, and as Y the set of semantic masks associating a class from the set of semantic classes C to each pixel i ∈ I. We have two sets of data accessible at training time: (i) a set of N_s annotated images from the source domain, denoted as X_s = {(x_s, y_s)} with x_s ∈ X and y_s ∈ Y; (ii) a set of N_t unlabelled images from the target domain, denoted as X_t = {x_t} with x_t ∈ X. The goal is to find a parametric function f_θ that maps an RGB image to a pixel-wise probability, i.e., f_θ : X → R^{|I|×|C|}, and evaluate it on unseen images from the target domain. In the following, we indicate the model output at a pixel i for the class c as p_i^c, i.e., p_i^c = f_θ(x)_{i,c}. The parameters θ are tuned to minimize a categorical cross-entropy loss defined as

L_seg(x, y) = − Σ_{i ∈ I} Σ_{c ∈ C} y_i^c log p_i^c,    (1)

where y_i^c represents the ground truth annotation for the pixel i and class c.
B. Framework
We present an end-to-end trainable UDA framework based on the use of target pseudo-labels. To better align the domains, we construct artificial images using our HIMix strategy (III-C), which generates mixed images exploiting the instances produced both from the source ground truth and from the target pseudo-label. Rather than using a secondary teacher network derived from the student as an exponential moving average, as in [12], [13], we propose a twin-head architecture (III-D) with two separate decoders trained in a contrastive fashion to provide finer target pseudo-labels.
C. Hierarchical Instance Mixing
Given the pairs (x_s, y_s) and (x_t, ŷ_t), where ŷ_t = f_θ(x_t) are the pseudo-labels computed from the model prediction on the target domain, the purpose of the mixing strategy is to obtain a third pair, namely (x_m, y_m), whose content is derived from both the source and target domains using a binary mask M. While techniques based on ClassMix have been successfully applied in many UDA settings, we discover that they may not be optimal in the aerial scenario, since they superimpose parts of the source domain onto the target without taking their semantic hierarchy into consideration (e.g., cars appear on top of roads, not vice versa). In contrast, we propose a Hierarchical Instance Mixing strategy (HIMix), which is composed of two subsequent steps: (i) instance extraction and (ii) hierarchical mixing.
Instance extraction. Aerial tiles often present uniform land cover features, with many instances of the same categories in a single image. In the absence of actual instance labels, this peculiarity can be exploited to separate the semantic annotations into connected components. Here a connected component is a set of pixels that have the same semantic label and such that for any two pixels in this set there is a path between them that is entirely contained in the same set. Figure 2 illustrates an example of this process, with a forest that is separated into two instances by a road. This increases the number of regions which can be randomly selected for the mixing phase, thus mitigating the pixel imbalance between the source and target domains in the final mixed sample. Note that this procedure is applied on the concatenation of the source and target labels.
Hierarchical mixing. We observe that instances in aerial imagery have an inherent hierarchy that is dictated by their semantic categories. In other words, land cover categories such as barren or agricultural frequently appear in the background w.r.t. smaller instances such as roads or buildings. The mixing step follows this hierarchy when combining the instances from source and target, and it is illustrated in Fig. 2.
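A condensed sketch of the two HIMix steps follows. It is our approximation of the procedure described above: the function names, the sampling ratio, and the choice of the target image as the base canvas are assumptions, not the authors' released code.

```python
# HIMix sketch: (i) split each semantic mask into connected components,
# (ii) sample a subset of instance layers from both domains and compose
# them largest-first, so smaller instances end up on top.
import numpy as np
from scipy import ndimage

def extract_instances(label_map):
    """Split a semantic mask (H, W) into per-class connected components.
    (An ignore index, if present, should be filtered out beforehand.)"""
    instances = []
    for c in np.unique(label_map):
        comp, n = ndimage.label(label_map == c)
        for k in range(1, n + 1):
            instances.append((c, comp == k))   # (class id, boolean mask)
    return instances

def himix(src_img, src_lbl, tgt_img, tgt_pseudo, keep_ratio=0.5, seed=0):
    """Mix source and target by layering sampled instances, largest first."""
    rng = np.random.default_rng(seed)
    layers = [("s",) + inst for inst in extract_instances(src_lbl)]
    layers += [("t",) + inst for inst in extract_instances(tgt_pseudo)]
    rng.shuffle(layers)
    layers = layers[: max(1, int(keep_ratio * len(layers)))]
    layers.sort(key=lambda l: l[2].sum(), reverse=True)  # big below, small on top

    mixed_img = tgt_img.copy()       # target as base canvas (our assumption)
    mixed_lbl = tgt_pseudo.copy()
    for domain, cls, mask in layers:
        img = src_img if domain == "s" else tgt_img
        mixed_img[mask] = img[mask]
        mixed_lbl[mask] = cls
    return mixed_img, mixed_lbl
```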
D. Twin-Head Architecture

State-of-the-art self-training UDA strategies, such as [13], make use of teacher-student networks to improve the consistency of the pseudo-labels. Albeit dealing with consistency in time, teacher-based approaches do not directly cope with geometric or stylistic consistency. We propose a twin-head segmentation framework to directly address this, providing more consistent pseudo-labels and outperforming the standard tested methodologies, as shown in the ablation study IV-C. Our architecture (see Fig. 3) comprises a shared encoder g, followed by two parallel and lightweight segmentation decoders, h_1 and h_2. Training is carried out end to end, exploiting annotated source data and computing pseudo-labels from target images online, as detailed hereinafter.

Source training. With the purpose of driving the model towards augmentation-consistent representations, we feed the two heads with variations of the same image in a contrastive fashion. More specifically, given a source image x_s, we alter it with a sequence of random geometric (horizontal flipping, rotation) and photometric augmentations (color jitter), obtaining new pairs of samples. Specifically, at each iteration, the final input is composed of B_s = (x_s ∥ x̃_s, y_s ∥ ỹ_s), where ∥ denotes concatenation along the batch dimension, x_s and y_s represent the original batch of images and the respective annotations, while x̃_s and ỹ_s represent the same samples altered by the geometric and photometric transformations, i.e., x̃_s = T_p(T_g(x_s)) and ỹ_s = T_g(y_s). The full augmented batch B_s is first forwarded to the shared encoder module g, producing a set of features. The latter, containing information derived from the images and their augmented variants, are split and forwarded to the two parallel heads, effectively obtaining two comparable outputs, h_1(g(x_s)) and h_2(g(x̃_s)). A standard cross-entropy loss, as shown in Eq. 1, is computed on both segmentation outputs. Working independently on different variations of the same images, the two heads can evolve in different ways while trying to minimize the same objective function. Using the same encoder yields a more robust, contrastive-like feature extraction that is less susceptible to perturbations. This is essential for producing more stable and precise pseudo-labels.

Mix training. The twin-head architecture is expressly designed to generate more refined pseudo-labels. Given an unlabeled target image x_t, the probabilities obtained after forwarding the image to both heads, σ(h_1(g(x_t))) and σ(h_2(g(x_t))), are compared, where σ indicates the softmax function. In order to extract a single pseudo-label, the most confident output is selected for each pixel. Formally, for each position i the output score is computed as p_i^c = max(σ_i(h_1(g(x_t))), σ_i(h_2(g(x_t)))), selecting the maximum value between the two. Once p_i^c is derived, the pseudo-label ŷ_t necessary for class-mix is generated through:

ŷ_t^i = argmax_{c ∈ C} p_i^c.

At this point, the mixed pairs of inputs can be computed through HIMix, as described in the previous sections, obtaining (x_m, y_m) as a composition of the source and target samples. Similar to source training, an augmented batch B_m = (x_m ∥ x̃_m, y_m ∥ ỹ_m) is computed through geometric and photometric transformations, then fed to the model to compute L_seg(B_m). To reduce the impact of low-confidence areas, a pixel-wise weight map w_m is generated. Similar to [12], [13], the latter is computed as the percentage of valid pixels above a confidence threshold.
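A minimal PyTorch sketch of this pixel-wise fusion (module names are placeholders, not the authors' API) could look as follows; the returned confidence map is what feeds the weighting described next:

```python
import torch


@torch.no_grad()  # gradients are not propagated while pseudo-labelling
def twin_head_pseudo_label(encoder, head1, head2, x_t):
    """Fuse the two heads' predictions into one pseudo-label.
    encoder/head1/head2 are assumed to be nn.Modules whose composition
    yields logits of shape (B, C, H, W); names are illustrative."""
    feats = encoder(x_t)
    p1 = torch.softmax(head1(feats), dim=1)   # sigma(h1(g(x_t)))
    p2 = torch.softmax(head2(feats), dim=1)   # sigma(h2(g(x_t)))
    p = torch.maximum(p1, p2)                 # per-pixel, per-class max
    conf, y_hat = p.max(dim=1)                # most confident class per pixel
    return y_hat, conf                        # pseudo-label and its confidence
```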
Formally, for each pixel i:

w_m^i = 1 if pixel i derives from the source label, and w_m^i = m_τ otherwise,

where m_τ represents the Max Probability Threshold [49] computed over the pixels belonging to the pseudo-label as follows:

m_τ = |{ i ∈ I : max_{c ∈ C} p_i^c > τ }| / |I|.

In practice, each pixel of the mixed label is either weighted as 1 for regions derived from the source domain, or by a factor obtained as the number of pixels above the confidence threshold, normalized by the total amount of pixels (a short code sketch of this weighting is given at the end of this section). Note that during all of these computations the gradients are not propagated. The training procedure is detailed in Algorithm 1.

Algorithm 1: Twin-head UDA training procedure
Initialize: model f_θ : X → R^{|I|×|C|} with encoder g and twin heads h_1, h_2, where p_i^c denotes the model prediction of pixel i for class c;
Input: source domain X_s with N_s pairs (x_s, y_s), x_s ∈ X, y_s ∈ Y, and semantic classes C; target domain X_t with N_t images x_t, lacking ground truth labels;
while epoch in max epochs do
    while (x_s, y_s, x_t) in X_s × X_t do
        // Train on source X_s
        Compute the augmented source batch B_s = (x_s ∥ x̃_s, y_s ∥ ỹ_s);
        Train f_θ on the source labels with L_seg(B_s);
        // Mix source and target pairs
        Compute pseudo-labels via majority voting: ŷ_t = max(h_1(g(x_t)), T_g^{-1}(h_2(g(x_t))));
        Extract source instance labels i_s = CCL(y_s) with instances ∈ K_s;
        Extract target instance pseudo-labels i_t = CCL(ŷ_t) with instances ∈ K_t;
        Compute one-hot encoded instance labels, sorted by pixel size;
        Compute the mixed image and labels as x_m = M ⊙ x_s + (1 − M) ⊙ x_t and y_m = M ⊙ y_s + (1 − M) ⊙ ŷ_t;
        Train f_θ on the augmented mixed batch B_m with the weighted loss L_seg(B_m);
    end
end

A. Training Details

We assess the performance of our approach on the LoveDA dataset [18]. According to that benchmark, we conduct two series of unsupervised domain adaptation experiments: rural→urban and urban→rural. We measure the performance on the test set of each target domain.

Dataset. To our knowledge, the LoveDA dataset [18] is the only open and free collection of land cover semantic segmentation images in remote sensing explicitly designed for UDA. Both urban and rural areas are included in the training, validation, and test sets. Data is gathered from 18 different administrative districts in China. The urban training set has 1156 images, while the rural training set contains 1366 images. Each image is supplied in a tiled format of 1024×1024 pixels annotated with seven categories.

Metric. Following [18], we use the averaged Intersection over Union (mIoU) metric to measure the accuracy of all the experiments conducted.

Implementation. To implement our solution, we leverage the mmsegmentation framework, which is based on PyTorch. We train each experiment on an NVIDIA Titan GPU with 24 GB of RAM. We refer to DAFormer [13] for the architecture and configuration of hyperparameters. We use the MiT-B5 model [27] pretrained on ImageNet as the encoder of our method, while the segmentation decoder module corresponds to the SegFormer head [27]. We train on every setting for 40k iterations using AdamW as the optimizer. The learning rate is set to 6×10^-5, the weight decay to 0.01, and the betas to (0.9, 0.99). We also adopt a polynomial decay with a factor of 1.0 and warm-up for 1500 iterations. To cope with possible variations, every result presented has been obtained as the average over three seeds {0, 1, 2}. Training is performed on random crops, by augmenting data through random resizing in the range [0.5, 2.0], horizontal and vertical flipping, and rotation of 90 degrees with probability p = 0.5, together with random photometric distortions (i.e., brightness, saturation, contrast and hue). Following [12], [13], we set τ = 0.968.
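The sketch below (our illustration under stated assumptions: conf holds the per-pixel max-class probabilities returned by the twin heads, and source_mask marks pixels taken from the source label) computes m_τ and the weight map w_m:

```python
import torch


def mix_weight_map(conf, source_mask, tau=0.968):
    """Pixel-wise weight map w_m for the mixed batch; tau follows the
    value used in the paper. conf: (B, H, W) float tensor of max-class
    probabilities; source_mask: (B, H, W) bool tensor."""
    target = ~source_mask
    # m_tau: share of target pseudo-label pixels whose confidence
    # exceeds the threshold, one scalar per image in the batch.
    above = (conf > tau) & target
    m_tau = above.flatten(1).sum(1).float() / target.flatten(1).sum(1).clamp(min=1).float()
    # Source-derived pixels weigh 1; target-derived pixels weigh m_tau.
    w = torch.where(source_mask, torch.ones_like(conf), m_tau.view(-1, 1, 1).expand_as(conf))
    return w
```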
The final inference on the test set is instead performed on raw images without further transformations.

B. Results

Urban→Rural. The results for this set of experiments are reported in Tab. I. They corroborate the complexity of the task, which is due to a strong and inconsistent class distribution in the source domain: it is dominated by urban scenes with a mix of buildings and highways but few natural items. This causes a negative transfer to the target domain, since both adversarial strategies and self-training procedures achieve overall performance equivalent to, if not worse than, the Source Only model. Specifically, when we evaluate the best-performing adversarial training technique, which is represented by CLAN, we gain just a +1.8 improvement.

HIMix exhibits its ability to boost rural and underrepresented classes, such as agriculture, as also evidenced by the qualitative results in Fig. 4. In comparison to DACS and DAFormer, our technique better recognizes and classifies contours and classes, such as water, despite their underrepresentation in the source domain. This is also true for common categories with different visual features, such as road, which can appear in paved and unpaved variants. Fig. 4 supports the superior ability of our model to discern between rural and urban classes. While DACS does not recognize buildings and DAFormer misclassifies parts of them as agricultural terrain, our model demonstrates its efficacy in minimizing the bias towards those categories with larger surfaces, providing results close to the ground truth.

C. Ablation

Twin-Head and HIMix. To demonstrate the effectiveness of the twin-head architecture, we compare it to the traditional single-head structure, which generates pseudo-labels using a secondary teacher network derived from the student as an exponential moving average. This study also demonstrates the potential of HIMix when paired with traditional single-head training. For both settings, we perform an extensive ablation study considering MiT-B5 [27] as the backbone, and we report the results in Tab. III. The twin-head design paired with the Standard Class Mix (line 3) performs better than the single-head architecture (line 1), implying that our solution is better at providing finer pseudo-labels with correct class segmentation, as also shown in the first column of Fig. 5. HIMix increases recognition performance even when paired with a single-head architecture (line 2), particularly for categories with a lower surface area in terms of pixels, which are placed below those with larger surfaces when using the Standard Class Mix. That is why, in the top-left image of Fig. 5, the model is unable to grasp their semantics effectively and erroneously classifies a building as an agricultural pattern. In comparison, HIMix can accurately distinguish buildings (top-right picture in Fig. 5), even though the prediction has poorly defined contours. The best results are obtained when the twin-head ability to provide an enhanced segmentation map is combined with the HIMix ability to maintain a correct semantic structure (line 5), yielding the best accuracy and the finest segmentation map, as shown in the bottom-right image of Fig. 5. We finally ablate the different components of our HIMix to assess each term's contribution to overall performance (lines 4-5). The Hierarchical Mixing always improves over the Instance Extraction alone, by +1.1 and +1.3 in the Urban→Rural and Rural→Urban scenarios, respectively.
V. CONCLUSIONS

We investigated the problem of Unsupervised Domain Adaptation (UDA) in aerial semantic segmentation, showing that the peculiarities of aerial imagery, principally the lack of structural consistency and a significant disparity in semantic class extension, must be taken into consideration. We addressed these issues with two contributions. First, a novel domain mixing method that consists of two parts: an instance extraction step that chooses the connected components from each semantic map, and a hierarchical mixing step that sorts and fuses the instances based on their pixel counts. Second, a twin-head architecture that produces finer pseudo-labels for the target domain, improving the efficacy of the domain mixing. We demonstrated the effectiveness of our solution with a comprehensive set of experiments on the LoveDA benchmark.

Limitations. Despite the excellent results, we observed that our solution performs worse than the source-only model on the barren class, particularly in the Urban→Rural scenario. This is possibly due to the large disparity in absolute pixel counts between the source and target domains in this category. Additionally, the twin-head architecture, despite its superior performance, has a greater number of parameters that slow down the training (approximately 15 h).

Future Works. We will evaluate lighter segmentation heads and other contrastive techniques to accelerate overall training and improve performance, particularly on specific semantic classes.
Course of Cumulative Cost Curve (CCCC) as a Method of CAPEX Prediction in Selected Construction Projects

Forecasting the actual cost of the implementation of a construction project is of great importance for technical management: it enables financial resources to be maintained in a controlled manner and kept as close as possible to the actual state. Based on the analysis of the developed knowledge base, which contains data from 612 reports of the Bank Investment Supervision regarding 45 construction projects from 2006 to 2023 with a total value of over PLN 1,300,000,000, best-fit curves were determined, and the expected area of the cumulative actual cost of selected construction projects was specified. The obtained polynomial functions and graphs of real areas of cost curves (in the form of nomograms) constitute a reliable graphical representation that enables the application of the research results in typologically similar groups/sectors of the construction industry. The elaborated course of the cumulative cost curve (CCCC) method of CAPEX prediction in selected construction projects combines the S-curve, polynomial functions, and the best-fit area of cumulative earned cost. The research used scientific tools that can be practically and easily applied by both managers and participants of the investment process.

Introduction

Each participant in the investment process plays a crucial role in limiting the cost of implementing a construction project. The investor, construction manager, and designer must plan the value and structure of the investment cost and also control its cumulative course over time. The issue refers mainly to execution costs, which constitute the lion's share (over 80%) of the capital cost estimation (CAPEX) of any construction project.

It is the manager of the construction project who faces a fundamental managerial challenge, which involves managing the cumulative investment cost in such a way that its actual value (performed and paid construction works) is as close as possible (with the least deviation) to the planned value.

Exceeding the budgeted cost and extending the work completion time are common occurrences in the realization of numerous construction projects [1][2][3]. The exceedance values vary depending on the source of the data. According to Flyvbjerg et al. [4], cost overruns occur in as many as 9 out of 10 construction projects, and the overrun value can reach up to 183%. For example, in the Netherlands, the average cost overrun was 16.5% [5], in Portugal, 24% [6], and in Qatar, 54% [7].

A cost overrun (alternatively: cost increase; CAPEX overrun) in the construction industry refers to a situation in which the actual cost incurred during the implementation of a construction project exceeds the initial budget or the estimated cost determined by the investor. This means that a budget overrun occurs when the final cost of a construction project exceeds its original budget. Engineering practice confirms that in most construction projects the actual cost increases when compared to the budgeted cost [8,9], and this phenomenon has become an almost natural part of construction projects, both for buildings and infrastructure.
Inaccurate estimated costs at the initiation phase of a construction project (i.e., when budgeting and defining the scope of the project) and improper planning of the investment process are some of the most often discussed causes of cost overruns in the literature [10]. Other causes of cost overruns include: errors or omissions in design documentation that lead to design changes [11][12][13][14][15], an unrealistic duration of construction works [16], a lack of availability of qualified labor [17], a lack of management staff [18], a lack of effective coordination between the participants of the investment process [19], a lack of experience of contractors [20], and variable weather conditions [21].

Cost overruns have become a global problem due to the complex nature of the planning, design, implementation, and maintenance of a construction project. Discrepancies between budgeted and actual costs have both negative effects and financial consequences for the public and private sectors. They cause negative economic consequences for the entire construction industry, as well as for construction contractors, subcontractors, designers, etc. Cost overruns have a substantial impact, among others, on the profitability of construction companies and the deadline for completing the construction project. They may also damage the reputation of the construction company and lead to a loss of trust among stakeholders (including investors, customers, and employees). Additionally, banks granting investment loans face a significant challenge related to financing and unrealistic budget reserves.

When preparing construction projects, investors prepare investment budgets, which, as professional experience shows, do not always reflect reality. To make budget provisions more realistic, banks need engineering specialists who will conduct research to measure the actual cost of implementing a construction project that is intended to be financed with an investment loan. Therefore, it seems justified to conduct research that leads to adjusting investment reserves to the actual situation and then matching the budgeted cost of forecast construction projects to their actual cost.

The international organization PMI (Project Management Institute), which has existed since 1969 and brings together over 600,000 project management experts operating in almost every country in the world, states that 80% of construction projects currently end in budget overruns.

Cost overruns constitute a significant challenge for investors and construction contractors, making it difficult to achieve a profit from the completed project. It is only thanks to effective monitoring mechanisms that it is possible to minimize the occurrence of this phenomenon, which is why it is so crucial to implement an effective method of planning the investment budget and controlling the actual cost during the implementation of a construction project. Exceeding the time limit (alternatively: delay, extension of time) in construction refers to a situation in which the actual duration of a construction project is longer than the time planned in the schedule and in the concluded construction contract [22].
On the basis of the literature review, the main causes of delays in construction projects can be distinguished. One of the most common reasons for delays indicated by subsequent researchers is the incorrect development of the work schedule [23], also understood as inefficient resource planning [24], or the inaccurate estimation of the duration of individual tasks as well as of the entire construction project [25]. Other reasons for delays in construction projects include the financial difficulties of contractors [26], changes in the scope of the project during implementation [27], a lack of effective communication between participants in the investment process [28,29], a shortage of skilled labor [30], mistakes during construction [31], the low productivity of work teams [32], weather conditions [33], as well as insufficient archeological exploration of the area [34].

Delays in construction projects are closely correlated with cost overruns [35]. The reasons for project delays are, in most cases, the causes of cost overruns and vice versa. Some researchers even treat delays and cost overruns as the same [36,37]. The research by Belay and Torp [38] also showed that there is a positive, strong correlation between the increasing duration of a project and the cost variance.

The goal of the research was to develop a method that would allow the cumulative budgeted cost to be forecasted with the best fit to (i.e., the smallest deviation from) the actual cost in a selected group of construction projects.

Methods and Models

The commonly known and used methods and tools for planning and monitoring construction projects include a method based on the analysis of the curve of the cumulative cost of implementing a construction project (the "S" curve method) and the earned value method (EVM). Both methods presented in the article are used to control and monitor the course of construction works. The scientific research and professional experience of the authors of the article indicate that the existing models of planned cumulative cost curves often diverge from reality and are overly complex, making them impractical for managing construction projects. Therefore, in the research, attempts were made to find a compromise between the affordability and low degree of complexity of calculations and the information potential of the proposed proprietary method of forecasting the actual course of the implementation cost in selected construction projects. The essence of the method is its applicability, based on commonly available computer systems/programs.

Cumulative Cost Curve: The S-Curve Method

Presenting planned financial flows on a timeline using a cumulative cost chart is a simple and efficient tool for measuring the use of financial expenditures in a construction project [39]. The cumulative cost curve "S" illustrates the project's progression from the start of construction activities to their completion. It represents the total expenditure incurred by all allocated resources for each task. Graphically, the cumulative cost curve typically resembles the shape of the letter "S". By continuously collecting financial data, it is possible to generate and compare budgeted versus actual cost curves [40].

Many tools have been used to map the shape of the cumulative cost curve, including: the theory of fuzzy sets [41,42], the method of least squares and fuzzy regression [43], methods using elements of artificial intelligence [44,45], and elements of BIM technology [46].
The research also uses empirical methods of forecasting the course of the cumulative cost curve in various construction projects. The mathematical models of the cost curve existing in the literature are based on real, historical data concerning construction projects conducted, among others, in the UK [47], Iran [48], Taiwan [49], the United States [50], and Asian countries [51].

Subsequent researchers have endeavored to depict the trajectory of the cost curve by formulating mathematical relationships involving variables such as time and cost. Figure 1 presents their graphical interpretation in the form of cumulative cost curve charts. The cumulative cost curves presented in Figure 1 determine the area of cash flows within the specified envelope. It was noticed that it is not possible to use one theoretical model or one empirical mathematical expression that illustrates the course of the cumulative cost curve. When planning and monitoring the cost curve, it is advisable to utilize the curve envelope.

When comparing the mathematical models proposed by various authors to describe the shape of the cumulative cost curve, the most frequently used are a sixth-degree polynomial [41,52,53], a third-degree polynomial [49,54,55], and, less commonly, a second-degree polynomial and a linear function [56].

The cumulative cost curve is characterized by varying slopes, making it inappropriate to use first-degree polynomials (linear functions) or second-degree polynomials (quadratic functions) to accurately depict its course. Employing such descriptions may lead to inaccuracies and produce an unreliable representation of the cost curve [56,57].
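This trade-off can be illustrated numerically: the snippet below fits polynomials of several degrees to an invented, normalized S-shaped data set (for demonstration only, not data from the paper's knowledge base) and reports the coefficient of determination for each degree:

```python
import numpy as np

# Illustrative normalized time/cost points of a single project.
t = np.linspace(0.0, 1.0, 13)
cost = np.array([0.00, 0.02, 0.06, 0.13, 0.24, 0.38, 0.52,
                 0.66, 0.78, 0.87, 0.94, 0.98, 1.00])

for deg in (1, 2, 3, 6):
    coeffs = np.polyfit(t, cost, deg)          # least-squares fit
    resid = cost - np.polyval(coeffs, t)       # residuals of the fit
    r2 = 1.0 - resid.var() / cost.var()        # coefficient of determination
    print(f"degree {deg}: R^2 = {r2:.4f}")
```

Running it shows that the linear and quadratic fits leave visible systematic error, while the cubic already captures the S-shape closely and the sixth degree adds little except complexity.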
Therefore, it is justified to use higher-order polynomials, at least of the third degree, to describe the course of the cumulative cost curve. While a sixth-degree polynomial allows for a high correlation coefficient (close to unity, indicating a strong relationship and a good description of the phenomenon) and a low coefficient of variation (indicating low variability and homogeneity), its practical application may prove challenging and overly complex for decision-makers, including investors and contractors.

Cost Curves: The Earned Value Method (EVM)

The earned value method (EVM) is a popular project management system that is recommended by well-known methodologies such as the PMBoK Guide [58] and the International Project Management Association [59]. It combines the cost and timing of the managed project [60,61]. The EVM is a method for measuring the actual progress of the project [62]. It involves controlling the investment task (in accordance with the adopted planned material and financial schedule developed at the beginning of the implementation of a construction project) by periodically comparing the actually completed scope of work with the planned execution time and planned implementation cost [63]. This method enables cost and schedule deviations and performance indicators to be calculated, as well as the cost and duration of a construction project to be forecasted [64,65] (a minimal numerical illustration of these indicators is given at the end of this section). It also allows project implementation indicators to be recognized early, which in turn is helpful for planning possible corrective actions [66].

In EVM, cost values are a function of time and can be graphically presented in the form of curves, as in cumulative cost curve analysis. Therefore, at the planning stage of a construction project, a BCWS (budgeted cost of work scheduled) curve is created, which shows the budgeted cost of the planned works. The remaining two curves, BCWP (budgeted cost of work performed) and ACWP (actual cost of work performed), are calculated during the project based on data collected during its monitoring. These curves represent the current status of the investment at the time of monitoring, specifically on the current inspection day.
The EVM assumes that the duration and cost are determined for the current moment of investment monitoring, and its indicators show whether the construction project is delayed or whether the budget has been exceeded. The earned value method does not determine whether the deviations from actual values are within (or not) the range of possible deviations from the planned values that result from the expected variability of the project. In other words, even if a construction project is delayed at the time of inspection, given the inherent variability of the project and its tasks, the delay is likely to remain within the range of possible and acceptable delays. In such a case, the decision-maker is not forced to suddenly take corrective action. Moreover, due to the deterministic estimation of, e.g., the completion date, the earned value method does not allow the range of possible expected implementation effects to be determined.

As a result of the research, the EVM has been, and still is, constantly modified [67]. The extension of the method was achieved by introducing new, previously unavailable parameters/indicators, which allow (according to their authors) for more accurate calculations, e.g., the Schedule Forecast Indicator (SFI) [68], the Earned Value Forecasting Stability Indicator [69], the risk effectiveness index [70], the determination of the impact of unplanned time and cost deviations on the financial liquidity of a construction project [71], risk analysis [72], the analysis of uncertainty conditions [73], the assessment of the profitability of construction projects in random implementation conditions [74,75], and also the introduction of the time variance of the schedule and the budgeted cost [76].

Despite the availability of various methods and tools that support the planning and monitoring of construction projects, contractors still very often do not achieve the planned cost and time goals.

Both presented methods (the S-curve method and the earned value method), in their basic applications, are used to control and/or monitor the progress of construction works. The planned cumulative cost curve models proposed so far often diverge from reality and are overly complex, making them impractical for effectively managing construction projects. As some researchers and practitioners point out, it is important for the decision-makers of construction projects (i.e., investors, construction managers, and work managers) to use available and easily functional algorithms, programs, or calculation methods when planning, as well as when monitoring and controlling the progress of work. It is crucial that these methods are not burdened with many variables and uncertainties that are difficult to measure and difficult to define unambiguously. The computational appliance should be handy to any user [77].

Therefore, in the research, efforts were made to find a compromise between the accessibility and low complexity of calculations and the informative potential of the proposed proprietary method of forecasting the actual implementation cost of selected construction projects. The essence of the proposed original method is its applicability, which is based on commonly available computer systems/programs.
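As the numerical illustration announced above, the following sketch computes the standard textbook EVM indicators from the three curve values at a single inspection date (generic formulas, not parameters specific to this paper):

```python
def evm_indicators(bcws, bcwp, acwp, bac):
    """Textbook earned-value indicators at one inspection date
    (generic formulas; monetary inputs in any consistent unit)."""
    return {
        "SV": bcwp - bcws,           # schedule variance
        "CV": bcwp - acwp,           # cost variance
        "SPI": bcwp / bcws,          # schedule performance index
        "CPI": bcwp / acwp,          # cost performance index
        "EAC": bac * acwp / bcwp,    # estimate at completion (CPI-based)
    }

# Example: a project with a total budget of 10.0 m, inspected mid-way,
# that is both slightly delayed (SPI < 1) and over budget (CPI < 1).
print(evm_indicators(bcws=4.0, bcwp=3.6, acwp=4.2, bac=10.0))
```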
Approach to the Research

The definite objectives of the research and analyses were as follows:
• Building (through research) a representative set of data concerning the course of construction projects for the purpose of the research;
• Developing an original research methodology for forecasting the course of the cumulative cost curve and the cost area in selected construction projects;
• Analyzing the planned cost resulting from the work schedule and the actual cost incurred during the implementation of construction projects;
• Proposing an original, effective method for forecasting the actual cost in selected construction projects;
• Developing and using correlation coefficients to evaluate the proposed methods and models, as well as providing their parameters;
• Proposing, based on the course of the planned and actual accumulated costs, a model for forecasting the best adjustment of the cost curve in selected construction projects in the form of a polynomial function;
• Proposing the area of best adjustment of the cost curve for planning and monitoring the cumulative cost in selected construction projects.

Research Sample

The data collected for the research come from the authors' own investigations and professional work. This work involved providing the services of the Bank Investment Supervision (BIS) inspector on behalf of banks that grant investment loans for non-public procurement.

As part of the research, a targeted research sample was collected, containing data on 45 construction projects, with limitations resulting from:
• The duration of construction projects;
• Access to reliable data on the course of construction projects.

Collecting a complete data set for the course of a single construction project takes from 6 to sometimes even 34 months and results from the total duration of the analyzed construction project. In the collected data set, the average duration of a single construction project is 18 months.

The research sample contains data on selected construction projects, which means that it is a closed set. To facilitate comparison of the collected research material, it is essential that the source documents, in this case reports, adhere to a standardized method of collecting data on construction projects, irrespective of the type of construction project. And so, in the research, a research sample was created which comes from a single, independent entity providing the services of the BIS.

In the conducted research, the characteristics of homogeneous construction sectors were important; therefore, the selection of the research sample was deliberate. On the basis of the collected data, it was possible to distinguish typological research samples for construction projects with a similar profile and category of construction objects, which allowed for the gathering of a typologically representative research sample.
The collected knowledge base from 2006 to 2023 contains data on 45 construction projects, i.e., 612 reports with a total value of over PLN 1,300,000,000. A summary of the number of analyzed construction projects and obtained reports is presented in Table 1. Table 2 presents a detailed knowledge base that is divided into the analyzed construction sectors and the type of report (RW: preliminary report, RM: monthly report, RK: final report).

Research concerning the cost trend of a construction project was carried out in a constant cycle. It is long-lasting and cannot be accelerated or repeated. Therefore, it constitutes value in itself and has an original and authorial character (it is not survey research). Each time, at each construction site, the actual (percentage) advancement of the work performed was measured by the Bank Investment Supervision. It involved measuring the amounts of the performed construction works (including earthworks, foundations, floors on the ground, the reinforced concrete structure, the steel structure, the roof, facades, finishing works, land development, etc.). An example structure of the work progress report for one of the analyzed construction projects is presented in Table 3.

The measurements were made on the basis of acceptance reports for the completed works (work progress reports), which confirm the quantitative performance of the works and are made for the purpose of monthly settlements of remuneration between the investor and the contractor. This means that under one cost value, which characterizes one measurement for a single construction project, there are from several dozen to several hundred measurements of work progress for individual types of work, which in turn leads to several thousand measurements in the entire research sample.

Obtaining reliable research material for the research (in the form of preliminary, monthly, and final reports) is a long-term and labor-intensive process. The selection of the construction projects for the research was independent and resulted directly from the orders for providing the services of the BIS.
In the research, the characteristic features of homogeneous construction groups/sectors were important, and therefore the selection of the research sample was purposeful. Based on the collected data, it was feasible to obtain typological research samples consisting of construction projects with similar profiles and categories of construction objects. This approach facilitated the assembly of a typologically representative research sample.

Research Methodology

In order for the obtained results to be reliable and for the analyses and decisions made on their basis to be correct, the research should be comprehensive and methodical. A methodology for conducting comprehensive research was developed, verified, expanded, and improved. The research methodology consists of seven stages and is presented in Figure 2:
• Stage 1: Obtaining data on construction projects;
• Stage 2: Development of the knowledge base;
• Stage 3: Processing of collected data;
• Stage 4: Graphical representation of the processed collected data;
• Stage 5: Determination of best-fit curves;
• Stage 6: Determination of the area of cost curves;
• Stage 7: Designation of procedure scenarios.

The research methodology allows for the examination of the shape and trajectory of the cost curve during the implementation of investment projects. This is achieved through the cyclical calculation of cost and schedule deviations, as well as performance indicators, cost forecasts, and project duration. The research focused on the cost of construction works, which constitutes 75-85% of the budget of the construction project (commonly called the "hard cost").

Development of a Knowledge Base

In stage 2, based on the analysis of the reports, a knowledge base was developed (a summary of data in Microsoft Excel) characterizing the individual construction projects. The data for each project are organized in a two-dimensional table. Each row in the table represents data for subsequent reporting periods. Each data set includes the following values: the planned cost of the construction project, the cumulative value of the planned cost, the planned percentage of the progress of work, the earned cost of the construction project, the cumulative value of the earned cost, the percentage of the progress of work performed, the incurred cost of the construction project, the cumulative value of the incurred cost, and the percentage of the progress of invoiced work.
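The per-project layout described above can be mirrored in a small table; the sketch below uses pandas with hypothetical column names and invented values, only to make the structure of one reporting row explicit (the actual knowledge base is an Excel workbook):

```python
import pandas as pd

# One row per reporting period of a single project; the column names
# mirror the fields listed above and are our assumption, not the
# authors' file layout. Values are invented for illustration.
report = pd.DataFrame({
    "period":        [1, 2, 3],
    "planned_cost":  [1.2e6, 2.1e6, 2.5e6],
    "planned_cum":   [1.2e6, 3.3e6, 5.8e6],
    "planned_pct":   [0.09, 0.25, 0.44],
    "earned_cost":   [1.0e6, 1.9e6, 2.6e6],
    "earned_cum":    [1.0e6, 2.9e6, 5.5e6],
    "earned_pct":    [0.08, 0.22, 0.42],
    "incurred_cost": [0.9e6, 1.8e6, 2.4e6],
    "incurred_cum":  [0.9e6, 2.7e6, 5.1e6],
    "incurred_pct":  [0.07, 0.21, 0.39],
})
```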
The research assumed that the cost of a construction project is the sum of the financial outlays allocated to the implementation of construction works, in particular:
• The budgeted cost of the construction project (the cost of construction works planned before the commencement of the investment task);
• The earned cost of the construction project (the cost of actually performed construction works);
• The incurred cost of the construction project (the cost of paid construction works).

Table 4 presents a fragment of the summary of the knowledge base for apartment houses (group A). Under the main headings in Table 4, the column number (1-7) has been added as an auxiliary number, along with the determination of the relationship between the data contained in the table (for columns 4 and 7). In the knowledge base, and in the table in column 1, the method of coding construction projects adopted in the research is presented: X.Y, where X is the symbol of a category/group of building objects (A-I), according to Table 1, and Y is the number of the analyzed construction project (1-14), according to Table 1.

The Processing of Collected Data

The data collected in the knowledge base describe individual construction projects, each with a unique duration and implementation cost. Therefore, to enable meaningful comparison across different construction projects, the collected data were normalized in stage 3. For each analyzed construction project, processed data (a fragment is presented in Table 6) were determined based on the primary data (a fragment is presented in Table 5). Normalization consisted in determining the unitary value of cost and time for each individual examined period, assuming that for each construction project, regardless of the number of settlement periods, the total planned duration is 1.0 and the total planned budget is 1.0 (a minimal code sketch of this normalization is given at the end of this section). Tables 5 and 6 present a fragment of the knowledge base.

The Course of Cumulative Cost Curves (CCCC)

As part of the research concerning the implementation of the selected construction projects, an analysis and comparative appraisal of the planned, incurred, and completed schedules and costs were carried out.

For the developed data characterizing the analyzed construction projects, full modeling of the planned, earned, and incurred cost curves was carried out. After that, charts of the planned, earned, and incurred cost values were developed for the surveyed typological construction groups/sectors. The charts were prepared in homogeneous groups and also in a diversified group (which consisted of all the analyzed construction projects). An assessment of the actual earned cost was carried out for the various construction projects. Figure 3 shows the course of the planned cumulative cost curves for the construction projects from group/construction sector A (apartment houses), while Figure 4 shows the course of the incurred cost curves for the same analyzed group of projects.

The research revealed that there is a certain level of similarity in the cumulative cost curves within the various analyzed groups/sectors of the construction industry, but there is no similarity in the cost curves across the entire set of analyzed construction projects. To confirm this conclusion, Figure 5 shows the course of the planned cumulative cost curves for construction projects from sector/group C (hotel buildings), which have visibly different shapes than the ones for group A (presented in Figure 3).
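The normalization step referenced above can be sketched as follows, assuming the hypothetical column names from the earlier DataFrame example (this is our illustration, not the authors' Excel workflow):

```python
import pandas as pd


def normalize_project(report: pd.DataFrame) -> pd.DataFrame:
    """Scale one project's table so that the total planned duration
    and the total planned budget both equal 1.0 (unit normalization)."""
    out = pd.DataFrame()
    out["t"] = report["period"] / report["period"].max()
    total_planned = report["planned_cum"].iloc[-1]   # total planned budget
    for col in ("planned_cum", "earned_cum", "incurred_cum"):
        out[col] = report[col] / total_planned
    return out
```

After this step, curves from projects of different sizes and durations all live on the unit square and can be overlaid and compared directly, as in Figures 3-5.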
Determining the Area of Cumulative Cost Curves

Using the determined cost curves, the area in which the analyzed construction projects were located was determined. While a sixth-degree polynomial allows for a high correlation coefficient, it is overly complex for practical decision-makers such as investors and contractors. Therefore, further scientific research aimed at identifying simpler mathematical models or formulas was pursued to accurately represent the cumulative cost curve. It was therefore considered reasonable to use a third-degree polynomial in order to describe the best fit of the cost curve.

The shape of the cumulative cost curve was analyzed, noting its initial and final flat segments during the construction project. This is attributed to the gradual commencement and completion of the project. Initially, activities involve organizing human resources, finalizing contracts with contractors and subcontractors, preparing the construction site, and conducting basic preparatory work. As time progresses, activities accelerate, as reflected in the curve's shape. Multiple workstations operate concurrently with specialized work brigades, leading to increased construction activity and costs. This phase contrasts sharply with the slower initial and final stages of implementation.

Based on the analysis of both the literature on this subject and the shape of the cost curves for the construction projects collected in the knowledge base, an attempt was made to best fit the cost curve using a third-degree polynomial. An S-shaped cost curve can be mathematically described by two convexities and one inflection point (x_0).
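For a cubic fit c(t) = at³ + bt² + ct + d, the inflection point has the closed form x₀ = −b/(3a), since c''(x₀) = 6ax₀ + 2b = 0. The sketch below (reusing the invented data from the earlier fitting snippet) computes it:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 13)
cost = np.array([0.00, 0.02, 0.06, 0.13, 0.24, 0.38, 0.52,
                 0.66, 0.78, 0.87, 0.94, 0.98, 1.00])

a, b, c, d = np.polyfit(t, cost, 3)  # cubic best fit, highest degree first
x0 = -b / (3.0 * a)                  # c''(x0) = 6*a*x0 + 2*b = 0
print(f"inflection point x0 = {x0:.3f}")
```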
The cost curve in the initial phase of construction (the first phase) exhibits convexity, geometrically meaning that the curve lies above its tangent at each point within the interval [0, x_0). As construction progresses and activities intensify over time, the cost curve steeply inclines relative to the time axis in the middle phase. The cost curve reaches an inflection point (x_0), signaling the transition to the second phase of implementation, where the rate of cost increase begins to decelerate. In the second phase of construction, the cost curve becomes concave; geometrically, this indicates that the curve lies below its tangent at each point within the interval (x_0, 1]. This description of the cost curve's trajectory supports the use of a third-degree polynomial to accurately predict its shape and behavior, as presented in Table 8.

Development of the Three Sigma Rule

When implementing construction projects, it is crucial for decision-makers to make informed decisions in response to anomalies or changes that may occur at various stages of the investment implementation. For example, a construction manager, depending on his role, when preparing to implement an investment, determines certain parameters that characterize the investment project. When planning a construction project, the investor determines the available investment budget and the completion date. In turn, the contractor develops a material and financial schedule to estimate the cost of the construction works and subsequently determines the required time for their completion.

The best-fit curves and designated areas of the cost curves, which were obtained as a result of the research, help decision-makers plan the course of the construction project and, at the same time, take into account the investment's budget and its duration. Additionally, using the proposed cost curve areas, it is possible to monitor the progress of the construction project and respond appropriately to emerging situations. Depending on the moment in time at which the inspection of the project is carried out, it is possible to quickly estimate deviations from the planned values in terms of cost and time. The determined polynomial functions and graphs of the real areas of cost curves (in the form of nomograms) constitute a reliable graphic representation that is useful for the simple application of the research results in typologically similar construction sectors.

In order to monitor and control the progress of a construction project, it was proposed, in accordance with the Three Sigma Rule, to divide the area of the cost curves into three ranges that correspond with three scenarios. The area was divided according to this rule due to the fact that such a division is used (with great effectiveness) as a warning system about danger, abnormal behavior, or something unusual. For this purpose, a model with specific parameters (a "warning system for irregularities") was developed. Therefore, the cost curve area was divided into the following three areas:
• The range within [−σ, σ], identified as the acceptable range (green in Figure 6);
• The range within [−2σ, −σ) ∪ (σ, 2σ], identified as the tolerable range (orange in Figure 6);
• The range within [−3σ, −2σ) ∪ (2σ, 3σ], identified as the unacceptable range (red in Figure 6).
Monitoring is conducted based on the material and financial schedule planned by the decision-maker, specifically by verifying against the budgeted cost of work scheduled (BCWS) curve. When evaluating the status of a construction project, three scenarios may arise, each with corresponding recommendations:
• Scenario 1: The analyzed value falls within the acceptable range (green area). This indicates that the project is progressing as planned with minor deviations, and ongoing monitoring relative to the BCWS curve suffices.
• Scenario 2: The analyzed value falls within the tolerable range (orange area). This indicates deviations that could impact the budget and the project completion date. It is advisable to compare the cost curve with the reference ACWP curve.
• Scenario 3: The analyzed value falls within the unacceptable range (red area). This indicates significant deviations that could significantly increase costs and extend the project completion time. In such a situation, it is crucial to compare the cost curve with the reference ACWP curve and develop corrective actions accordingly.

Figure 6 shows an example area (nomogram) of cumulative cost curves.
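The three-sigma bands and the associated recommendations translate directly into a small decision helper; the sketch below (the function name and example numbers are ours) classifies a deviation from the best-fit curve against a given σ:

```python
def scenario(deviation: float, sigma: float) -> str:
    """Classify a cost/time deviation per the three-sigma ranges above;
    a direct sketch of the 'irregularity warning' bands."""
    r = abs(deviation) / sigma
    if r <= 1.0:
        return "acceptable (green): keep monitoring against BCWS"
    if r <= 2.0:
        return "tolerable (orange): compare against the reference ACWP curve"
    return "unacceptable (red): compare against ACWP and plan corrective actions"

print(scenario(deviation=0.07, sigma=0.05))  # -> tolerable (orange)
```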
Conclusions

A manager of a construction project faces a fundamental managerial challenge, which consists of managing the course of the accumulated investment cost in such a way that its actual value (the actually performed and paid-for construction works) is as close as possible (with as little deviation as possible) to the planned value. Therefore, forecasting the actual course of the implementation cost is of great importance in the management of construction projects and enables the predictable maintenance of planned investment budgets in a controlled manner and as close to the actual state as possible.

On the basis of the analysis of the developed knowledge base, containing data from 612 reports of the Bank Investment Supervision on 45 construction projects from 2006 to 2023 with a total value of over PLN 1,300,000,000, the best-fit curve was determined, and the predicted area of the accumulated cost in selected construction projects belonging to different groups/sectors of the construction industry was determined. The determined polynomial functions and graphs of the areas of real cost curves, in the form of nomograms, constitute a reliable graphical representation enabling the application of the research results in typologically similar groups/sectors of construction.

The conclusions from the research, applied methods, and developed modeling are as follows:
a. Research related to the analysis of the cumulative cost curve, with the potential to forecast costs and their exceedances, was carried out.
b. On the basis of the collected reports of the Bank Investment Supervision, a representative set of data was created to conduct research on the development of an original method for forecasting the best match of cost curves and the cost area in selected construction projects.
c. A model was developed, and the courses of the planned, earned, and incurred cost curves were determined for the selected construction projects collected in the developed knowledge base.
d. A methodology of cost curve research has been proposed by combining two methods used so far for the control and monitoring of construction projects (the cumulative cost curve and the earned value method) into one original method of forecasting the best fit of the cost curve and the cost area in selected construction projects.
e. It has been shown that the shape of the cumulative cost curve within a homogeneous group/sector of construction is similar, but when comparing curves between different groups of investment projects, a large diversity is visible.
f. A research model was developed, and its parameters were given, in order to elaborate the best fit of the cost curve based on the course of the planned and actual cost of selected construction projects in the form of a third-degree polynomial function.
g. The area of the best matching of the course of cost curves for the planning and monitoring of costs in various construction projects has been proposed.
h. A model with specific parameters of the "irregularity alert" system, based on the area of cost curves and the three sigma rule, was developed.

Discussion and Summary

The verification of the models was carried out, and is still being carried out, by the main recipient and user of the proposed solutions, i.e., banks. Banks granting investment loans for construction projects are interested in the research results prepared by the authors, and they implement them into everyday practice.
The research results were presented and discussed with a leading bank in Poland during a seminar/training entitled "Variability of the trend and size of deviations of the planned and incurred costs in various investment tasks". The seminar was conducted for over 45 employees. The participants of the training were employees of, among others, the Real Estate Valuation and Analysis Office, the Risk Department, the Risk Management Division, and also the Investment Banking and Real Estate Financing Department.
The training was a response to the needs of the financial market, which has a problem with unrealistic budget reserves being accepted by investors when granting investment loans. The correct financing of an investment depends on how the bank, but also the auditor (the Bank Investment Supervision), assesses the construction project. Banks try to adjust the budget of a construction project and take into account the actual cost by adopting differentiated budget reserves depending on construction groups/sectors and by reducing the budget failure rate from 0.8 (given by PMI) to a value corresponding with the adjusted budget reserve.
The research results presented in this paper do not cover all the problems related to the modeling of construction projects. Within the scope of the discussed issue, the subject is developing and should be continued, e.g., by examining the possibility of using other methods, such as artificial intelligence methods, to predict cost curves and their areas in various construction projects. Moreover, it can be used to examine the relationship between the amounts of the performed construction works and the deviation of the actual cost from the budgeted one.
It is also advisable to further update the knowledge base with new construction projects, because with an increasing number of analyzed construction projects, it will be possible to iteratively narrow the area of actual costs in various construction projects (shown in Figure 7). It will then be possible to provide banks with new, practical results in the form of, e.g., correction factors that will bring the budgeted cost closer to the actual cost.
The presented approach is the result of many years of research on the methodology and tools for modeling construction projects. As a result, a method was developed that allows the course of the actual implementation cost to be forecasted, the best fit of the cost curve to be determined, and the area of the correct cost planning for selected construction projects to be specified. The method was developed using a reliable knowledge base that contains archival information on various construction projects.
The research extends the previously applied approach that uses the earned value method and aims to propose a comprehensive approach to forecasting cost curves. The developed model allows the decision-maker to receive an early warning about the possibility of the occurrence of cost overruns. By developing the original method, two previously used methods for controlling and monitoring construction projects were combined (the cumulative cost curve method, i.e., the "S" curve method, and the earned value method) into one original course of cumulative cost curve (CCCC) method: the best fit of the cost curves and the area of the curves in selected construction projects. Monitoring that is carried out in accordance with the elaborated model, which has specific parameters ("an irregularity warning system"), allows for the effective cost management of a construction project and also reduces the possibility of cost overruns.
• Stage 1: Obtaining data on construction projects;
• Stage 2: Development of the knowledge base;
• Stage 3: Processing of collected data;
• Stage 4: Graphical representation of the processed collected data;
• Stage 5: Determination of best-fit curves;
• Stage 6: Determination of the area of cost curves;
• Stage 7: Designation of procedure scenarios.
Figure 2. Flowchart of the research methodology.
Figure 3. The planned CCCC of construction group/sector A-apartment houses.
Figure 4. The incurred CCCC of construction group/sector A-apartment houses.
Figure 5. The planned CCCC of construction group/sector C-hotel buildings.
Figure 6. Example of the best-fit curve and the area of cumulative cost curves (nomogram).
Figure 7. Iterative narrowing of the real cost area in various construction projects.
Table 1. Summary of the number of analyzed construction projects and reports.
Table 2. Number of cost monitoring reports in BIS reports.
Table 3. A sample structure of the work progress report.
Table 4. Fragment of the knowledge base.
Table 5. Primary data-a fragment of the knowledge base.
Table 6. Processed data-a fragment of the knowledge base.
Table 8. Best-fit curves-some of the results.
2024-06-29T15:15:52.762Z
2024-06-27T00:00:00.000
{ "year": 2024, "sha1": "75cde2ea4eca151061b1819cba6153fd5ea00937", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/app14135597", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "8181341840a1930f220eec7598ca7b7e1d63342f", "s2fieldsofstudy": [ "Engineering", "Business" ], "extfieldsofstudy": [] }
213893599
pes2o/s2orc
v3-fos-license
Intelligent spraying installation for dust control in mine workings The design solution of an intelligent spraying installation for dust control in mine workings, developed within the ROCD project, is presented. Current solutions for the reduction of airborne dust in the underground mining industry are described. The design as well as the principles of operation of the new solution, based on adaptive operation, i.e., the adaptation of the spraying intensity to the measured dust concentration, are discussed. In the conclusions, the directions of further research in the project are pointed out. Introduction Current requirements regarding the application of dust control devices in the hard coal mining industry are focused on the source of dust generation, i.e., mining coal with mining machines and crushing and transferring the run-of-mine on conveyor belts. Unfortunately, this local method of dust reduction does not eliminate the generated dust completely, and the dust moves with the mine air stream along the mine workings. As a result, miners are exposed to the negative effects of dust [1,2]. The presence of dust in mines, in addition to the explosion hazard, is also the cause of the development of pneumoconiosis, which is the most common occupational disease of miners in hard coal mines. Pneumoconiosis is manifested by chronic bronchitis and emphysema, and sometimes heart failure and hypertrophy, and the effects of dust on the workers' organs become visible only after several years of work (Figure 1) [3]. The suggested concept of an intelligent air-water device, intended to prevent the propagation of fine dust particles (i.e., PM2.5 and PM10) and suitable for use in potentially explosive atmospheres, is a response to the demand for dust reduction in roadways, where dust is carried with the air stream. Dust precipitation with the use of water drops and its deposition on the floor will be the main task of the installation. The idea of effective dust elimination by the developed device is based on the generation of a spraying stream of water drops with diameters close to the size of the dust particles, while maintaining a high ejection energy from the nozzle. The solution for producing water drops with diameters close to PM10 and PM2.5 uses compressed air. Producing spraying streams with small water droplets increases the efficiency of dust particle elimination by combining the droplets and particles together, leading to their precipitation. The neutralization process (Figure 2) can take place as a result of combining several transportation mechanisms, the main ones being the inertial, hook and diffusion mechanisms and, in the case of an electric potential difference, the electrostatic mechanism [4]. The mechanism of inertial deposition is based on the impact of a solid particle on the surface of a water drop; the hook mechanism, on its tangential contact; the diffusion mechanism, on the collision of a solid particle with a water drop as a result of random movements, regardless of the stream line; and the electrostatic mechanism, on the attraction of oppositely electrically charged particles of water and dust. The spraying installation will be equipped with an intelligent control system which, based on the information received from the dust sensor about the actual dust concentration, will select an adequate intensity of the spraying stream, enabling the reduction of the dust concentration below the maximum allowable concentration.
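As a rough quantitative illustration of why droplet size and ejection energy matter for the inertial capture mechanism described above, the sketch below evaluates the Stokes number of a dust particle approaching a collector droplet; the particle density, droplet sizes, and relative speed are illustrative assumptions, not figures from the project.

```python
# Stokes number Stk = rho_p * d_p**2 * U / (18 * mu * D) for a particle of
# diameter d_p approaching a collector droplet of diameter D at relative
# speed U. Large Stk -> inertial impaction dominates; small Stk -> the
# particle follows the streamlines and other mechanisms must act.

MU_AIR = 1.8e-5    # dynamic viscosity of air, Pa*s
RHO_DUST = 1.4e3   # assumed coal-dust density, kg/m^3

def stokes_number(d_particle_m, d_droplet_m, rel_speed_ms):
    return RHO_DUST * d_particle_m**2 * rel_speed_ms / (18 * MU_AIR * d_droplet_m)

for d_p in (2.5e-6, 10e-6):           # PM2.5 and PM10
    for d_drop in (20e-6, 200e-6):    # fine vs. coarse droplets
        stk = stokes_number(d_p, d_drop, rel_speed_ms=10.0)
        print(f"d_p={d_p*1e6:.1f} um, droplet={d_drop*1e6:.0f} um -> Stk={stk:.2f}")
```

The trend, not the absolute numbers, is the point: for the same particle, a smaller droplet at high ejection speed yields a much larger Stokes number, which is consistent with the design choice of generating droplets close to the PM10/PM2.5 size range.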
Review of currently used solutions The operation of currently used dust control installations in roadways, especially in hard coal mines, is based either on sucking dust from these places [5] or on using water spraying devices [4]. Wet dust control devices are most commonly used to clean the air stream of dust particles. Their use allows not only capturing dust particles, but also neutralizing their explosive properties. These devices use spraying systems in which water is atomized in the air stream and the dust particles penetrate into the water droplets. Regardless of how the particles are mixed with water drops, the mixture of air, water drops and dust particles is swirled, while the drops, due to the centrifugal force, are directed towards the walls of the dust collector, where ribbed surfaces stop them and they flow to the water reservoir. The water circuit in the dust control device is schematically shown in Figure 3. Figure 3. Schematic diagram and principles of operation of the wet dust collector [5]. Dust control by use of a water spraying system is one of the most common and cheapest methods used in mines. The sprinkling technique consists in creating a barrier of water mist throughout the cross-section of the working, so that the dust carried with the air stream merges with the sprayed water drops, creating moistened agglomerates of dust particles devoid of volatile properties. There are many concepts and design solutions for spraying devices, both domestic and foreign, which are used to reduce dust in roadway workings. One of the first solutions to reduce dust with water and air was developed in the 1960s. This solution was intended for use in Soviet hard coal mines (Figures 4 and 5) [6]. The system consisted of several nozzles, to which air and water were supplied by means of ducts, and was located on a roadway support. This installation was supplied with water and air coming from the pipelines available in the roadway, and the direction of the nozzles was opposite to the direction of the roadway ventilation. Unfortunately, this solution has never been used in underground conditions. In 2010, a solution called the "mist system with net diaphragms", using an air-water spraying system to reduce the dust concentration in the roadway and developed by Telesto Sp. z o.o., appeared in Polish mines. The solution is also based on the use of water and compressed air to capture dust. The device consists of a spraying installation suspended in the roadway cross-section as well as moving grids catching water drops combined with dust, in the form of a labyrinth, placed in the area of the personnel [7]. The spraying system consists of air-water nozzles located on a frame consisting of pipes supplying water and air to them (Figure 6). Figure 6. Model of the spraying installation being a part of the mist system [7]. High demand for compressed air is the disadvantage of this solution. The installation, with a water flow of 9 dm³/min and a pressure of 0.6 MPa, needs about 5 m³/min of air, which is unobtainable in many mines, and the need for such an air volume raises many problems. The solutions developed at ITG KOMAG, called CZP BRYZA (Figure 7) and FOG, equipped with innovative mesh lattices that allow air flow while capturing wetted dust particles (Figure 8) [4,8], were the answer to the high consumption of compressed air in the spraying system.
In turn, the air-water nozzles used there make it possible to achieve high dust reduction efficiency at a low pressure of the spraying media of 0.3-0.5 MPa, with a low water consumption of 3.0 dm³/min and a compressed air consumption of 0.75÷1.5 m³/min. This property results from the application of special spray nozzle designs in which water is atomized already inside the nozzle, using a much smaller amount of compressed air than in the known process of external water atomization, ultimately providing a several times lower air flow. Air-water roadway spraying devices are solutions that significantly contribute to the improvement of safety and comfort of work, and they improve the environmental working conditions in roadway workings. They cause a significant reduction in the risk of coal dust explosion. Their important limitation is a low efficiency in reducing PM10 dust, which is the fraction most dangerous to the human body. Underground tests of airborne dust concentration carried out within the project showed that the effectiveness of the currently used spraying devices does not exceed 33% (Figure 9). These solutions often cause problems with the free passage of personnel and are characterized by a high water consumption resulting from their continuous operation, which increases the cost of using such solutions and adversely affects the work comfort of miners. The newly developed concept of an intelligent spraying installation to be used in roadways, whose main task is to reduce the airborne dust concentration, is the answer to the above limitations. Such an installation should ensure low water and air consumption and high efficiency of capturing even the smallest dust fractions. Concept of intelligent spraying installation The solution of an intelligent spraying installation for controlling the airborne dust concentration in underground mine workings was developed within the ROCD project. The advantage of the solution is an adaptive spraying system which adapts its output to the dust concentration measured by the EMIDUST dust meter. The developed spraying installation will use water and compressed air to generate water drops of a size close to the size of PM10 dust. The intelligent spraying installation (Figure 10) will have a dozen or so spraying units installed on the roadway support circumference, fed from the water and compressed air filtration-and-distribution units. The operation of the spraying units will be controlled by a controller, which will decide about the spraying duration and intensity depending on the PM10 and PM2.5 dust particle concentrations measured by a dust meter, in order to reduce the airborne dust concentration below the MAC. Water in the water preparation and distribution unit will be divided into three stabilizing-and-supplying lines in which the water pressure will be reduced to set values (three different values: Vr1 = 0.1 MPa, Vr2 = 0.2 MPa and Vr3 = 0.3 MPa). Water will flow through each spraying line after opening one of three controlled check valves, directing the water stream to the hose supplying the spraying units. By analogy, the compressed air in the preparation and distribution unit will be divided into three supplying lines in which the air pressure will be reduced by the check valves to set values (initial settings: Vr1 = 0.2 MPa, Vr2 = 0.3 MPa and Vr3 = 0.4 MPa), and then the air will be fed to the hoses supplying the spraying units through the controlled check valves (ZZS). The pressure in the water and compressed air lines will be controlled by pressure transducers.
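The three water lines and three air lines described above give nine possible operating combinations. The sketch below shows one hypothetical way a controller could map a measured PM10/PM2.5 reading to a line pair; the line pressures are the set values quoted in the text, but the MAC value, the thresholds, and the mapping itself are illustrative assumptions, not the algorithm implemented in the ROCD prototype.

```python
WATER_LINES = {1: 0.1, 2: 0.2, 3: 0.3}  # MPa, set values from the text
AIR_LINES = {1: 0.2, 2: 0.3, 3: 0.4}    # MPa, initial settings from the text

MAC_MG_M3 = 4.0  # assumed maximum allowable concentration, for illustration only

def select_lines(pm10_mg_m3: float, pm2_5_mg_m3: float):
    """Pick a (water line, air line) pressure pair for the ZZS valves.
    Hypothetical rule: the further the reading is above the MAC,
    the higher the supply pressures and hence the spraying intensity."""
    worst = max(pm10_mg_m3, pm2_5_mg_m3)
    if worst <= MAC_MG_M3:
        return None                     # below MAC: spraying units stay off
    ratio = worst / MAC_MG_M3
    level = 1 if ratio < 1.5 else 2 if ratio < 2.5 else 3
    return WATER_LINES[level], AIR_LINES[level]

print(select_lines(pm10_mg_m3=7.2, pm2_5_mg_m3=3.1))  # -> (0.2, 0.3)
```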
The control of each ZZS controlled check valve is realized by double pre-control valves made by Marco GmbH, installed on the distribution board (Figure 13), and it takes place in pairs (activation of one water ZZS check valve and one compressed air ZZS check valve at the same time), so that water and compressed air are supplied to the spraying units. The electric equipment (Figure 14) of the intelligent spraying installation is responsible for the proper operation of the spraying system. Its main subassembly is the intrinsically safe MDJ controller, responsible for collecting information from the EMIDUST EMAG optical dust meter and from the pressure transducers of water and compressed air, and then deciding about the activation of the double pre-control valves responsible for the flow of water to the spraying units. The controller is powered from the ZIS intrinsically safe power feeder. The spraying unit consists of a two-media nozzle, to which the connections of the spraying media, i.e. water and compressed air, are mounted. The nozzle is mounted on a special articulating arm that allows the flow direction to be adjusted to the current needs of the roadway. The design of the mountings makes it possible to adapt them to various sizes of roadway arch support frames, so the spraying nozzles can be placed in mine workings of different cross-section shapes. The advantage of the two-media nozzle is the wide fractional range of produced water drops, depending on the parameters of the supplied water and compressed air. Such nozzles can operate in the range of low water and compressed air pressures (0.05÷0.5 MPa), with low water consumption. The estimated minimum water consumption is 5 dm³/h. Principle of operation The developed intelligent spraying system, after its installation in the roadway, measures PM10 and PM2.5 dust using the EMIDUST ITI EMAG dust meter. Based on the signals from the dust meter, the pressure parameters for water and compressed air are selected (one of the nine combinations of settings of the water and compressed air parameters), enabling the highest dust control efficiency and a concentration below the MAC. The hydraulic-pneumatic-electric diagram is shown in Figure 16. Water and compressed air will be supplied from the pipelines to the water and compressed air preparation and distribution units. First they will be filtered and then distributed into three independent lines controlled by the ZZS check valves. Each water line will have a different setting within the range 0.1÷0.3 MPa, and each compressed air line will have a different setting within the range 0.2÷0.4 MPa. After receiving information from the EMIDUST dust meter, the control unit will select the parameters of water and compressed air, and then will open the water and compressed air valves (Figure 17). Proper selection of the pressures of water and compressed air supplied to the spraying units will enable the generation of drops with such fractional distributions as to reduce the identified dust concentration below the MAC most effectively. Conclusions The intelligent spraying installation developed within the ROCD project is a response to the problem of airborne dust, especially the PM10 and PM2.5 fractions, occurring in underground hard coal mines. Its task is to eliminate the unfavourable features of the spraying installations currently used for dust control, by applying adaptive actions consisting in adjusting the operational parameters and drop sizes to the dust concentration measured online.
This will allow controlling the amount of water, adjusting it to the actual level of dust concentration. The installation consists of preparation and distribution units for water and compressed air. After passing through them, the media are distributed to the individual spraying lines, each with its own water and compressed air settings. The flow of the media to each line of the spraying installation is realized by the controlled check valves. Water and compressed air with the set parameters are directed to the spraying units installed on the support arches, where water mist streams with the most favourable parameters are generated, with drop sizes adequate to the measured dust concentration (Figure 18). The design of the intelligent spraying installation, which will enable the construction of a prototype device, will be based on the developed concept. Further stand tests are planned, aiming at the determination of the dust control efficiency depending on the generated water drop parameters [9], which will allow the development of the installation operation algorithm.
2020-01-02T21:46:48.701Z
2019-12-20T00:00:00.000
{ "year": 2019, "sha1": "049bb942fd4c0ec4b145548f53d9606b7ce69426", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/679/1/012019", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "dcb3a3e7b8456b8f5a75eac0c9a33d51be3f9d98", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Environmental Science" ] }
119679718
pes2o/s2orc
v3-fos-license
Dynamic SPECT reconstruction with temporal edge correlation In dynamic imaging, a key challenge is to reconstruct image sequences with high temporal resolution from strongly undersampled projections due to a relatively slow data acquisition speed. In this paper, we propose a variational model using the infimal convolution of Bregman distances with respect to the total variation to model the edge dependence of sequential frames. The proposed model is solved via an alternating iterative scheme, in which each subproblem is convex and can be solved by existing algorithms. The proposed model is formulated under both Gaussian and Poisson noise assumptions, and simulations on two sets of dynamic images show the advantage of the proposed method compared to previous methods. Introduction Single photon emission computed tomography (SPECT) and positron emission tomography (PET) [1][2][3][4] are nuclear medical imaging modalities that detect trace concentrations of a radioactively labeled pharmaceutical injected into the body within chosen volumes of interest. After an isotope tagged to a biochemical compound is injected into a patient's vein, the biochemical compound travels to body organs (liver, kidney, brain, heart and the peripheral vascular system) through the blood stream, and is absorbed by these organs according to their affinity for the particular compound [5,6]. The SPECT system, usually consisting of one, two or three detector head(s) [7][8][9], can record the radiopharmaceutical exchange between biological compartments and the isotope decay in the patient's body as the detector(s) rotate around the body. As very few views can be obtained in one time interval, dynamic SPECT reconstruction is an ill-posed inverse problem with incomplete noisy data. Assuming that motion and deformation are negligible during the data acquisition procedure, we aim to reconstruct the dynamic radioisotope distribution with high temporal resolution. In fact, it is crucial to extract the actual decay of the isotope, i.e. the time activity curves (TACs) of the different organ compartments [10,11], either from projections or from reconstructed images [5,[11][12][13][14][15][16][17]. Besides methods that reconstruct each frame of the dynamic sequence independently [18,19], many approaches [7,[20][21][22] have been proposed to monitor the tracer concentrations over time, under the assumption of a static radioactivity concentration during the acquisition period. However, as the measuring procedure usually takes a considerable amount of time, physiological processes in the body are dynamic and some organs (kidney, heart) show a significant change of activity. Hence, ignoring the dynamics of the radioisotopes over the acquisition and applying a conventional tomographic reconstruction method (such as the filtered back projection (FBP) method) yields inaccurate reconstructions with serious artifacts. Joint reconstruction approaches for dynamic imaging were proposed in [23,24], where the dynamic images at different times are treated collectively through motion compensation and temporal basis functions. Then, the dynamic processes of blood flow, tissue perfusion and metabolism can be described by tracer kinetic modeling [5]. For example, a spatial segmentation and temporal B-splines were combined to reconstruct spatiotemporal distributions from projection data [?, 14]. In a different form, dynamic SPECT images are represented by low-rank factorization models in [25,26], with further constraints enforced on the representation coefficients and basis.
In some recent work, the motion of the organs has been taken into account for dynamic CT reconstruction [27][28][29][30][31][32][33]. In this paper, we propose a new variational model in which we take local and global coherence in the temporal-spatial domain into consideration. The key idea is that dynamic image sequences possess similar structures of radioactivity concentration. In fact, the boundaries of the organs, which are the locations with large gradients, are preserved or change mildly over time. Inspired by color Bregman TV for color image reconstruction [34] and PET-MRI joint reconstruction [35], we introduce the infimal convolution of Bregman distances with respect to the total variation as a regularization, to capture the dependence between the edges of sequential images. Furthermore, based on our previous work [25,26], the low-rank matrix approximation U = αB^T is demonstrated to be a robust representation for dynamic images, especially with proper regularization on the coefficients α and the basis B, which corresponds to the TACs of the different compartments. In particular, group sparsity on the coefficients α is enforced, as the concentration distribution at each voxel is mixed from a few basis elements. Finally, the proposed variational model is composed of a data fidelity term and several regularization terms, to overcome the incompleteness and ill-posedness of the reconstruction problem. The proposed model is solved alternatingly, and each subproblem can be solved with popular operator splitting methods with ease. In particular, the primal-dual hybrid gradient (PDHG) algorithm [36][37][38] is applied to solve the subproblem for the images U, and proximal forward-backward splitting (PFBS) is applied for the coefficients α and the basis B. Our numerical experiments on simulated phantoms show the feasibility of the proposed model for reconstruction from highly undersampled data with noise, compared to conventional methods such as the FBP method and least squares methods (or EM). Monte Carlo simulation of noisy data is also performed to demonstrate the robustness of the proposed model on two phantom images. The paper is organized as follows: Section 2 presents the proposed model, where Section 2.2 briefly introduces the concept of the infimal convolution of Bregman distances and models the edge alignment of images by infimal convolution, and Section 2.3 models the spatiotemporal properties of the dynamic images. Section 3 describes the numerical algorithm. Finally, Section 4 demonstrates numerical results on simulated dynamic images, in the settings of Gaussian noise, Poisson noise and Monte Carlo simulation. Model The regularization of our proposed variational model is based on some properties of dynamic SPECT images. First of all, dynamic image sequences originate from the radioactivity concentration of a few compartments in the field of view. Thus we can naturally use a low-rank matrix factorization representation for the image, with the basis related to the time activity curves, which are usually smooth, and the coefficients related to the locations of the organs, which are piecewise constant. For the scenario of large motion, such as the case of cardiac and respiratory dynamic imaging, a proper modeling of motion correction is necessary for a reconstruction with high accuracy. In this paper, we assume that the body movement is minor, which means the boundaries of the organs in the image sequence are almost static, and we aim to capture the activity decay in each region.
One can incorporate a registration or motion correction process in the model for the extension to the case of large motion. In other words, the edges of the image sequence share similar locations. We will then use the tool of the infimal convolution of Bregman distances with respect to the total variation to enforce edge alignment. In the following, we present the regularization terms in detail. Observation model In dynamic SPECT, the goal is to reconstruct a spatiotemporal radioisotope distribution u_t(x) for x ∈ Ω ⊂ R^2 in a given time interval. Given a sequence of projection data f_1, f_2, ..., f_T with different view angles, we aim to reconstruct the samples of the continuous image u_t(x) at the t-th time interval, i.e. the image sequence u_1(x), u_2(x), ..., u_T(x). If we denote by A_1, A_2, ..., A_T the T corresponding projection matrices, the observation model can be described as f_t = A_t u_t for t = 1, ..., T. For ease of notation, we present the discrete form of u_t(x) with M pixels/voxels in each frame, and the dynamic image is represented as U ∈ R^{M×T}. The sequence of projections is formulated in the linear form AU = f, where AU = (A_1 u_1, A_2 u_2, ..., A_T u_T) and f = (f_1, f_2, ..., f_T). In practice, the observed projection data inevitably come with noise. If white Gaussian noise is considered, the negative log-likelihood functional leads to H(f, AU) = (1/2)||AU − f||_F^2 (4). Similarly, if Poisson noise is considered, this term can be replaced by the negative log-likelihood, i.e. the Kullback-Leibler divergence D_KL(f, AU). Edge alignments For any two nonzero vectors p, q ∈ R^2, we first define a relative distance measure d(p, q) = 1 − ⟨p, q⟩/(||p|| ||q||), where ⟨·, ·⟩ is the standard dot product and ||p|| denotes the Euclidean norm. It is easy to see that d(p, q) = 0 if and only if p and q are parallel and point in the same direction. In order to avoid penalizing the opposite direction, one can define a "symmetric" distance as d_s(p, q) = 1 − |⟨p, q⟩|/(||p|| ||q||) (6). For two given images u and v, we are interested in measuring the degree of parallelism of the gradients at each pixel as a correlation criterion between the two images. We first consider the Bregman distance with respect to the total variation of the two images (in a continuous setting, assuming that ∇u/|∇u| and ∇v/|∇v| are well defined a.e.): D_TV^p(u, v) = ∫_Ω (|∇u| − ⟨∇u, ∇v/|∇v|⟩) dx, where p = ∇^T(∇v/|∇v|) ∈ ∂J_TV(v), and the Bregman distance of a convex and nonnegative functional J is defined as D_J^p(u, v) = J(u) − J(v) − ⟨p, u − v⟩ with p ∈ ∂J(v). We can see that there is no penalty for aligned image gradients: if the angle between ∇u/|∇u| and ∇v/|∇v| is zero, the integrand vanishes, independent of the magnitude of the jump in u and v. To obtain a symmetric distance as in (6), the infimal convolution of Bregman distances from [34] is introduced to gain independence of the direction of the gradient vector. The infimal convolution [39] of two convex functions I, J : X → (−∞, ∞] is defined as (I □ J)(x) = inf_y { I(y) + J(x − y) }. For example, if p, q ∈ R^n, the infimal convolution between the ℓ1 Bregman distances taken with respect to q and −q vanishes when the support of p is contained in the support of q, no matter the sign of p(i) and q(i). Inspired by this, a regularization in the form of the infimal convolution of Bregman distances with respect to the total variation is considered as a measure of the parallelism of the edges of two images: ICB_TV(u, v) = inf_{u = u_1 + u_2} { D_TV^p(u_1, v) + D_TV^{−p}(u_2, −v) }. This formulation was originally proposed as color Bregman total variation in [34] to couple the different channels of color images. Ehrhardt et al. [35,40,41] used the above derivation and the resulting measure for PET-MRI joint reconstruction. For a more rigorous definition in the space of functions of bounded variation and a geometric interpretation, one can refer to [34,35].
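As a small numerical illustration of the one-sided measure above, the sketch below evaluates D_TV(u, v) = Σ (|∇u| − ⟨∇u, ∇v/|∇v|⟩) on discrete images using forward differences; the discretization details (boundary handling, the smoothing epsilon) are our own choices, not taken from the paper.

```python
import numpy as np

def grad2d(u):
    """Forward-difference discrete gradient with Neumann boundary conditions."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def tv_bregman(u, v, eps=1e-8):
    """One-sided TV Bregman distance: sum(|grad u| - <grad u, grad v/|grad v|>).
    Small when the edges of u align (same location, same direction) with v."""
    ux, uy = grad2d(u)
    vx, vy = grad2d(v)
    nv = np.sqrt(vx**2 + vy**2) + eps  # eps avoids division by zero in flat regions
    return np.sum(np.sqrt(ux**2 + uy**2) - (ux * vx + uy * vy) / nv)

# Two frames sharing the same edge but with different intensities.
u = np.zeros((32, 32)); u[:, 16:] = 1.0
v = np.zeros((32, 32)); v[:, 16:] = 3.0
print(tv_bregman(u, v))           # near zero: edges aligned, jump size irrelevant
print(tv_bregman(u, v.T.copy()))  # much larger: edges misaligned
```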
For the dynamic SPECT images, we consider the infimal convolution of Bregman distances with respect to the total variation to enforce the alignment of the edge sets of sequential frames. Specifically, let u_i^n be the estimate of frame i at iteration n; we consider an average of the deviations of the next estimate from this image and from the other frames u_1^n, u_2^n, ..., u_{i−1}^n, u_{i+1}^n, ..., u_T^n. That is, R(u_i) = Σ_j w_{i,j} ICB_TV(u_i, u_j^n), where Σ_{j=1}^T w_{i,j} = 1. Low rank and sparse approximation The compartment model is often used to describe the concentration change of the tracer in the dynamic image [42]. It is assumed that transportation and mixing take place between the different physical compartments, such as organs and tissues. We impose a low-rank structure on the dynamic images by assuming that the unknown concentration distribution is a sparse linear combination of a few temporal basis functions which represent the TACs of the different compartments. In other words, it is assumed that the concentration distribution of the radioisotope u_t(x), for each pixel/voxel x ∈ Ω at time t, can be approximated as a linear combination of some basis TACs: u_t(x) = Σ_{k=1}^K α_k(x) B_k(t), where B_k(t) denotes the TAC of the k-th compartment at time t, and α_k(x) denotes the mixing coefficients. This can be written in matrix form as U = αB^T, where α ∈ R^{M×K} and B ∈ R^{T×K}, with K the number of compartments. As K is in general small compared to the number of time intervals T, we naturally obtain a low-rank matrix representation of U. Furthermore, α_k is the k-th column of the coefficient matrix α, and each element of α_k represents the contribution of the k-th basis function to the current pixel. For images with only a few compartments, the nonzero coefficients in α are sparse. In the temporal direction, we want to use the least number of basis functions; that is, as many columns of α as possible should be entirely zero. We can use the ℓ_{1,∞} norm to describe this quantity: ||α||_{1,∞} = Σ_k max_i |α_{i,k}|. This term is designed to select a small number of basis functions to represent the images u_t. This type of column sparsity was previously studied in [43] and in [44] for hyperspectral image classification. Finally, to enforce the smoothness of the decay of the radioactive distribution, we also use a quadratic smoothness penalty on the basis B as another regularization. Now we summarize the model that we propose for the reconstruction of dynamic images. Given an estimate U^n at step n, we propose to solve the reconstruction model (12), which combines the data fidelity with the above regularization terms. The combination of multiple regularizations aims to take into account the spatial-temporal factorization, with constraints on the basis and the sparsity of the representation coefficients, as well as the alignment of the edges. We will show that the proposed model is robust against the incompleteness of the projection data and noise. Numerical algorithms There are three variables U, α and B in the proposed model (12), and nonsmooth and composite regularization terms are involved. Thus it is a rather complex problem to solve directly for the three variables. We propose to solve the nonconvex optimization problem with an alternating scheme, updating the image U, the coefficients α and the basis B. In the following, we present the alternating algorithm that solves (12), with H(f, AU) defined as in (4). In the subproblem (13a) for U, given u_i^n and p_i^n ∈ ∂J_TV(u_i^n), R(u_i) can be rewritten as an infimal convolution in which ∇^T q_i^n = p_i^n, i = 1, ..., T. Then, the subproblem (13a) is reformulated as problem (15). We solve problem (15) by the primal-dual hybrid gradient (PDHG) algorithm [37,38].
The problem formulation is as follows: min_U F(KU) + G(U), where K : U → V is a linear and continuous operator between two finite-dimensional vector spaces U and V. For our model, G contains the characteristic function of the set {U ≥ 0}. According to (17), we obtain the iterative scheme for each subproblem. For the subproblem (17a), the dual variables are updated as in (18), where i = 1, ..., T, j = 1, ..., i−1, i+1, ..., T, and Π_{B_∞(1)}(z) = z/max(1, |z|) denotes the projection onto the unit ball. For the subproblem (17b), the primal variables are updated as in (19); it is easy to see that U^{k+1} is the nonnegative projection of the preceding gradient step. We note that the variables q_i^n can be updated via the relation q_i^{n+1} = d_{ii}^n w_{i,i} + q_i^n. The overall algorithm for the U subproblem is summarized in Algorithm 1. Algorithm 1: PDHG algorithm for U. Input: σ, τ, λ, w_{i,j}, i, j = 1, 2, ..., T. Initialize the primal and dual variables; while the stopping conditions are not satisfied, perform the dual update and then the primal update. The subproblem (13b) for α can be solved by PFBS (proximal forward-backward splitting), and (20b) can be rewritten accordingly. We can see from this formulation that the problem is separable in the columns of α; thus, we can solve for each column α_j. The solution of this subproblem is given by the Moreau decomposition [45][46][47]. The Moreau decomposition of a convex function J on R^n is x = prox_J(x) + prox_{J*}(x), where J*(p) is the conjugate function of J [46,48]. Let J(α_j) = β max_i |α_{i,j}|; then the conjugate function J*(p) is the characteristic function of the convex set C_β = {p ∈ R^n : ||max(p, 0)||_1 ≤ β}. By the Moreau decomposition, α^{k+1} is then given by α^{k+1} = α^{k+1/2} − Π_{C_δ}(α^{k+1/2}), where Π_{C_δ}(α^{k+1/2}) is the projection of each column of α^{k+1/2} onto C_δ and δ = τβ. Overall, the entire algorithm is summarized as follows. Simulation results We present simulation results to validate the proposed model and algorithm. First of all, for the edge correlation regularization term R(u_i), in practice it is computationally expensive and also unnecessary to consider the correlation of each frame with all the other frames. In our computational results, we only consider the edge correlation with the last iterate, the former two frames and the later two frames; that is, w_{i,j} = 0 when |i − j| > 2. For the boundary frames, we have the following setting: when i = 1, j = 1, 2, 3; when i = 2, j = 1, 2, 3, 4. The proposed method is tested on numerical phantoms for a proof-of-concept study. We simulate 90 image frames of size 64 × 64 and 2 projections per frame. Three time activity curves (TACs) for blood, liver and myocardium, previously used in [14] (see Figure 1), are used to simulate the dynamic images. The first simulated dynamic phantom is composed of two ellipses. In the temporal direction, the positions of the two ellipses are stationary, while the intensity within the region of each ellipse over the 90 frames is generated according to the TAC of blood or liver. The projections are generated by applying the Radon transform sequentially to each frame. The second numerical experiment is performed on a synthetic image simulating a rat's abdomen, where the bright region represents the heart of the rat. We use the TACs in Figure 2 to simulate the dynamic images. Gaussian noise We compare our method with the filtered back projection (FBP) method, with the results of alternatingly solving the least squares model arg min_{α,B} ||AαB^T − f||_F^2, and with our previous model, sparsity enforced matrix factorization (SEMF), proposed in [25]. As for the initial values of U, α and B, we use uniform B-splines, B ∈ R^{90×20}, as the initial basis to solve arg min_{α,B} ||AαB^T − f||_F^2 for α and B.
Then, the same α, B, and U = αB^T are used as the initialization of U, α and B to solve our proposed model. In the tests, projections at two orthogonal angles are simulated for every frame to mimic data collection with a 2-head camera. The projection angles increase sequentially by 1° along the temporal direction. For example, at frame 1, projections are simulated at angles 1° and 91°; at frame 2, at angles 2° and 92°, etc. Finally, 10% white Gaussian noise is added to the projection data. Reconstruction results with the different methods are shown in Figure 3. Since the number of projections is very limited for each frame, the traditional FBP and least squares methods cannot reconstruct the images satisfactorily, while the proposed method is capable of reconstructing the images effectively. Compared with the SEMF model, when the edges of the images jump (see frames 21-31 in Figure 3), the proposed model better captures the change of the tendency of the TAC. Figure 4 illustrates the comparison of the TACs of blood and liver. The dashed lines are the normalized true TACs and the solid lines are the normalized ones extracted from the images reconstructed by our method. Even with a high noise level and a fast change of the radioisotope, the reconstructed TACs fit closely to the true ones. For the simulated images of the rat's abdomen, the same procedure is applied to generate the projection data. Again, 10% noise was added to the sinogram. Figure 5 compares the frames reconstructed by the different methods. Clearly, the traditional FBP method and the least squares method cannot reconstruct the dynamic images from very few projections; however, the proposed method reconstructs the images quite accurately. Figure 6 illustrates the comparison of the true TACs and those reconstructed by the proposed method. We can see that they are quite accurate and present small errors. The relative error of the reconstructed image with respect to the true one for the t-th frame is defined as e_t = ||U_rec^t − U_true^t|| / ||U_true^t||, where U_rec is the frame reconstructed by the proposed method and U_true is the ground truth image. Figure 7 demonstrates the relative error of the T reconstructed images. We can see that the relative error of the proposed model is smaller compared with SEMF. This is due to the fact that in the proposed method, we set the former and later images as references, and the referred images provide edge information for the current image. Poisson noise Again, the proposed method is compared with FBP, and with alternatingly applying the EM algorithm and the update of the basis for solving min_{α,B} D_KL(f, AαB^T). As for the initial values of U, α and B for our method, we use uniform B-splines, B ∈ R^{90×20}, as the initial basis B, solve the above model to obtain α and B, and set U = αB^T. Figures 8 and 9 show the results for the ellipse and rat phantoms with Poisson noise. Since the number of projections is very limited and the data are corrupted by Poisson noise, the reconstructions by both FBP and EM (with basis updates) are not satisfactory, while the proposed method is capable of reconstructing the main structures of the images faithfully. Figures 10 and 11 show the comparison of the TACs of blood and liver for the two phantoms. The dashed lines are the normalized true TACs and the solid lines are the normalized ones extracted from the images reconstructed by our method. Even with a high noise level and a fast change of the radioisotope, the reconstructed TACs fit closely to the true ones. Figure 12 demonstrates the relative error of the images reconstructed by the proposed method from the Poisson-noisy data for the two datasets.
With the proposed model, the relative errors are small; for the second image, whose structure is more complex, the relative error is bigger than for the first one. Monte Carlo simulation In order to test the performance of the proposed method in a more realistic scenario, we performed a Monte Carlo simulation for dynamic SPECT imaging. First, we created a 129 × 129 phantom image consisting of three circles as regions of interest, shown in Figure 13. The TACs over a time period of 90 time steps of the outer and the two inner circles are displayed in Figure 13(b). For each single frame, the photon count is a probability proportional to the concentration in every region. The events are detected by a virtual double-head gamma camera rotating around the patient by 1 degree per time step, which consists of 374 detector bins. Every simulated decay event is projected and counted by the corresponding detector bin. We set the number of events counted by the detector to events = 2 × 10^4 and events = 2 × 10^5; the resulting sinograms are shown in Figure 14. Based on the sinogram data, we compare the proposed method with the alternating EM algorithm. The results for both test cases are shown in Figure 15. We can see that, for the case of a low count number, the proposed method is able to reconstruct the regions properly. Within a number of iterations, the algorithm presents a reasonable reconstruction of the regions of interest and the corresponding regional tracer concentration curves. Figure 16 illustrates the comparison of the TACs of the two regions. The dashed lines are the normalized true TACs and the solid lines are the normalized ones extracted from the reconstructed images by our method. The first row shows the TACs for 20000 events and the second row the TACs for 200000 events. We also performed the Monte Carlo simulation on a more complex image: the rat's abdomen phantom. Setting events = 200000, the sinogram image is shown in Figure 17. The reconstructed images for events = 200000 can be found in Figure 18. Figure 19 illustrates the comparison of the true TACs and those reconstructed by the proposed method. We can see that the proposed method robustly reconstructs most of the structures present in the images. Conclusion and Outlook In this paper, we presented a new reconstruction model for dynamic SPECT from few and incomplete projections based on edge correlation. Both Gaussian noise and Poisson noise were investigated. The proposed nonconvex model is solved by an alternating scheme. The reconstruction results on two 2D phantoms indicate that our algorithm outperforms the conventional FBP-type reconstruction algorithm, the least squares/EM method and the former SEMF model. The reconstructed image sequences are very close to the exact ones, especially for those frames with changed edge directions. Extensive numerical results show that the choice of the regularization methods as well as the reconstruction approach is effective for a proof-of-concept study. Nevertheless, there are still many aspects that need to be improved in the future. Firstly, the method is tested on simulated 2D images with low spatial resolution, while real clinical dynamic SPECT is 3D with a higher spatial resolution. Consequently, computation time and acceleration methods should be taken into account. Furthermore, the model involves many parameters that need to be set in a more automatic way. Therefore it is necessary to discuss the parameter choice in future work.
2018-01-19T06:28:04.000Z
2017-07-01T00:00:00.000
{ "year": 2017, "sha1": "50bcd2a44948e7fece2a08d1a85c9b0b2423196b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1707.00158", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "50bcd2a44948e7fece2a08d1a85c9b0b2423196b", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
543159
pes2o/s2orc
v3-fos-license
Deadly Outbreak of Iron Storage Disease (ISD) in Italian Birds of the Family Turdidae ABSTRACT A widespread deadly outbreak occurred in captive birds belonging to the family Turdidae in Italy. The present study was performed on 46 dead birds coming from 3 small decoy-bird breeders in central Italy. Only Turdus pilaris, Turdus iliacus, Turdus philomelos and Turdus merula were affected. No other species of bird held by these breeders died. A change of diet before the hunting season was reported by all breeders. A full necropsy of the animals and histological investigations of representative tissue samples were performed. Microscopical examination showed marked iron deposits in liver samples. Bacteriological investigations and molecular analyses to exclude bacterial and viral diseases were carried out. Contamination of food pellet samples by mycotoxins was considered, and analyses to detect heavy metal contaminants in food pellet samples were performed. An interesting result was the high iron content found in the food pellets. It was higher than that considered suitable for birds, especially for species susceptible to developing iron storage disease (ISD). Taken together, the results suggested an outbreak of ISD caused by the high iron content of the food given to the birds before the hunting season. The high mortality recorded only in species belonging to the family Turdidae suggests a genetic predisposition in the affected birds. Iron storage disease (ISD) is a potentially life-threatening disease of captivity reported in several species of toucans (Rhamphastidae), mynahs (Sturnidae), birds-of-paradise (Paradisaeidae), curassows (Cracidae) and quetzals (Pharomachrus species) [4,5,7,10,27]. The pathogenesis of ISD is still poorly understood, and probably multifactorial causes should be taken into account. Several studies have been published on the key role played by dietary iron in hepatic iron stores [5,6,12,21,23,25]. Therefore, dietary modification is still considered an essential element in its treatment [6,8,21]. For this reason, although diets for birds should be formulated to contain between 50-100 mg/kg iron on a dry matter basis [23], levels of 50-65 mg/kg [5] or even lower levels (30 mg/kg) [7,8] were recommended for diets fed to susceptible species. As is well known, ISD refers to avian patients with clinical signs related to iron overload and altered organ function. In affected animals, an increased deposition of hemosiderin in different types of tissue, especially in the liver, heart and spleen, can be observed [13,14,16,17,22,26]. In particular, liver hemosiderosis is microscopically characterized by iron pigment overload in Kupffer cells and hepatocytes [4,14]. Although iron accumulation with hemosiderosis may not always be associated with clinical disease, in severe cases hepatic damage [1,2] and heart failure [3,21] may occur. Clinical signs commonly reported are not pathognomonic for the diagnosis of ISD, and they reflect the end-stage malfunction of the various organs in which iron has accumulated. Common clinical signs of ISD include dyspnea, abdominal distension and ascites, weight loss and depression. Enlargement of the liver, heart or spleen is often observed, but hematology and serum biochemical analysis appear to be of little specific diagnostic value [20]. In humans, the most important iron overload syndrome is a hereditary condition called hemochromatosis [6].
MATERIALS AND METHODS From October to January, a widespread deadly outbreak occurred in captive birds belonging to the family Turdidae in Italy. Six adult blackbirds (Turdus merula), 6 adult fieldfares (Turdus pilaris), 20 adult song thrushes (Turdus philomelos) and 14 adult redwings (Turdus iliacus) were sent to the Diagnostics and Animal Welfare unit of the Istituto Zooprofilattico Umbria e Marche for necropsy examination. Nineteen of the birds were male, and 27 were female. All animals came from 3 small decoy-bird breeders in central Italy. The mortality rates were 15% for fieldfares, 30% for redwings, 32% for song thrushes and 14% for blackbirds. No other species of bird held by these breeders (Coccothraustes coccothraustes, Alauda arvensis and Fringilla montifringilla) died. The breeders reported a recent change of diet before the hunting season. A commercial low-iron (25 mg) balanced diet specifically formulated to encourage singing birds was used. The clinical histories of all the dead birds included anorexia, lethargy and finally dyspnea and death. Further significant anamnestic data were not reported. A full necropsy examination of all dead birds was performed, and representative tissue samples were removed and fixed in 10% neutral buffered formalin for routine histological examination. Hematoxylin and eosin, Masson's trichrome and Perls' Prussian blue staining were performed. During necropsy, fresh tissue samples from the lungs, brain and intestines were collected and subjected to molecular analysis for Newcastle disease virus (NDV), avian influenza virus and West Nile disease virus. Viral RNA was extracted from the collected tissues using a QIAamp Viral RNA Mini Kit (QIAGEN GmbH, Hilden, Germany) according to the manufacturer's recommendations. The extracted RNA was stored at −80°C until use. A 316 bp fragment of the fusion protein gene (F) of NDV was amplified from randomly transcribed cDNA using the primers NDVF-for2 and NDVF-rev1 [11]. PCR products of the expected size were verified by agar gel electrophoresis. A fragment of gene M of avian influenza virus was amplified using a real-time one-step RT-PCR as previously described by Spackman et al. [24]. Two one-step real-time quantitative reverse transcription polymerase chain reaction (qRT-PCR) assays for the simultaneous detection of West Nile virus (WNV) lineage 1 and 2 strains were performed. The primers and probe for assay 1 targeted the 5′-untranslated region (UTR), whereas the amplicon for assay 2 was located in the nonstructural region NS2A, which enables an unambiguous and independent WNV diagnosis based on 2 different amplicons, as described previously [9]. Intestinal and hepatic specimens were also aseptically collected for bacteriological examinations. Coprological examination of fresh stool specimens was also performed. Moreover, touch imprint (TI) cytology preparations from heart and liver samples were made. These specimens were stained with a combination of May-Grünwald and Giemsa-Romanowsky stains, using the method described by Pappenheim [18]. Contamination of food pellet samples by mycotoxins (aflatoxin B1 (AFLB1), ochratoxin A (OTA), zearalenone (ZEA), and fumonisins B1, B2 and B3) was investigated. AFLB1, OTA and ZEA were examined in a single chromatographic run by high-performance liquid chromatography (HPLC) coupled with fluorimetric detection after purification by an immunoaffinity column (IAC) (Vicam AOZ, Milford, MA, U.S.A.).
Samples were homogenized by preparation of a slurry with water at a ratio of 1:1.6 (w/v) and then extracted with ACN. The retention time (RT) was used for identification purposes, while the concentrations of the samples were calculated by interpolation with a calibration curve. Fumonisins B1, B2 and B3 were also determined after IAC purification (R-Biopharm Rhône P31). The feed samples were homogenized by preparation of a slurry with water at a ratio of 1:2.5 (w/v) and then extracted with ACN/MeOH. The purified extracts were analyzed by triple quadrupole LC-MS/MS (API 3200 QTrap) in MRM mode. For each molecule, two transitions were monitored (quantifier and qualifier). The retention time (RT) and ion ratio (IR) were used for identification purposes, while concentrations were calculated by interpolation with a calibration curve. Analysis of heavy metal contaminants in food pellet samples (iron (Fe), copper (Cu), zinc (Zn) and lead (Pb)) was performed by atomic absorption spectrophotometry (AAS) with an air-acetylene flame after mineralization of the sample. Mineralization was carried out by a dry process. Ten grams of sample were incinerated in a muffle furnace at 410 ± 20°C for 6 hr. The ashes were treated with 10 ml of HNO3/H2O 1:1 (v/v). Subsequently, the sample was placed again in the muffle furnace at 410 ± 20°C until it was reduced to white ashes. The ashes were dissolved in 10 ml of HCl/H2O 1:1 (v/v), and the volume was brought to 100 ml with water. The elements were determined by AAS at the appropriate wavelengths. RESULTS At necropsy, all birds showed moderate to marked congestion and enlargement of the liver and spleen. The liver was dark brown in color in around 10% of the birds. Moreover, the hearts of some animals appeared enlarged, showing small, well-demarcated, pale grayish areas consistent with necrosis. The necrosis involved mainly the apex and the right ventricle. Microscopic examination of liver samples showed venous congestion and severe, mainly periportal, parenchymal cell degeneration. Yellow/gold to brown granular pigment in the cytoplasm of hepatocytes, consistent with hemosiderin, was detected in all liver samples (Fig. 1). Necrosis of isolated hepatocytes was also observed. Perls' Prussian blue staining revealed a variable grade of iron deposits in the liver samples. Forty percent of the birds showed marked iron deposits in periportal (zone 1) hepatocytes; 50% showed accumulation of iron in periportal and midzonal (zones 1 and 2) hepatocytes; 10% of the birds showed marked panlobular iron deposit accumulation (zones 1, 2 and 3) (Fig. 2). Blackbirds showed the most severe iron accumulation, while song thrushes appeared to be a less affected species. The levels of hemosiderosis in the livers of the investigated species are summarized in Table 1. Masson's trichrome staining did not show hepatic fibrosis. Microscopical examination of heart samples showed acute myocardial necrosis, characterized by small groups of eosinophilic and fragmented myocardial fibers, in 37% of the birds examined (Fig. 3). Capillary dilatation and mucoid edema with scant leukocyte infiltration were also observed. Perls' Prussian blue staining occasionally revealed granular iron deposits in scattered myocardial cells. Iron deposits were also detected in tubular epithelial cells. No other histological lesions were observed. Analyses for Newcastle disease, avian influenza and West Nile virus showed negative results, as did the bacteriological exams.
Coprological examination of fresh stool specimens revealed coccidian oocysts of the genus Isospora in 36% of the birds. Touch imprint cytology of heart and liver samples showed microfilariae in 13% of the examined birds. HPLC and LC-MS/MS analyses of AFLB1, OTA, ZEA and fumonisins B1, B2 and B3 in food pellets gave negative results. Analysis of heavy metal contaminants in food pellets by AAS revealed the following values: iron 111 mg/kg; copper 6 mg/kg; zinc 38 mg/kg; lead 0.132 mg/kg.

DISCUSSION

In this report, we described a sudden deadly outbreak in Italian birds of the Turdidae family. First, Newcastle disease, avian influenza, West Nile fever and bacterial diseases were excluded by PCR and bacteriological examinations. Considering the change of diet before the deadly outbreak, the commercial diet was analyzed for mycotoxins and metal contamination. Analysis for mycotoxins showed negative results. On the other hand, analysis for heavy metal contaminants in food pellet samples showed a higher level of iron than that reported on the food label. Moreover, the iron content was higher than that considered suitable for birds [23]. In particular, it was three times higher than that recommended for feeding to iron-sensitive species [7,8].

Microscopic examination of the liver showed marked accumulation of iron characterized by progressive involvement of hepatocytes from zone 1 to zone 3. The blackbird seemed to be the most severely affected species, showing marked panlobular iron deposit accumulation. On the other hand, the song thrush seemed to be the least resistant species, showing high mortality despite less marked iron deposition compared with the other investigated birds. However, the changes usually considered an expression of chronic liver damage in iron storage disease (ISD) [23], such as fibrosis, cirrhosis or neoplasia, were not detected in our birds. Conversely, acute liver damage, represented by degeneration and necrosis of hepatocytes with iron overload (sideronecrosis), was constantly found. It is likely that the course was too short for these birds to develop chronic hepatic lesions and that acute liver and heart failure resulted in sudden death.

Taken together, the results suggest an outbreak of ISD caused by a high iron content in the diet given before the hunting season. Since the environmental and nutritional conditions of the three small decoy-bird breeders were uniform, the mortality that occurred only in blackbirds, fieldfares, song thrushes and redwings may suggest a predisposition of these species to the development of ISD. Captivity-related stress and the parasitism found in these birds could be considered possible co-factors triggering ISD [3,19].
When Ads Become Invisible: Minors' Advertising Literacy While Using Mobile Phones

It has been traditionally estimated that children begin to understand the persuasive intent of advertising at about the age of 8, which is when they acquire the skills of adult consumers. The ability to identify and interpret the persuasive content that minors are exposed to via mobile phones was analyzed through semi-structured interviews of children aged 10 to 14 years, along with their parents, in 20 households. Although minors seem to be able to recognize the persuasive intent of advertising, this does not necessarily mean that they have a deep understanding of the new digital formats that combine persuasion and entertainment. Data analysis of the interviews shows low recognition of the persuasive intent of commercial messages that are not explicitly identified as such, particularly on social networks. Data collected after the minors' viewing of different examples allowed the researchers to conclude that standardized advertising is mainly identified by its format. Three levels of advertising processing were detected in minors: liking of the advertisement, affinity for the advertised product, and the ability to contrast the claims with searches for comments, forums or opinions of influencers. Recent research has verified that conceptual knowledge of the persuasive intention of advertising does not suffice for minors to interpret the message, a fact that must be taken into account when developing advertising literacy. For parents, the amount of time spent on these devices, the type of use minors make of their cellphones, and the relationships they establish on them are more relevant than exposure to advertising itself.

Introduction

Mobile phones are widely present in Western societies. The improvement of mobile internet connection has turned this personal screen into the main point of access, communication, and consumption of digital content for many users (IAB Spain, 2021), including minors. Among Chilean children aged 10 to 13, the penetration of mobile phones is over 80% (Cabello et al., 2020; VTR, 2019). The personal nature of mobiles and their ubiquitous presence (Ohme et al., 2020) gave rise to a relationship between users and cellphones that, as Beer (2012) suggests, surpasses that of a mere portal to the digital world. The massive spread of cell phone use and its impact on the consumption habits and lifestyles of internet users have transformed this device into an advertising medium. In fact, according to Statista (2019), in 2022 advertising expenditure for mobile media will outpace desktop expenditure. As mobile phone users, minors are highly exposed to advertising when using these devices. Exploratory studies (Feijoo et al., 2020) show that, through their mobile phones, minors spend a significant amount of time connected to platforms such as YouTube, game apps, and Instagram, on which advertising exposure has been quantified at 14 minutes per hour, slightly higher than that of traditional media such as television. In this context, this article aims to study the ability of minors to understand the persuasive intentionality of the advertising they are exposed to through their mobile phones. Particular attention is paid to hybrid advertising formats, which lack intentional transparency (van Reijmersdal & Rozendaal, 2020) and therefore hinder the recognition of the advertising phenomenon.
Children's Advertising Literacy in the Face of New Digital Formats

Advertising literacy, also called persuasion knowledge, can be defined as the beliefs that consumers form about the motives, strategies, and tactics used in advertising (Rozendaal et al., 2013). Several theoretical models (e.g., Wright et al., 2005) establish the specific components of advertising literacy. The model proposed by Rozendaal et al. (2011), which differentiates two dimensions of advertising literacy, is used as the reference in this study. The first dimension comprises conceptual advertising literacy, which refers to the ability to recognize a commercial message and its intentions. Specifically, this dimension implies:

1. The recognition of advertising, differentiating advertising from other media content such as information or entertainment;
2. Understanding the commercial intention (that the advertising is trying to sell products);
3. Recognition of the source of advertising (who pays to insert ads);
4. Identification of the target audience (understanding the concept of targeting and audience segmentation);
5. Identification of the persuasive intention (that advertising tries to influence consumer behavior by, for example, changing attitudes towards a product);
6. Persuasive tactics (understanding that advertisers use specific tactics to persuade);
7. Capturing advertising bias (being aware of discrepancies between the advertised product and the actual one).

The second dimension is attitudinal advertising literacy, which is evaluative in nature. This dimension consists of two components: skepticism towards advertising (the tendency towards disbelief in advertising) and the level of like/dislike towards advertising.

Previous studies on advertising in traditional media assumed that the conceptual dimension of advertising literacy was sufficient for children to filter out and process advertising messages. Nevertheless, several authors have done research on new digital advertising formats (Rozendaal et al., 2011, 2013; van Reijmersdal et al., 2017; Vanwesenbeeck et al., 2017), and their results indicate that conceptual knowledge of the persuasive intentionality of advertising is necessary but does not suffice for minors to properly process messages that exhibit non-traditional features (Livingstone & Helsper, 2006; Rozendaal et al., 2011). This is due to the fact that when children are exposed to non-traditional advertising, they apply low-effort cognitive processing, according to the PCMC model presented by Buijzen et al. (2010), and fail to activate the associative network of knowledge about advertising that they have developed (Mallinckrodt & Mizerski, 2007; Rozendaal et al., 2011, 2013; van Reijmersdal et al., 2017; Vanwesenbeeck et al., 2017). The embedded, subtle, and enveloping nature of these digital ad formats increases low cognitive elaboration during exposure to them (van Reijmersdal & Rozendaal, 2020). Moreover, children's attention is concentrated on the recreational part of the format, and therefore persuasive message processing abilities are left on the back burner (Rifon et al., 2014). The studies cited herein highlight the need to consider the attitudinal dimension of advertising literacy, which is much more effective in helping children to question and interpret advertising.
Despite the difficulties that recognizing persuasive intentionality poses, formats that present blurred boundaries between entertainment, information, and advertising are what younger audiences demand. The AdReaction study by Kantar Millward Brown (2017) revealed that younger audiences are the most likely to qualify digital advertising as annoying; however, their attitude becomes more positive when they are exposed to advertising that includes rewards, uses special effects, or incorporates new immersive elements. In addition, teenagers, for example, accept the presence of brands and sponsorships when it is mediated by influencers of their choice, as long as the ratio between entertainment and commercial content is not disturbed (van Dam & van Reijmersdal, 2019). However, the difficulty exhibited by minors in identifying the advertising intention of certain content, the possibility that content airs unaccompanied by clear warnings given imprecise regulation, and the perception of credibility with which influencers infuse commercial communications (Feijoo & Pavez, 2019; Tur-Viñes et al., 2018) all add up to increase the risk of the current advertising context. Explicit identification of the commercial interest of content is key to activating persuasion knowledge in the user (Friestad & Wright, 1994). This has led legislators to demand adequate and clear marking of these formats as a way to protect vulnerable audiences (Boerman et al., 2012). Nonetheless, national legislation lags behind the dynamism of the phenomenon (Sixto-García & Álvarez Vázquez, 2020).

There is growing literature on the advertising literacy of minors in the digital context, specifically on advergaming (Hudders et al., 2017; Mallinckrodt & Mizerski, 2007; van Reijmersdal et al., 2012; Vanwesenbeeck et al., 2017), social networks (Rozendaal et al., 2013; Zarouali et al., 2018), personalized digital advertising (van Reijmersdal et al., 2017), and influencer marketing (van Dam & van Reijmersdal, 2019). However, empirical evidence on the advertising literacy of children using mobile phones is still missing. The use of this screen is particularly relevant among minors given its features in terms of mobility, autonomy, and universality, which are incomparable to those of other means of online access (Beer, 2012). Thus, the way in which content is consumed on mobile phones needs to be considered: their current ubiquity allows individuals to communicate, be informed, or be entertained anywhere, at any time (Ohme et al., 2020). Likewise, comparatively speaking, the perception of intrusion and invasion of the private sphere is greater via mobile than on other channels. It is considered the most personal communicational extension of human beings (Gómez-Tinoco, 2012). In the last decade, there have been many investigations focused on the analysis of the use of mobile devices by children and young people (Mascheroni & Ólafsson, 2014), given the high penetration the devices have achieved among this audience. Several authors have conducted exploratory studies on the consumption of advertising through mobile devices by younger children (An & Kang, 2014; Chen et al., 2013; Terlutter & Capella, 2013) and report a certain degree of inconsistency with respect to the differentiation and categorization of persuasive messages. For example, researchers such as Chen et al. (2013) showed that age recommendations for services or content offered by apps do not cover supervision of the inserted advertising.
Another exploratory study (Feijoo et al., 2020) revealed that this age group spends much of its time connected to mobile phones, on which the level of exposure to non-traditional advertising is comparatively higher than in media such as television. What seems beyond doubt is that minors primarily use their mobile phones to access the internet, and this implies high exposure to commercial content. It is therefore necessary to question whether children are prepared to activate their persuasion knowledge in the mobile context. Therefore, the following research questions are posed:

RQ1a: What is minors' conceptual advertising literacy with respect to advertising they receive through mobile phones, specifically in terms of (a) recognition of advertising, (b) understanding selling intent, (c) understanding persuasive intent, (d) recognition of advertising source, and (e) understanding persuasive tactics?

RQ1b: What is minors' attitudinal advertising literacy with respect to advertising they receive through mobile phones, specifically disliking of it and skepticism towards it?

Furthermore, advertising literacy can be dispositional or situational (Hudders et al., 2017). Dispositional advertising literacy involves being in possession of knowledge and skills about the advertising phenomenon, while situational advertising literacy involves being able to process advertisements as such and having sufficient consumer knowledge (cognitive, moral, and affective) regarding the advertising phenomenon. All of these need to be activated when the viewer is exposed to advertising, in order for them to recognize the persuasive intention and critically reflect on the message received. To reflect on the level of correspondence between minors' self-reported advertising literacy and their actual advertising literacy, a second research question is posed:

RQ2: Based on concrete mobile ad examples, what types of content do children recognize as advertising?

Advertising Literacy of Minors From the Perspective of Parents

The question arises as to the extent to which access to, and the specific individual ways of using, devices such as cellphones hinder direct parental mediation (Oates et al., 2014). It therefore seems pertinent to pay attention to the perceptions of parents about their children's advertising consumption through these screens. Parental responsibilities also include mediating the relationship between minors and the content they consume, which can also be seen as an opportunity to teach them to differentiate between fiction and reality and to help them acquire healthy consumption patterns (Saraf et al., 2013). In fact, some studies suggest that parental concern may be highly relevant when it comes to acquiring certain skills (Condeza et al., 2019; Shin, 2017). However, when parents are asked about the advertising their children consume, they continue to point to television as the main source of this content (Oates et al., 2014). In this context, the last research question is formulated:

RQ3: What perceptions do parents have about their children's exposure to advertising on their mobile phones?

Chile, a Case Study

Chile is an interesting case study due to its high access to, and consumption of, the internet through mobile devices. Its 85% internet penetration via cell phones is similar to that of other OECD countries (Subtel, 2020).
The internet is most widely accessed through mobile devices (84.2%), more specifically via smartphone, which accounts for 80% of total access (Subtel, 2020). This access pattern is replicated and accentuated by Chilean children, who mainly access the internet from their mobile phones, compared with other connection modes such as computers or tablets (Cabello et al., 2020; Feijoo & García, 2019; Subtel, 2020). As is the case in other Western countries (Kabali et al., 2015), the penetration of cellphones is the most socially uniform of the cited screens, although some significant differences related to the technical specifications of the equipment, influenced by socioeconomic stratum and setting (urban vs. rural), are present (Cabello et al., 2018).

Methodological Procedures

The objective of this research is to analyze the ability and aptitude of minors to critically navigate the advertising they receive through their mobile phones. To this end, semi-structured interviews were conducted with minors aged 10 to 14 and one of their parents/guardians. Interviews have been confirmed as an adequate instrument, since most children at this age have already acquired the skills necessary to achieve successful levels of verbal exchange (Zarouali et al., 2019). This methodological approach responds to the need for new qualitative studies that can provide in-depth exploration of digital skills, including those related to critical capacity (van Deursen et al., 2016). The interview was designed around the following blocks:

Block 1. Recognition of the advertising phenomenon: children explained what they understood by advertising, what their opinion of advertising was, what characteristics they associated with advertising, and what level of attention they paid to advertising or what degree of realism they assigned to it.

Block 2. Attitude towards the advertising children were exposed to via mobile phone: we tried to understand how children identified and processed commercial messages and their feelings during these encounters, whether advertising was liked or perceived as bothersome, and whether there was a willingness to watch an ad and it was considered as such.

Block 3. A 2 min video was played that included 17 mobile digital formats, with examples of social media advertising, emailing, SMS, display advertising in video games, and unmarked commercial content published by influencers. The aim was to confirm children's ability to identify persuasive intention.

Block 4. Parental perceptions: what parents know and think about the role of their children as recipients of advertising.

Qualitative data were obtained by means of a thematic analysis using NVivo (Boyatzis, 1995). The research questions and the topics included in the interview script guided the coding categories that were established. Given the researchers' long-standing engagement with the topic, both authors participated in the coding process in order to improve the quality of the ensuing interpretation of the analyzed material.

Sample

Twenty homes, all located in the metropolitan area of Santiago de Chile, were visited between June and August 2019 to interview one child and one of their parents or guardians per household. As for the minors, 12 were girls and eight were boys; 10 were aged 10 to 12 years old, and the other 10 were aged 13 or 14 years old; 11 had their own mobile and the rest (nine) used their parents' mobile.
As for the adults, mothers were generally interviewed (18), with only two exceptions, in which a father and an older sister (the child's guardian) were interviewed. Regarding the socioeconomic level of the families, 10 qualified as belonging to level C1 (high), 6 to C2-C3 (middle), and 4 to D (low). The homes sampled had participated in a previous phase of the research project to which this study belongs, in which face-to-face surveys were administered in 501 households to both one minor and one parent/guardian, following a probabilistic design by areas/macrozones. A social studies company (Feedback S.L.) was in charge of the field work and made available to the authors a network of interviewers with previous experience in research studies with minors. In the quantitative process, households in which a minor aged 10 to 14 lived were randomly selected within each macrozone. In those cases in which there was more than one individual who met the selection characteristics, the one whose birthday was closest to the day of the survey was selected. It is from this sampling frame that the 20 families who agreed to participate in the project were selected. The children had to meet the age and gender criteria defined for this qualitative stage, in addition to owning a telephone, since the navigation registered on the device directly influences the type of advertising the user receives. It is important to clarify that in this second phase an attempt was made to reach all kinds of family profiles, as had been achieved in the quantitative process; however, well-off families were more collaborative, hence group C1 is overrepresented in the interviews while the other socioeconomic groups are not equally represented.

During the interview, the interviewer first explained the essence of the interview to the parent or adult responsible for the household, who had to sign a consent form for the minor to participate in this stage of the study. Next, the consent of the minor himself or herself was obtained. The interview was completed in a neutral area of the home (kitchen or living room), with a maximum duration of 20-25 min, with the aim of preserving the child's attention. An attempt was made to ensure that guardians were not present during the interview, to prevent any possible interference with the responses of the minors. Finally, after the children were interviewed, each session finished with a last set of questions addressed to the adults regarding their perceptions of the relationship of minors with advertising on mobile phones. All documents had been previously reviewed and validated by the Ethics Committee of the university to which the research project is linked (University of Los Andes).

Conceptual Advertising Literacy

The following elements of conceptual advertising literacy (Rozendaal et al., 2011) were expressed by children, in variable degrees depending on their experience as consumers and mobile phone ownership: (a) recognition of advertising; (b) understanding of selling intent; (c) recognition of advertising source; (d) identification of the target audience; (e) understanding of persuasive intent; (f) understanding of persuasive tactics; and (g) advertising bias. Minors are aware that advertising "sells things": "It is something that companies use to get people's attention and make them buy their product or get people to do whatever the company aims at them doing" (I11-girl, 10-to-12 years old, parental smartphone).
It was interesting to see that, although at first they were asked about the phenomenon in general, they spontaneously associated advertising with the digital context, mobile phones, and social networks. Other advertising media, such as television or advertising present in their surroundings, appeared in the conversations, but only when suggested; others, such as print media or radio, were not mentioned: "Advertising is like a way of informing using images and other means during short periods of time when you are looking for something, or they appear in all apps or networks" (I3-boy, 13-to-14 years old, own smartphone). Regarding the recognition of the source of advertising, a certain degree of confusion was apparent, caused by the digital context and the normalization of social networks. Thus, while the majority referred to companies or brands as the main sources, some of the younger children connected the source of advertising to people: "[Advertising is] what you get on the networks, what people offer you through cell phones" (I14-girl, 10-to-12 years old, own smartphone). In this study, it was found that minors in general understand that ads seek to get viewers interested in having the products displayed, "that they want to convince you to buy the product, to go to the place they are promoting" (I11-girl, 13-to-14 years old, own smartphone). Children who declared having experience as consumers and who owned a mobile phone tended to be more aware of the purpose of advertising and were able to reason that ads, and certain content launched by influencers, were aimed at attracting user attention with the goal of selling:

They convince the person, for example, that the application is good, that this product is good, and they include sales so that the person buys it and more people buy it. And in the end, they get their way, because if more people buy it, they earn more. (I16-boy, 10-to-12 years old, own smartphone)

Indeed, influencers have become recurring intermediaries between brands and young consumers in the digital context. Therefore, those who identify the persuasive intentionality of this commercial relationship deem it normal and appropriate. Moreover, they believe it contributes to getting to know brands and products in a "more entertaining way":

I like that Mis Pastelitos [a YouTuber] tells me what flour they choose to use, for example, or the fact that a number six pastry bag is needed; then you have to go and buy a number six pastry bag and make the cupcake in question. Perfect. In other words, these are things that help me resolve my questions. (I13-girl, 13-to-14 years old, parental smartphone)

Some minors reflect on the addressee of the ads. Minors are aware that certain messages to which they are exposed are not addressed to them but to a different target audience, their parents, for instance. This is particularly true when minors access the internet using their parents' devices. Spontaneously in the conversation, the children alluded to certain tactics that are directly related to advertising, particularly repetition, since it directly influences their attitude towards these types of messages: "Suddenly they go a bit over the top, because they kind of always show, show and show. For example, on YouTube or in a video you see ten advertisements, and the same ones" (I1-girl, 10-to-12 years old, parental smartphone).
Moreover, some of the advertising resources detected by children allow them to identify that they are being exposed to advertising: "I realize that it is advertising because they make a saying, like Soprole [dairy brand], which is 'Soprole, healthy and delicious'" (I12-girl, 10-to-12 years old, parental smartphone). Other resources that they associate with advertising messages are gifts, promotions, rewards, or eye-catching elements: "First when you download [a game] it's free, but then some things you have to pay for. The first day they give them to you for free and then you have to pay" (I2-boy, 10-to-12 years old, parental smartphone). Unlike in other media, such as television, where they consider the display of advertisements "orderly," on mobile phones advertisements pop up unexpectedly, which minors perceive as advertising continually "going out to meet them." However, they did not relate this situation to the personalization of digital advertising. Interruption is another element that most of the interviewees associate with mobile advertising, which they say makes them miss out on other input that may be of greater interest to them. Few of the minors interviewed reflected on the final intention of these tactics. Only two of them (males between 13 and 14 years old with their own mobile) spontaneously commented that advertising is not objective and that it tends to be unrealistic and exaggerated: "There are some advertisements I can't believe, such as those that say that life can be easier by buying some things, but later when you buy them, they are easily wrecked" (I3-boy, 13-to-14 years old, own smartphone).

Attitudinal Advertising Literacy

To measure the attitudinal dimension of advertising literacy, attention was paid to the like/dislike generated by mobile advertising and to the degree of skepticism with which minors face it. Minors do not dislike mobile advertising as long as they have control over it, that is, when they, as users, can decide whether to view the ad or not, and when ads provide some added value, either in the form of entertainment or a reward, especially in gaming apps, in which they gladly invest their attention in exchange for benefits in the game:

Suddenly advertising gives you a chance to test a game, I do like that. Or when you can turn your phone into a 360° phone, and by turning your phone it shows you what is around you, that does attract attention. (I1-girl, 10-to-12 years old, parental smartphone)

For minors, mobile advertising as content is interesting because it can provide new information, although children are unanimously bothered by the ensuing interruption of what they were doing, in addition to the fact that advertising is repetitive and excessive. Hence their main reaction is to skip advertising instantly: "I don't care if advertising appears, but I do want it to appear between songs, not in the middle of the song" (I5-girl, 13-to-14 years old, own smartphone). When analyzing children's responses, we identified three arguments they used to discriminate the advertisements that interest them from those that do not. The most widely used criterion is their own taste and appetite: a significant percentage of children single out advertisements based on whether they like them or not, which directly depends on the degree of entertainment the advertising provides.
Others apply a second criterion, which relates to their affinity with the advertised product: "My ideal advertisement would be something like toys or things like that, chocolates, but not cars, or wines, or beers, or anything like that" (I14-girl, 10-to-12 years old, parental smartphone). A third, smaller percentage of the sample demonstrated that they contrast the arguments asserted by advertising with their own searches for information:

If I am interested in buying [a cell phone], then I would look further to see if it is really necessary, if it is good, if it suits me or I should wait for a different one, if the price is really high for what that cell phone really offers, things like that. (I16-boy, 10-to-12 years old, own smartphone)

This more critical attitude is present among minors who have their own device and who have acquired previous experience as consumers:

I don't believe advertising when it shows something that is very spectacular because of the image; perhaps in person, in real life, it is not like that. I don't know, for example, the other day I saw a tracksuit that looked very cute, but its fabric, when I later bought it, was not like it was in the advertisement. (I6-girl, 13-to-14 years old, own smartphone)

Credulity during the discrimination process is present, more so among the younger profiles, with rather limited questioning of advertising bias: "I trust advertising because, if it were bad advertising, companies would probably not be able to air it" (I17-boy, 10-to-12 years old, parental smartphone). On the other hand, only one minor of the 20 interviewees alluded to the influence of their parents' opinion on their processing of the advertisements encountered.

Situational Recognition

In order to analyze the level of advertising literacy among minors from a situational approach (Zarouali et al., 2019), 17 advertisements delivered via mobile phones were shown to the children. These ads included display advertising in video games, as well as standard formats used on Instagram, Facebook, YouTube, SMS, and email, in combination with examples of hybrid commercial content. Exposure to specific cases showed that minors tended to recognize standard-format advertisements: "When I get an ad that interrupts what I'm doing at the most interesting point, I wait until it ends or until it says skip the ad" (I9-girl, 10-to-12 years old, own smartphone). The fact that children can identify commercial messages by some type of signal enables them to be aware of their presence, meaning that identification is not a result of critical processing:

Every now and then an image appears [on Instagram]; if you click it, you are taken directly to the store. For example, you can click on the Nike shirt and a tick appears at the bottom, something resembling a bag. At first, I didn't know what it was, but then you click on the photo and the price of the shirt and the store where it is sold will appear. (I1-girl, 10-to-12 years old, parental smartphone)

Consequently, examples that did not display any kind of warning were not singled out as advertising by participants:

[The influencer wearing a Nike t-shirt] is not using the method of pushing people to do something, as he/she does not provide information on it nor tell you to "go, go, go." It is simply a normal photo, like me wearing clothes, for example.
(I11-girl, 13-to-14 years old, own smartphone)

Indeed, the interest-mediated relationship between brands and influencers was detected by five of the 20 interviewees, all of them minors aged 13 or 14 years old:

Influencers are paid, they must say "hey, look, we pay you X and you have McDonald's appear," and the influencer must say "Yeah, no problem." That is typical among those YouTubers who say "This video is sponsored by X," and they wear X clothes to promote them. (I13-girl, 13-to-14 years old, parental smartphone)

Minors do not question this practice, nor the fact that YouTubers engage in self-promotion: "It doesn't bother me; if they are famous, they will sell their own things, such as clothes and all that kind of stuff" (I20-girl, 13-to-14 years old, own smartphone). It was also revealed that minors reported varying exposure to advertising depending on the platform. For example, minors considered that YouTube and video games were saturated with ads, and reported less advertising pressure on Instagram and TikTok.

Parents' Position on Their Children's Advertising Exposure

Regarding parental opinion on the exposure to advertising their children encounter when browsing on mobile phones, the greatest level of agreement concerns the high advertising pressure: "There is nothing on the internet that is not invaded by advertising" (mother, I16-boy, 10-to-12 years old, own smartphone). However, parents are not concerned about this high presence of commercial content and consider that it does not pose a risk to their children. There are two main reasons that parents give for being calm. The first one is their children's age or attitude towards advertising: "it is rare that she sees much advertising, she always chooses to avoid it. She clicks it off at once. She doesn't pay attention to it" (mother, I9-girl, 10-to-12 years old, own smartphone). As can be seen, parents regard their children's age as one of the most important containment barriers, and thus a reason not to worry about the amount and the type of advertising their children consume through their mobile phones. The second reason is precisely the fact that advertising is personalized based on the content children consume (games, hobbies), which, in the parents' opinion, defines the type of advertising children receive and limits it to these interests: "In general, I think children don't receive harmful advertising, in general it's merely on video games, and we have those under control" (mother, I7-boy, 10-to-12 years old, own smartphone). Also, the fact that much of this advertising is also shown on television validates it, in their eyes, as not harmful to minors. It is hard to find more elaborate visions of the relationship between minors and advertising among parents. Their perception of online risks lies, fundamentally, in the consumption of certain content or in the possibility of being exposed to other dangerous situations. They also tend to minimize their own role in this context, something that, according to previous research (Condeza et al., 2019; Shin, 2017), could be far more relevant than the children's age for acquiring the skills necessary to adequately cope with this content.

Discussion

This study provides additional evidence verifying that conceptual knowledge of advertising is not enough to be able to identify it in the digital environment, as advanced by Livingstone and Helsper (2006), as well as Rozendaal et al. (2011).
Minors are aware of the presence of advertising in the digital environment and they acknowledge its excessive presence, a notion unanimously shared by their parents. This study provides more evidence supporting the idea that when children encounter hybrid content, they respond with low-effort cognitive processing (Mallinckrodt & Mizerski, 2007; Rozendaal et al., 2011, 2013; van Reijmersdal et al., 2017; Vanwesenbeeck et al., 2017). The role played by the presence of formal aspects in advertisements, which help children identify them, becomes particularly relevant. Thus, participants in this study tend to distinguish the advertising they see on their cellphones from other types of messages not by the content but by the form it takes, which is to say that recognition derives from technical aspects, not critical processing. However, when these external signals are not present, minors do not classify the content as advertising. Format brings trust, and thus intentionality remains unquestioned. This would explain why the majority of those interviewed did not question whether the recommendations provided by the influencers they follow on social networks may be promoted content. Furthermore, there seems to be a certain transfer of positive sentiment towards advertising when ads pop up in an entertainment context (Mallinckrodt & Mizerski, 2007; van Reijmersdal et al., 2012). Also contributing to this positive feeling towards advertising is the fact that ads adjust to minors' tastes and preferences, as pointed out by van Reijmersdal et al. (2017). Advertising only becomes bothersome when children feel they cannot control its presence, when it is perceived as boring, or when it interrupts their browsing experience.

This study also provides evidence that mobile ownership and degree of expertise in the digital environment are related to more critical attitudes towards advertising. Thus, to the extent that these two factors are related to the child's age, their relevance points in the same direction as what has been proposed by Chu et al. (2014) and Hudders et al. (2017) regarding greater cognitive development among older minors.

For the new generations, the mobile phone has become the main advertising medium, ahead of other classic media such as television. The fact that mobile screens are mainly for personal use seems to generate low tolerance towards interruption, repetition, or content beyond the user's immediate interests. Minors, however, do not seem to connect this rather negative attitude to advertising itself. They seem to associate the negativity with how saturated with advertising the media are and with their lack of control (and ensuing frustration) over unsolicited advertising. If advertising provides added value in the form of tangible compensation (promotions, discounts, rewards in games) or in the form of entertainment, minors' perception of mobile ads improves. Therefore, advertising forms such as content marketing and commercial content created by influencers turn out to be the persuasive communication that best captures minors' attention. This presents a great dilemma, because it is the audience itself that demands formats with blurred boundaries between advertising, entertainment, and information on mobile phones. This shows the need for those responsible for child development to reinforce children's advertising literacy with regard to the use of mobiles.
This reinforcement stems from critical thinking, an ability that has been qualified as one of the key digital skills of the 21st century (van Laar, 2019). The challenges that these results pose for advertising literacy are clear: minors have knowledge that allows them to identify advertising as long as it is marked or includes resources with which they are familiar (repetition, presence of certain icons, etc.). However, the ability to identify advertising is hindered, particularly among those with less browsing expertise or when advertising is integrated within other content. In addition, recognition does not imply the activation of critical thinking, given that if advertising is perceived as an entertaining element (something particularly demanded of mobile advertising by the youngest), acceptance sets in and limits the cognitive resources devoted to processing the message. Parents, as a filter in their children's advertising literacy, seem generically concerned about the amount of advertising to which their children are exposed. However, they view the customization of advertising messages as having some protective effect and think that their children's age makes them resistant to commercial appeals for products outside their age range. Consistent with the literature, parents consider television to be the main source of advertising consumption by their children (Oates et al., 2014) and also seem to think that their children's age makes them only vaguely interested in advertising content. Thus, parents do not seem to be aware that the acquisition of healthy advertising consumption habits by minors may depend much more on parental intervention than on their children's age (Condeza et al., 2019; Shin, 2017).

Conclusions

This study once again highlights what many researchers have been saying for some time: the need to abandon arguments based solely on the amount of time children spend in front of screens, and to focus the debate on qualitative questions, taking into account variables such as content, context, and connections (Livingstone, 2018). Messages to parents need to be improved, as parents try to enforce rules based on controlling the amount of time spent on screens, an area that is particularly difficult to restrict given the ubiquity of technology. In a digital environment in which hybrid content abounds, signaling of commercial content is a must but does not suffice: more research is needed to learn how to make everyone aware of the need to develop advertising literacy through which the use of critical thinking can be ensured. This becomes crucial at a time in which children are interacting with a screen that can be accessed anywhere, anytime, and in a very personal and personalized way, with whatever filters they may have been able to individually establish.
Utility of urgent colonoscopy in acute lower gastro-intestinal bleeding: a single-center experience

Background. The role of urgent colonoscopy in lower gastro-intestinal bleeding (LGIB) remains controversial. Over the last two decades, a number of studies have indicated that urgent colonoscopy may facilitate the identification and treatment of bleeding lesions; however, studies comparing this approach to elective colonoscopy for LGIB are limited. Aims. To determine the utility and assess the outcome of urgent colonoscopy as the initial test for patients admitted to the intensive care unit (ICU) with acute LGIB. Methods. Consecutive patients who underwent colonoscopy at our institution for the initial evaluation of acute LGIB between January 2011 and January 2012 were analysed retrospectively. Patients were grouped into urgent vs. elective colonoscopy, depending on the timing of colonoscopy after admission to the ICU. Urgent colonoscopy was defined as being performed within 24 hours of admission, and those performed later than 24 hours were considered elective. Outcomes included length of hospital stay, early re-bleeding rates, and the need for additional diagnostic or therapeutic interventions. Multivariable logistic regression analysis was performed to identify factors associated with increased transfusion requirements. Results. Fifty-seven patients underwent colonoscopy for the evaluation of suspected LGIB, 24 of which were urgent. There was no significant difference in patient demographics, co-morbidities, or medications between the two groups. Patients who underwent urgent colonoscopy were more likely to present with hemodynamic instability (P = 0.019) and require blood transfusions (P = 0.003). No significant differences in length of hospital stay, re-bleeding rates, or the need for additional diagnostic or therapeutic interventions were found. Patients requiring blood transfusions (n = 27) were more likely to be female (P = 0.016) and diabetic (P = 0.015). Fourteen patients re-bled at a median of 2 days after index colonoscopy. Those with hemodynamic instability were more likely to re-bleed [HR 3.8 (CI 1.06–13.7)], undergo angiography [HR 9.8 (CI 1.8–54.1)], require surgery [HR 13.5 (CI 3.2–56.5)], and had an increased length of hospital stay [HR 1.1 (1.05–1.2)]. Conclusion. The use of urgent colonoscopy as an initial approach to investigate acute LGIB did not result in significant differences in length of ICU stay, re-bleeding rates, the need for additional diagnostic or therapeutic interventions, or 30-day mortality compared with elective colonoscopy. In a pre-specified subgroup analysis, patients with hemodynamic instability were more likely to re-bleed after index colonoscopy and to require additional interventions (angiography or surgery), and they had an increased length of hospital stay.

INTRODUCTION

Lower gastro-intestinal bleeding (LGIB) is a common condition, requiring hospitalization in 21 per 100,000 people [1]. The incidence of LGIB rises steadily with age and, in the elderly, may surpass that of upper gastro-intestinal bleeding (UGIB) [2]. Colonic diverticular bleeding is the most common source, making up 30-50% of cases reported in the literature [1]. Other lesions (colonic angiodysplasia, rectal or colonic ulcers, colitis, neoplasia, or small intestinal lesions) account for the remaining identifiable causes of LGIB.
Endoscopy is the standard of care in the management of UGIB, and urgent esophago-gastroduodenoscopy (EGD) within 12-24 hours of admission has been shown to provide valuable prognostic information, to facilitate the treatment of high-risk lesions, and to improve patient outcomes and resource utilization. In contrast, urgent endoscopy has not been similarly applied to LGIB, and the role of urgent colonoscopy remains controversial. Traditionally, colonoscopy in LGIB is performed electively, owing to the need for bowel preparation; a major limitation of colonoscopy in this setting, however, has been the low detection rate of bleeding lesions, which precludes endoscopic hemostasis. A number of studies have indicated that urgent colonoscopy (defined as colonoscopy performed within 12-24 hours of admission) is safe and may facilitate the identification and treatment of bleeding lesions [3,4]; however, studies comparing this approach with elective colonoscopy or with other interventions for LGIB are limited. The aim of this study was to determine the utility, and assess the outcome, of urgent colonoscopy in those with acute lower GI bleeding at our tertiary care center.

MATERIALS AND METHODS

Patients

A chart review was performed of a prospectively maintained database of patients who underwent colonoscopy for the initial evaluation of acute lower GI bleeding from January 2011 to January 2012. All patients were required to give informed consent prior to their procedure. The study was approved by the Institutional Review Board (IRB) at our institution.

Demographics and clinical variables

Demographic, clinical, and procedural data were collected, including colonoscopy findings and complications. Patients were grouped into urgent vs. elective groups, depending on the timing of colonoscopy after admission to the intensive care unit (ICU). Urgent colonoscopy was defined as colonoscopy performed within 24 hours of admission, and those performed later than 24 hours were considered elective. Colonoscopies were performed by a gastroenterology fellow, under the supervision of the attending gastroenterologist, after a standard polyethylene glycol preparation administered either orally or via naso-gastric tube. The quality of the preparation was graded as 'excellent' if there was no stool, blood, or clots covering the mucosa, 'fair' if less than 25% of the mucosa was obscured by stool, blood, or clots, and 'poor' if there was formed stool or if greater than 25% of the mucosa was obscured by stool or blood.

Outcome measurement

The primary end-point was the re-bleeding rate. Endoscopic therapy was considered successful if bleeding had ceased at the end of the procedure. Re-bleeding was defined as bleeding occurring after colonoscopy and clinical cessation of the index bleeding event during the hospitalization. Secondary end-points were blood transfusion requirements, duration of ICU stay, need for angiography or surgery, and 30-day mortality.

Statistical analysis

Descriptive statistics were computed for all factors. These included means and standard deviations for continuous factors, and frequencies and percentages for categorical variables. A univariable analysis was performed to compare urgent with elective procedures. Student's t-tests were used to assess differences in continuous variables, and Pearson's χ² tests were used for categorical factors. A multivariable logistic regression analysis was performed to assess factors associated with red blood cell transfusion requirements.
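As a rough sketch of this analysis plan, together with the bootstrap stepwise selection described in the next paragraph, the Python below fits the logistic model with statsmodels on invented, randomly generated data. The column names, the crude p-value-driven forward selection (a stand-in for SAS's automated stepwise procedure), and all numbers are assumptions for illustration, not the study's variables or code; the time-to-re-bleeding analysis described below would analogously censor follow-up at 30 days.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Invented patient-level data; the outcome is the need for transfusion.
n = 57
df = pd.DataFrame({
    "transfused":   rng.integers(0, 2, n),
    "urgent_scope": rng.integers(0, 2, n),   # timing, forced into the model
    "age":          rng.normal(68, 12, n),
    "female":       rng.integers(0, 2, n),
    "diabetes":     rng.integers(0, 2, n),
    "unstable":     rng.integers(0, 2, n),
})
candidates = ["age", "female", "diabetes", "unstable"]

def forward_select(data, alpha=0.05):
    # Crude p-value-driven forward selection that always keeps the forced
    # covariate (timing of colonoscopy), as specified in the text.
    kept, pool = ["urgent_scope"], list(candidates)
    while pool:
        pvals = {}
        for var in pool:
            X = sm.add_constant(data[kept + [var]])
            try:
                fit = sm.Logit(data["transfused"], X).fit(disp=0)
                pvals[var] = fit.pvalues[var]
            except Exception:   # e.g., separation on a small resample
                pvals[var] = 1.0
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        kept.append(best)
        pool.remove(best)
    return kept

# Inclusion frequency over bootstrap resamples (1000 in the study; fewer
# here to keep the sketch quick); keep factors selected in >= 20% of them.
B = 200
counts = {v: 0 for v in candidates}
for _ in range(B):
    boot = df.sample(frac=1.0, replace=True)
    for v in forward_select(boot):
        if v in counts:
            counts[v] += 1

final_model = ["urgent_scope"] + [v for v in candidates if counts[v] / B >= 0.20]
print(final_model)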
An automated stepwise variable selection method was performed on 1000 bootstrap samples to choose the final model. The timing of colonoscopy was forced into the model, and age, sex, hemodynamics, and medical co-morbidities were considered for inclusion. Factors with an inclusion rate of 20% or more were kept in the final model. Univariable Cox regression analysis was used to assess factors associated with re-bleeding. In addition, a time-to-re-bleeding analysis was performed. Follow-up time was defined as the time from index colonoscopy to re-bleed, or 30 days if no re-bleeding was observed. A P-value of less than 0.05 was considered statistically significant. All analyses were performed using SAS (version 9.2; SAS Institute, Cary, NC, USA).

RESULTS

Fifty-seven patients underwent colonoscopy for the evaluation of suspected lower GI bleeding. The mean age of the patients was 68.0 ± 12.5 years. The demographic and clinical characteristics are shown in Table 1. There were no statistically significant differences between the urgent and elective groups in terms of demographics, co-morbidities, or medication use. Overall, 10.5% of patients were using clopidogrel, and 54.4% were using NSAIDs or aspirin. On presentation, 53% of patients were hemodynamically unstable. Markers of hemodynamic instability were defined as GI blood loss anemia or shock, requiring packed red blood cell transfusions or vasopressor therapy, respectively. The majority of patients received a standard 4 L polyethylene glycol preparation in both the urgent (88%) and elective (73%) colonoscopy groups (P = 0.37). The remaining patients received either MoviPrep® or HalfLytely® and bisacodyl bowel preparation. The endoscopic view in the urgent and elective groups was rated as excellent in 13% and 9%, good in 44% and 30%, fair in 44% and 39%, and poor in 0% and 21% of patients, respectively (P = 0.076). Ulcers (post-polypectomy, colonic, or rectal) with either active bleeding or stigmata of recent bleeding were found in 14 patients (42.4%) in the urgent colonoscopy group. Bleeding post-polypectomy ulcers were located as follows: one in the left colon, two in the transverse colon, and two in the right colon. No differences were found between the urgent and elective colonoscopy groups in identification of the source of bleeding (83% vs. 79%; P = 0.067) or in the use of subsequent endoscopic therapies to achieve hemostasis (70% vs. 51%; P = 0.14). Visceral angiography was performed in three patients and revealed a putative site of bleeding; in all three, angiodysplasia of the colon was presumed to be the cause of bleeding. The bleeding lesions were located as follows: one in the right colon and two in the cecum. Two patients underwent emergent surgery after elective colonoscopy (one right hemi-colectomy and one sub-total colectomy) for ischemic colitis and fulminant active colitis, respectively. The sources of bleeding and subsequent therapeutic interventions in each group are shown in Table 2. Rates of re-bleeding did not appear to differ between the urgent and elective colonoscopy groups [five (21%) and nine (28%), respectively; P = 0.53]. The length of stay in the ICU was lower in the urgent colonoscopy group (2.0 days vs. 5.0 days), but the difference did not reach statistical significance (P = 0.056) (Table 3). The need for additional interventions, such as angiography or surgery, was no different between the two study groups.
Patients who underwent urgent colonoscopy received significantly more blood transfusions (P = 0.003) and were more likely to be hemodynamically unstable (P = 0.019). No patients died within 30 days of the index bleed in either group. In a pre-specified subgroup analysis, patients requiring blood transfusions (n = 27) were more likely to be female (P = 0.016) and diabetic (P = 0.015); however, multivariable analysis revealed that only those patients presenting with hemodynamic instability required a significantly increased number of transfusions.

DISCUSSION

The diagnostic evaluation method of choice in severe LGIB is generally considered to be colonoscopy [5]. Current guidelines recommend early colonoscopy, citing a diagnostic yield of 48-90%; however, the timing of colonoscopy in the various studies ranged from 12 to 48 h after presentation [5]. Whilst it is tempting to extrapolate the benefits of urgent endoscopy with hemostatic treatment in acute upper gastro-intestinal bleeding to LGIB [6,7], it should be emphasized that such extrapolation may not be applicable. In our study, we found no benefit of urgent colonoscopy in the primary endpoint (re-bleeding during hospitalization) or in secondary endpoints, including length of hospital stay, number of units of blood transfused, and number of subsequent interventions (i.e. angiotherapy or surgery). There was a trend towards shorter length of hospitalization, without further bleeding, in the urgent colonoscopy group, although this difference was not statistically significant (P = 0.056). A potential explanation could be the lack of a protocol guiding hospital discharge after colonoscopy and efficient triage of low-risk patients. Additionally, studies of UGIB have shown that, when triage is left to the discretion of the admitting team, patients with low-risk findings are frequently not discharged, despite the evidence that this is safe and efficient [8]. Early performance of colonoscopy appears to improve diagnostic yield. We found that 83.3% of colonoscopies were diagnostic in the urgent group, compared with 78.8% in the elective group. A study by Laine et al. reported similar findings, with a definitive bleeding source identified significantly more often in patients with LGIB undergoing urgent rather than elective colonoscopy [9]; however, diagnosis alone may have little impact on major clinical outcomes. Stigmata of hemorrhage need to be identified and treated in order to justify urgent interventions. In our study, 17 patients (70%) in the urgent colonoscopy group had endoscopic findings leading to a therapeutic intervention. The proportion of patients undergoing endoscopic therapy in our study is much higher than the results from a pooled analysis of case series, which reported that only 12% of patients underwent endoscopic therapy [10]. We speculate that the high incidence of ulcers (post-polypectomy, colonic, and rectal) with active bleeding or stigmata of recent bleeding in our population could explain this disparity. Additionally, the majority of individuals in our study reported NSAID use, which may conceivably have contributed to the high prevalence of ulcers. The mechanisms involved in the induction of GI bleeding by NSAIDs are incompletely understood, but may include inhibition of platelet activity, as well as concomitant use of warfarin, aspirin, or other anti-platelet agents.
In our study, 25% of patients experienced an episode of re-bleeding while in the hospital. This rate is higher than that in a pooled analysis of case series [11], which showed a re-bleeding rate for LGIB of 15%. This may in part be due to our patient cohort, which consisted of a predominantly older and largely male population, as described in the study by Rabeneck et al. [12]. Additionally, in a subgroup analysis, patients with hemodynamic instability were more likely to re-bleed, more likely to require additional interventions (angiography or surgery), and had an increased length of hospital stay. Our study probably reflects observations at most referral hospitals, as the vast majority of patients were admitted through the emergency department or transferred from the regular nursing floor to the intensive care unit (ICU); however, limitations of the present study include its retrospective design, which may have led to an underestimation of bleeding cases, in particular through the referral of patients with severe bleeding by emergency room physicians for initial radiographic evaluation rather than colonoscopy. The lack of standardization of care makes it difficult to draw conclusions regarding the effectiveness of the procedures. The small number of patients also limited the assessment of differences in outcomes and multiple predictors. This study was performed at a single tertiary care institution and may not be generally applicable to other hospital settings, particularly those without a designated GI bleeding team and on-call support staff in which urgent procedures are performed at the bedside. Finally, physician preferences were not assessed but clearly contribute to management decisions in LGIB. In summary, urgent colonoscopy as an initial approach to the investigation of acute LGIB did not result in significant differences in length of ICU stay, re-bleeding rates, the need for additional interventions or 30-day mortality, compared with elective colonoscopy. Individuals presenting with hemodynamic instability in the setting of LGIB were more likely to experience a re-bleed after index colonoscopy and may be best served by undergoing urgent angiography in conjunction with surgical consultation. Nevertheless, we believe that urgent colonoscopy could potentially play a role in the identification of a definite site of active hemorrhage or stigmata of recent hemorrhage, allowing the application of endoscopic therapy or guiding subsequent treatment. Further prospective studies are needed to directly compare the therapeutic efficacy and safety of urgent vs. elective colonoscopy for all sources of acute LGIB, so that an evidence-based, standardized approach to acute LGIB can be developed.
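A note on the modelling: the bootstrap-inclusion variable selection with a forced covariate described in the statistical methods can be sketched in code. The Python sketch below (using pandas and lifelines rather than the SAS programs actually used) is a minimal illustration under stated assumptions: the data frame and every column name (time, event, urgent, age, sex, unstable, charlson) are hypothetical placeholders, and simple backward elimination stands in for whatever automated stepwise procedure was run in SAS.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

FORCED = ["urgent"]                                  # timing of colonoscopy, always kept
CANDIDATES = ["age", "sex", "unstable", "charlson"]  # hypothetical covariates

def backward_select(data, alpha=0.05):
    # Backward elimination over CANDIDATES; FORCED covariates always stay in.
    kept = list(CANDIDATES)
    while kept:
        cph = CoxPHFitter()
        cph.fit(data[FORCED + kept + ["time", "event"]],
                duration_col="time", event_col="event")
        worst = cph.summary.loc[kept, "p"].idxmax()
        if cph.summary.loc[worst, "p"] <= alpha:
            break
        kept.remove(worst)
    return kept

def bootstrap_inclusion(df, n_boot=1000, threshold=0.20, seed=1):
    # Count how often each candidate survives selection across bootstrap resamples.
    rng = np.random.default_rng(seed)
    counts = {c: 0 for c in CANDIDATES}
    for _ in range(n_boot):
        boot = df.sample(n=len(df), replace=True,
                         random_state=int(rng.integers(2**32)))
        try:
            for c in backward_select(boot):
                counts[c] += 1
        except Exception:  # skip resamples where the fit fails to converge
            continue
    return [c for c, k in counts.items() if k / n_boot >= threshold]

# time  = days from index colonoscopy to re-bleed, censored at 30 days
# event = 1 if re-bleeding was observed within 30 days, else 0
# selected = bootstrap_inclusion(df)
# CoxPHFitter().fit(df[FORCED + selected + ["time", "event"]],
#                   duration_col="time", event_col="event")

The 20% inclusion threshold and the forced timing variable mirror the description in the Methods; everything else is an assumption made for the sake of a runnable example.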
2017-04-08T21:01:31.124Z
2014-06-23T00:00:00.000
{ "year": 2014, "sha1": "3bd746fd15745897703d5bf561c85432a01298c9", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/gastro/article-pdf/2/4/300/9523957/gou030.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3bd746fd15745897703d5bf561c85432a01298c9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
139132966
pes2o/s2orc
v3-fos-license
Laser Ablation of Xanthelasma Palpebrarum by Using the Pinhole Method When the xanthelasma is large, laser ablation using the pinhole method may be an adequate approach to a large xanthelasma palpebrarum lesion. Keywords: Lasers; Engineering; Basic Research.

INTRODUCTION

Xanthelasma palpebrarum is a cutaneous xanthoma arising around the eyelids. It is the most common form of xanthoma and is histologically composed of foamy histiocytes. It can be accompanied by hyperlipidemia, although not all cases are associated with hyperlipidemia. 1 Surgical excision and laser treatment are the main treatment modalities, and chemoablation can also be performed. Although surgical excision is a certain method for complete removal in a single procedure, it is difficult to apply if the lesion is large or difficult to remove because of its location. Furthermore, both laser and chemoablation treatments have a long healing period, and can cause scar contracture in the case of large xanthomas. At our center, we have treated xanthelasmas with laser ablation using the pinhole method since 2013. In this article, we reviewed prospectively enrolled data and evaluated the efficacy of the pinhole method.

MATERIALS AND METHODS

From June 2014 to December 2017, 13 patients were treated with pinhole laser ablation. We used a carbon dioxide (CO2) laser or an erbium:yttrium aluminum garnet (Er:YAG) laser. The patients were required to visit the clinic every month, during which they were photographed for evaluation. If there were remnant or recurring xanthomas, retreatment was performed at 1-month intervals until the lesions were successfully treated. Patients were treated with either the CO2 or the Er:YAG laser with a beam size of 1.5-2 mm (interbeam distance, 1-2 mm). Ablation was performed to a depth at which the orbicularis oculi muscle was visible or the wound bled (Figs. 1 and 2). The treatment was repeated until the lesion was successfully removed, and the results were assessed 1 month after the last treatment. Three independent plastic surgeons who were not involved in this study assessed the results in terms of grade of improvement and pigmentation. [Figure 1 legend: The size of the beam (blue) was 1.5-2 mm and the interbeam distance was 1-2 mm. Ablation was done until the orbicularis oculi muscle was visible or when bleeding occurred.] The improvement was graded as follows: 1 (minimal improvement), 2 (moderate improvement), 3 (marked improvement), and 4 (near-total improvement). The results of color matching were classified as hypopigmentation, normal, or hyperpigmentation (Table 1). In patients with multiple xanthomas, each individual lesion was assessed and counted separately.

RESULTS

A total of 13 patients (7 women and 6 men) with 23 lesions were enrolled and assessed. The mean patient age was 50.1 ± 8.72 years. The mean follow-up period was 8.3 months, and the mean number of treatments was 3 (Table 2). Of the 13 patients, 8 (61.5%) had bilateral xanthomas. The mean grade of improvement was 3.54, indicating that the pinhole method produced marked or near-total improvement (Table 3). However, there were 3 cases of hypopigmentation, which tended to occur when the lesion was larger and broader.

DISCUSSION

Xanthelasma palpebrarum is the most common type of xanthoma and is commonly observed in middle-aged female patients. 2 It histologically consists of lipid-laden foamy histiocytes, and up to 50% of patients show elevated lipid profiles. In their cross-sectional study, Pandhi et al. observed that patients with xanthelasma palpebrarum have a high risk of atherosclerotic disease.
3 Therefore, it is important to suspect and identify xanthelasma palpebrarum when patients visit for evaluation. 1 Because of the location of the xanthelasma, many patients visit the clinic seeking its removal. Various treatment modalities are available, including surgical excision, chemical peeling, cryotherapy, and laser therapy (CO2 or Er:YAG). [4][5][6][7] For small xanthelasmas, surgical excision is a simple and definitive treatment. However, if the lesion is large, it is impossible to close the defect directly without extensive scarring or distortion such as an ectropion; le Roux removed large xanthelasmas by including them in the skin excised during blepharoplasty. 8 Elabjer et al. excised large xanthelasmas and covered the defects with ipsilateral and/or contralateral eyelid skin grafts harvested using blepharoplasty. 4 This method, however, is only feasible when a blepharoplasty was originally scheduled or when the patient has redundant skin on the eyelids. A method of excising the lesion and allowing the wound to heal secondarily has also been reported. 9 In addition to surgery, cryotherapy, chemical cauterization, and laser ablation are simple methods for both patients and physicians, and have shown satisfactory results. 7,10,11 However, these methods are limited to lesions of small or medium size: healing by secondary intention of large xanthelasmas may result in scar contracture and delayed healing. The pinhole method with laser ablation can overcome these problems. Ahn et al. reported a case treated using the pinhole method in 2013. 12 The patient was successfully treated without scarring or distortion. In our study, we prospectively enrolled 13 patients and followed them for a relatively long period. Healing was faster because the small holes made by the pinhole method were each surrounded by intact epithelium (Fig. 2). Furthermore, because small defects rarely cause scar contracture, scarring or distortion is rare even after several treatments. Unlike in the report by Ahn et al., 12 there were 3 cases of hypopigmentation in our study. Hypopigmentation was more frequent when the lesion was larger and the skin color darker. As shown in Figs. 3 and 6, large and broad xanthelasmas tend to become dyschromic. Therefore, physicians should inform the patient in advance about this possibility. In conclusion, we propose the pinhole method with a CO2 or Er:YAG laser as a safe and effective treatment modality in patients with xanthelasma palpebrarum, especially when the lesion is large. As CO2 lasers are widely available, the pinhole method can be easily performed.
2019-04-30T13:07:48.581Z
2018-06-30T00:00:00.000
{ "year": 2018, "sha1": "13da4d1b22a45b093d001fe36e9580c326d82e9b", "oa_license": "CCBYNC", "oa_url": "http://www.jkslms.or.kr/journal/download_pdf.php?doi=10.25289/ML.2018.7.1.21", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "9c4999db8266de9a8776700fd20fb8caf737db5d", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
259184396
pes2o/s2orc
v3-fos-license
Cheese consumption and multiple health outcomes: an umbrella review and updated meta-analysis of prospective studies This umbrella review aims to provide a systematic and comprehensive overview of current evidence from prospective studies on the diverse health effects of cheese consumption. We searched PubMed, Embase, and Cochrane Library to identify meta-analyses/pooled analyses of prospective studies examining the association between cheese consumption and major health outcomes from inception to August 31, 2022. We reanalyzed and updated previous meta-analyses and performed de novo meta-analyses with recently published prospective studies, where appropriate. We calculated the summary effect size, 95% prediction intervals, between-study heterogeneity, small-study effects, and excess significance bias for each health outcome. We identified 54 eligible articles of meta-analyses/pooled analyses. After adding newly published original articles, we performed 35 updated meta-analyses and 4 de novo meta-analyses. Together with 8 previous meta-analyses, we finally included 47 unique health outcomes. Cheese consumption was inversely associated with all-cause mortality (highest compared with lowest category: RR = 0.95; 95% CI: 0.92, 0.99), cardiovascular mortality (RR = 0.93; 95% CI: 0.88, 0.99), incident cardiovascular disease (CVD) (RR = 0.92; 95% CI: 0.89, 0.96), coronary heart disease (CHD) (RR = 0.92; 95% CI: 0.86, 0.98), stroke (RR = 0.93; 95% CI: 0.89, 0.98), estrogen receptor-negative (ER−) breast cancer (RR = 0.89; 95% CI: 0.82, 0.97), type 2 diabetes (RR = 0.93; 95% CI: 0.88, 0.98), total fracture (RR = 0.90; 95% CI: 0.86, 0.95), and dementia (RR = 0.81; 95% CI: 0.66, 0.99). Null associations were found for other outcomes. According to the NutriGrade scoring system, moderate quality of evidence was observed for inverse associations of cheese consumption with all-cause and cardiovascular mortality, incident CVD, CHD, and stroke, and for null associations with cancer mortality, incident hypertension, and prostate cancer. Our findings suggest that cheese consumption has neutral to moderate benefits for human health.
Introduction

Cheese is generally a nutrient-dense and well-tolerated fermented dairy product consumed worldwide. However, the health effects of cheese consumption remain a matter of controversy. On one hand, cheese is a rich source of high-quality protein (mainly casein), lipids, minerals (e.g., calcium, phosphorus, and magnesium), vitamins (e.g., A, K2, B2, B12, and folate), and probiotics and bioactive molecules (e.g., bioactive peptides, lactoferrin, short-chain fatty acids, and milk fat globule membrane), which may provide various health benefits. On the other hand, cheese contains relatively high contents of saturated fat and salt, which are perceived as unfavorable dietary components for cardiovascular health [1,2]. Currently, most dietary guidelines recommend consuming dairy products as part of a healthy diet while avoiding intake of full-fat and high-sodium versions [3][4][5][6]. Of note, this recommendation is primarily based on extrapolated benefits and harms of single nutrients contained in dairy. However, whole dairy foods are not a simple collection of isolated nutrients but have complex physical and nutritional structures (i.e., the dairy matrix), which affect digestibility and nutrient bioavailability, thereby modifying the overall effects of dairy consumption on health and disease [7][8][9]. In addition, dairy products are a heterogeneous group of foods regarding the dairy matrix due to different processing methods [8]. Because various types of dairy products appear to have distinct influences on specific health outcomes [10], merging them into 1 group (i.e., total dairy consumption) may blur the true association. Thus, a separate assessment of the health effects of cheese consumption is required.

Umbrella reviews can provide a comprehensive overview of evidence from existing meta-analyses on a given topic, with the unique strengths of identifying the uncertainties, biases, and knowledge gaps of the evidence [11]. Many meta-analyses on the association between cheese consumption and a range of health end points, such as all-cause and cause-specific mortality, cardiovascular diseases (CVD), cancer, metabolic diseases, bone fracture, and other diseases, have been published [12][13][14][15][16][17]. An extensive summary of the breadth and validity of these associations with diverse health outcomes will help elucidate the role of cheese consumption in human health. Therefore, we conducted an umbrella review to synthesize the available evidence from meta-analyses of prospective studies to examine the various health impacts of cheese consumption. Furthermore, we contextualized the magnitude, direction, and significance of the identified associations, evaluated the risk of potential biases, and assessed the credibility of the evidence.

Methods

The present umbrella review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [18]. The protocol of this study was registered in PROSPERO (CRD42022331328).
Literature search

We systematically searched the PubMed, Embase, and Cochrane Library databases to identify existing meta-analyses (including pooled analyses) of prospective studies investigating the association between cheese consumption and any health outcome from inception to August 31, 2022. The search terms were as follows: (cheese) AND ("meta analysis" OR metaanalysis OR "meta analyzed" OR meta-analyzed OR "pooled analysis" OR "systematic review"). We also extensively searched the 3 databases for recently published original prospective studies to update previous meta-analyses or derive de novo meta-analyses. Predefined search strategies for meta-analyses and primary studies are presented in Supplemental Tables 1 and 2. Two investigators (XD, MZ) independently performed a 3-step parallel screening of titles, abstracts, and full texts for all identified studies according to the inclusion and exclusion criteria. Any discrepancies were discussed and resolved by a third investigator (ZH).

Eligibility criteria

Meta-analyses of population-based prospective studies (i.e., prospective cohort studies, case-cohort studies, nested case-control studies, and randomized controlled trials) exploring the association between cheese consumption (primary or secondary exposure of interest) and major health outcomes were included in the umbrella review. Original prospective studies eligible for updated or de novo meta-analyses were also included. Conference abstracts, interviews, letters, and narrative reviews were excluded. Meta-analyses or original studies without full text or an effect size [e.g., risk ratio (RR), odds ratio (OR), or hazard ratio (HR)], or not written in English, were also excluded. Studies with changes in cheese consumption rather than absolute intake as the exposure, using substitution analysis, or using surrogate end points (e.g., blood lipids, blood pressure, and body weight) as outcomes were removed. If more than 1 article reported the results for an identical outcome from the same study population (or cohort), only the one with the largest sample size, the longest follow-up, or the most complete information was included.

Data extraction

From each included meta-analysis, the following information was extracted and verified by three investigators (XD, MZ, ZH): the first author's name, publication year, outcome of interest, study population (general or disease status), study design of the primary studies, type of comparison (highest compared with lowest category of cheese consumption or each increment in cheese consumption), number of included studies, number of participants and cases, and the reported summary risk estimates (RR, OR, or HR) with corresponding 95% CIs. For meta-analyses on more than 1 health outcome, each outcome was recorded separately. For original studies, the extracted data covered information on the first author's name, publication year, study design, study population characteristics, geographic location, number of participants and cases, length of follow-up (cohort study), dietary assessment method (e.g., food frequency questionnaire and 3-d 24-h dietary records), cheese type, categorization and amount of cheese consumption, adjustment factors, and effect size with 95% CIs.
Evaluation of methodological quality

The AMSTAR-2 (A Measurement Tool to Assess Systematic Reviews) tool [19] was used to evaluate the methodological quality of the included published meta-analyses and systematic reviews. It includes 16 individual items, and 7 of them are identified as critical. Systematic reviews with no or 1 noncritical weakness are rated as high confidence; those with more than 1 noncritical weakness are rated as moderate confidence; those with 1 critical flaw with or without noncritical weaknesses are rated as low confidence; and those with more than 1 critical flaw with or without noncritical weaknesses are rated as critically low confidence. Two investigators (XD, MZ) implemented the evaluation independently, with disagreements reconciled by discussion and consensus.

Statistical analysis

We reanalyzed previous meta-analyses to obtain the necessary information for the subsequent assessment of the credibility of evidence. If the existing meta-analysis included cross-sectional, retrospective, and prospective studies, we only kept the results from prospective studies in our meta-analysis. Furthermore, we incorporated newly identified original studies into previous meta-analyses to update or derive de novo meta-analyses, where appropriate. For each outcome, we recalculated the summary risk estimates and their corresponding 95% CIs for the highest compared with the lowest category of cheese consumption and/or per 30-g/d increment in cheese consumption by using the random-effects model by DerSimonian and Laird [21]. When results from the same cohort were reported separately for different cheese types (e.g., hard cheese and cottage cheese, or low-fat cheese and high-fat cheese, instead of total cheese) and disease subtypes [e.g., coronary heart disease (CHD) and stroke rather than total CVD], we used a fixed-effects model to generate an overall estimate before pooling with other studies.

Heterogeneity across studies was investigated using the I² statistic. We also performed subgroup analyses according to adjustment for total energy intake in the models (adjusted and unadjusted) and geographical location (Europe, North America, and Oceania; Asia and other regions; multiregion) to explore potential sources of heterogeneity. We computed 95% prediction intervals (95% PIs) to predict the range in which the effect size of a future original study will lie, after considering both the uncertainty in the mean effect and the heterogeneity from the random-effects model [22,23]. We assessed potential small-study effects by Egger's test [24] if ≥3 studies were available. A P value of <0.10 was interpreted as the presence of small-study effects. The excess statistical significance test was performed to evaluate whether the observed number of nominally statistically significant studies was larger than their expected number using the χ² test [25]. A P value of <0.10 was considered statistically significant.
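To make the pooling step concrete, the short Python sketch below (numpy/scipy) implements DerSimonian-Laird random-effects pooling of risk ratios on the log scale, together with the I² statistic and a 95% prediction interval based on a t distribution with k − 2 degrees of freedom. It is a simplified stand-in for the R packages used in the actual analysis, and the input numbers in the example are invented for illustration only.

import numpy as np
from scipy import stats

def dl_random_effects(rr, ci_lo, ci_hi):
    # Pool study-level risk ratios reported with 95% CIs (DerSimonian-Laird).
    y = np.log(np.asarray(rr, dtype=float))            # per-study log RR
    se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96)  # SE recovered from the CI
    v, k = se**2, len(y)
    w = 1.0 / v                                        # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed)**2)                   # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                 # between-study variance
    w_re = 1.0 / (v + tau2)                            # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    se_mu = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, 100.0 * (q - (k - 1)) / q) if q > 0 else 0.0
    ci = np.exp(mu + np.array([-1.96, 1.96]) * se_mu)
    t = stats.t.ppf(0.975, k - 2)                      # prediction interval needs k >= 3
    pi = np.exp(mu + np.array([-t, t]) * np.sqrt(tau2 + se_mu**2))
    return {"RR": np.exp(mu), "CI": ci, "PI": pi, "I2": i2, "tau2": tau2}

# Purely illustrative inputs, not data from any study in this review:
print(dl_random_effects([0.88, 0.97, 1.05, 0.91],
                        [0.78, 0.90, 0.92, 0.80],
                        [0.99, 1.05, 1.20, 1.03]))

The prediction interval follows the formulation cited above [22,23]; Egger's and excess-significance tests are omitted from the sketch for brevity.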
We further examined the nonlinear dose-response association among studies that provided risk estimates with ≥3 exposure categories using a 2-stage restricted cubic splines (3 knots at the 25th, 50th, and 75th percentiles) analysis [26,27]. The P value for nonlinearity was calculated by testing whether the coefficient of the second spline was equal to 0 [27]. We used the median/mean of each consumption category if available, or the midpoint between the lower and upper bounds of each intake category, to represent the intake levels. We assumed zero as the lower bound for the open-ended lowest category and multiplied the lower bound value by 1.2 as the upper bound for the open-ended highest category. All statistical analyses were conducted using the "metafor," "meta," "dosresmeta," and "forestplot" packages in R software version 4.1.0 (The R Foundation).

Subgroup analyses

Of the 184 original studies included in the meta-analyses, 149 (81.0%) adjusted for total energy intake in the models, and 152 (82.6%) were conducted in North America, Europe, and Oceania. Among the 47 major outcomes in our study, 4 outcomes were based solely on studies without energy adjustment, 18 outcomes were based only on studies with energy adjustment, and 30 outcomes were based entirely on studies conducted in Europe, North America, and Oceania (Supplemental Tables 13-16).

Subgroup analyses indicated that the association between cheese consumption and most health outcomes remained consistent regardless of adjustment for total energy intake or geographic location (Supplemental Tables 13-16). However, heterogeneity existed between subgroups by total energy adjustment for overall and breast cancer incidence: higher cheese consumption was linked with an increased risk of overall cancer (RR = 1.14; 95% CI: 1.01, 1.29; P = 0.0305; I² = 0%; P-subgroup = 0.02) (Supplemental Table 14) and was marginally associated with an elevated risk of breast cancer (RR = 1.43; 95% CI: 0.99, 2.06; P = 0.0557; P-subgroup = 0.04) (Supplemental Table 14) in studies without adjustment for total energy intake, whereas null associations were found in studies with adjustment for total energy intake. Although heterogeneity was observed in subgroup analyses by energy adjustment for metabolic syndrome (P-subgroup = 0.03), the null association was consistent between subgroups. Meanwhile, an inverse association was detected for CHD mortality (RR = 0.67; 95% CI: 0.51, 0.89; P-subgroup = 0.03) (Supplemental Table 15) in studies conducted in Asia and other regions but not in studies conducted in North America, Europe, and Oceania.

Evidence credibility

The credibility of the identified associations with cheese consumption is summarized in Supplemental Tables 10 and 11, and the detailed NutriGrade scores for each meta-analysis are presented in Supplemental Table 17. No health outcome met the standards for high meta-evidence. Eight health outcomes (17%) presented moderate meta-evidence: death from any cause, cancer, and CVD, and incidence of overall CVD, CHD, stroke, hypertension, and prostate cancer. Twenty-two health outcomes (47%) presented low meta-evidence: site-specific cancer mortality (colorectum, colon, rectum, lung, and stomach), CHD mortality, overall and site-specific cancer incidence (colorectum, total and distal colon, rectum, total, ER+ and ER− breast, bladder, and pancreas), T2D, overweight/obesity, total and hip fractures, fall, and dementia. The remaining health outcomes presented very low meta-evidence.
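Returning to the dose-response methods at the start of this subsection: the exposure-assignment rules (category midpoint when no median/mean is reported, a zero lower bound for the open-ended lowest category, and 1.2 times the lower bound as the upper bound of the open-ended highest category) amount to a few lines of arithmetic. The helper below is a minimal sketch; the category cut-points in the example are invented.

def category_dose(lower, upper):
    # Dose assigned to an intake category when no median/mean is reported.
    if lower is None:          # open-ended lowest category: assume 0 g/d
        lower = 0.0
    if upper is None:          # open-ended highest category: 1.2 x lower bound
        upper = 1.2 * lower
    return (lower + upper) / 2.0

# Example: categories "<10", "10-25", "25-50" and ">50" g/d map to
# 5.0, 17.5, 37.5 and 55.0 g/d respectively.
doses = [category_dose(None, 10), category_dose(10, 25),
         category_dose(25, 50), category_dose(50, None)]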
Discussion

This umbrella review provides a broad overview of the existing evidence on the association between cheese consumption and 47 unique outcomes through 35 updated, 4 de novo, and 8 previous meta-analyses based on 184 prospective observational studies from 145 primary articles. Moderate quality of evidence showed that cheese consumption was associated with a reduced risk of all-cause mortality, CVD mortality, and incident CVD, CHD, and stroke, but was not related to the risk of cancer mortality, hypertension, or prostate cancer. Low quality of evidence was observed for inverse associations of cheese intake with the incidence of ER− breast cancer, T2D, total fracture, and dementia, and for null associations with site-specific cancer mortality (i.e., colorectum, colon, rectum, lung, and stomach), CHD mortality, and the incidence of overall and site-specific cancer and its subtypes (i.e., colorectum, total and distal colon, rectum, total and ER+ breast, bladder, and pancreas), overweight/obesity, hip fracture, and fall. The nonlinear dose-response analyses additionally suggested a U-shaped association between cheese consumption and the risk of all-cause mortality and cardiovascular mortality, and an L-shaped association with the risk of overall CVD, CHD, stroke, and total and hip fractures, with the optimal intake at ~40 g/d.

Although cheese is theorized to have detrimental effects on blood pressure and the blood lipid profile based on its high sodium and saturated fat contents, a moderate quality of evidence suggests that cheese consumption does not increase the risk of cardiovascular diseases and may even have protective associations with overall CVD, CHD, and stroke incidence and with cardiovascular and all-cause mortality in our updated meta-analyses. The inverse associations are in line with findings from most previous meta-analyses [13,36,37,39,40,42]. In terms of the nonlinear analysis, 1 meta-analysis in 2016 reported an L-shaped association, with the risk of stroke leveling off at 25 g/d of cheese intake [41]. Still, another meta-analysis in 2017 derived a U-shaped association with the lowest CVD risk at 40 g/d, which was consistent with the findings from our study [36]. Regarding cheese intake and hypertension, a null association with moderate quality of evidence was observed in our study, in line with previous studies [43][44][45]. In subgroup analyses, an inverse association between cheese consumption and CHD mortality was notable only in Asian populations but not in European and American populations, which may be attributable to differences in the amount and patterns of cheese consumption among regions [43].
Results from previous meta-analyses [31,57,59,60,216] and large prospective cohort studies [217,218] have raised the concern that high consumption of dairy products (particularly whole milk) increases the incidence and mortality of several cancers, for example, prostate, breast, ovarian, and liver cancers and lymphoma. By contrast, cheese consumption has been reported to be inversely related to the risk of colorectal cancer, breast cancer, and prostate cancer in earlier meta-analyses including both prospective studies and case-control studies [46,51,58]. However, our latest meta-analyses of prospective observational studies found null associations between cheese consumption and overall and site-specific cancer incidence and mortality, consistent with previous meta-analyses for overall cancer incidence and mortality [14,30,31] and colorectal cancer [32,53]. Of note, total energy intake is a crucial confounder in the association between cheese consumption and cancer risk. Failure to adjust for total energy intake in the analyses could lead to spurious conclusions, as in the finding that higher cheese consumption was associated with an elevated risk of overall and breast cancer when total energy intake was not controlled for. The quality of evidence was moderate for the null associations with total cancer mortality and incident prostate cancer, and low for the null associations with mortality from colorectal, colon, rectal, lung, and gastric cancers and the incidence of overall, colorectal, colon (total and distal), rectal, breast (total and ER+), bladder, and pancreatic cancers. In addition, low-quality evidence revealed an inverse association between cheese intake and ER− breast cancer incidence, which was driven by a protective association of cottage/ricotta cheese consumption, rather than hard cheese consumption, with ER− breast cancer risk in a pooled analysis of 21 cohort studies [62]. The findings for prostate cancer incidence also warrant confirmation in large-scale, long-term, prospective cohort studies, because our linear dose-response meta-analysis of 7 cohort studies suggested a borderline positive association between cheese consumption and prostate cancer risk.

We found a low quality of evidence for an inverse association between cheese consumption and T2D risk for the highest compared with the lowest intake, which is in accordance with previous meta-analyses [67,68]. Substitution analysis demonstrated that replacing red and processed meat (per 50 g/d) with cheese (per 30 g/d) was associated with a 10% decreased risk of T2D [219]. However, our linear and nonlinear dose-response analyses did not find significant associations, consistent with the most recent dose-response meta-analysis [220]. Inconsistently, a meta-analysis published in 2013 showed that each 50 g/d of cheese was associated with an 8% reduced risk of T2D, and that there was a marginal nonlinear association between cheese consumption and T2D risk, with a reduction at 50 g/d [15]. More studies are needed to clarify the discrepancy between categorized and dose-response analyses.
Dairy products are rich in calcium, magnesium, phosphorus, and protein, which are essential for good bone health [221,222]. Nevertheless, the role of dairy intake in preventing bone fractures remains debated [223]. Previous meta-analyses reported both inverse and null associations between cheese intake and the risk of hip fracture [73][74][75] and fracture at any site [73]. Our updated meta-analyses including only prospective studies supported a favorable association of cheese intake with total fracture risk for the highest compared with the lowest intake, and with hip fracture risk per 30-g/d increase in cheese consumption. Nonlinear dose-response analyses showed an L-shaped association between cheese consumption and total and hip fracture risk, leveling off at ~40 g/d. Given that the quality of evidence was low, further research is warranted to confirm these findings.

Additionally, low quality of evidence also showed that higher cheese intake was associated with a lower dementia risk in our de novo meta-analysis of 2 prospective cohort studies [197,198]. This beneficial association is supported by a previous randomized controlled crossover trial [224] and observational studies [225,226], suggesting that cheese consumption may improve cognitive function.

The protective association of cheese consumption with mortality, CVD, bone fracture, and dementia may be attributed to the abundance of nutrients, bioactive compounds, and probiotics in cheese. Dairy products, especially cheese, are a predominant dietary source of vitamin K2 in many regions [227,228]. Vitamin K2 can improve cardiovascular health by inhibiting and reversing vascular calcification [229,230], reduce age-related bone loss by promoting the γ-carboxylation of osteocalcin and increasing osteoprotegerin [230], and maintain neurocognitive functions by contributing to the biological activation of the proteins Gas6 and protein S and the synthesis of sphingolipids [231]. Probiotic bacteria in cheese may also interact with the gut microbiome [232], exerting various health-enhancing functions [233]. Additionally, the cheese matrix can mitigate the harmful effects of saturated fat and sodium [234][235][236][237]. Besides the components of cheese itself (i.e., protein or specific micronutrients), the observed inverse associations could also be owing to the fact that increased cheese intake may replace the consumption of other foods (e.g., processed/red meat and refined carbohydrates) that have been consistently associated with a higher risk of incidence of or mortality from chronic diseases [238][239][240], because studies adjusting for total energy intake hold calories constant, as in isocaloric intervention trials.
It is noteworthy that a borderline positive association between cheese intake and Parkinson disease risk was observed in a previous meta-analysis of 5 cohort studies, which accords with findings from the latest meta-analysis on total dairy and milk [241] and 1 recent prospective study on low-fat dairy foods (including cottage cheese and low-fat cheese) [242]. If causal, suggested mechanisms include the reduction of uric acid by dairy proteins and the inhibition by calcium and phosphate in dairy products of the formation of 1,25(OH)2D3 (1,25-dihydroxyvitamin D3 = calcitriol), because urate and 1,25(OH)2D3 may protect against Parkinson's disease [242,243]. However, the quality of the meta-evidence was very low, and inconsistent results were observed for cheese and other fermented dairy products in some prospective studies [244,245]. Accordingly, the findings should be interpreted with caution and validated by further studies.

Strengths and limitations

This umbrella review provides the most recent evidence from prospective observational studies on the association between cheese intake and a wide range of health outcomes. Different from traditional umbrella reviews focusing only on published meta-analyses, we thoroughly and systematically resynthesized the available evidence by incorporating newly identified prospective studies into the prospective studies included in previous meta-analyses. On one hand, we updated outdated meta-analyses to reflect up-to-date conclusions with more statistical power. On the other hand, we performed de novo meta-analyses for specific health outcomes without previous meta-analyses but with enough published original studies, in order to include as many potentially related health outcomes as possible. Furthermore, dose-response analyses were conducted to further reveal the linear and nonlinear associations between cheese intake and multiple health outcomes, thereby determining the optimal consumption level of cheese.

There are also some limitations in our research. Given that the original studies included in this review are all observational, some of their inherent limitations could not be excluded, such as residual confounding and reverse causality. Besides, the updated meta-analyses for some health outcomes are highly heterogeneous (I² ≥ 50%), probably due to the inclusion of original studies involving different populations. Also, caution should be taken when generalizing the conclusions to populations with different genetic backgrounds and dietary habits, because most primary studies included were conducted in Europe and North America. Moreover, different types of cheese vary considerably in dairy matrix and nutrient content, such as fat and sodium, and may deliver divergent health effects. However, the lack of information on cheese type deterred finer stratified analyses by cheese type. Finally, the limited number of prospective observational studies in the meta-analyses for cheese consumption and specific health outcomes (such as cancer at sites other than the colorectum, breast, and prostate; overweight/obesity; dementia; fall; and frailty) leads to insufficient statistical power and low credibility of evidence. Thus, further large-scale prospective studies are warranted to ascertain the association of cheese intake with these health outcomes.
Conclusions

Our results indicate that cheese consumption has neutral to moderate benefits for human health, particularly at intakes of ≥40 g/d, with a moderate quality of evidence for inverse associations with all-cause and CVD mortality and overall CVD, CHD, and stroke incidence. Null associations were observed with cancer mortality, hypertension, and prostate cancer incidence. Although the high saturated fat and sodium contents of some cheeses tend to be emphasized as a health concern in dietary guidelines, cheese also provides nutrients and bioactive compounds that potentially may confer some benefits. Environmental effects of cheese production should also be considered.

Figure 1. Flow diagram of the study search and selection process.
Figure 2. Association between cheese consumption (highest compared with lowest intake level) and all-cause and cause-specific mortality.
Figure 3. Association between cheese consumption (highest compared with lowest intake level) and disease incidence.
Figure 4. Association between cheese consumption (per 30-g/d intake level) and mortality and multiple disease incidence.
2023-06-18T06:17:07.209Z
2023-06-01T00:00:00.000
{ "year": 2023, "sha1": "b33874d4c259e586467062fb5e09d2052a209f5f", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.advnut.2023.06.007", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "b4d6d81b6f982c13f4b3f720ef78ad0e218f78d3", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
40905429
pes2o/s2orc
v3-fos-license
G protein βγ subunits inhibit TRPM3 ion channels in sensory neurons Transient receptor potential (TRP) ion channels in peripheral sensory neurons are functionally regulated by hydrolysis of the phosphoinositide PI(4,5)P2 and by changes in the level of protein kinase mediated phosphorylation following activation of various G protein coupled receptors. We now show that the activity of TRPM3 expressed in mouse dorsal root ganglion (DRG) neurons is inhibited by agonists of the Gi-coupled µ opioid, GABA-B and NPY receptors. These agonist effects are mediated by direct inhibition of TRPM3 by Gβγ subunits, rather than by a canonical cAMP mediated mechanism. The activity of TRPM3 in DRG neurons is also negatively modulated by tonic, constitutive GPCR activity, as TRPM3 responses can be potentiated by GPCR inverse agonists. GPCR regulation of TRPM3 is also seen in vivo, where Gi/o GPCR agonists inhibited and inverse agonists potentiated TRPM3 mediated nociceptive behavioural responses. DOI: http://dx.doi.org/10.7554/eLife.26138.001

Introduction

Proteins encoded by the TRPM3 gene form non-selective cation channels which are widely expressed in mammalian tissues. The discovery that TRPM3 can be activated by the endogenous neurosteroid pregnenolone sulphate (PS) has facilitated the study of this widely-expressed TRP channel, and PS has been utilised as a pharmacological tool for channel characterisation and as a probe for TRPM3 expression (Wagner et al., 2008). TRPM3 is expressed in peripheral sensory neurons where it acts as a heat sensor (Vriens et al., 2011). Activation of TRPM3 channels in vivo has been shown to evoke nociceptive behaviours, and mice without functional TRPM3 channels exhibit altered temperature preferences, compromised behavioural responses to noxious heat, and fail to develop the heat hyperalgesia associated with inflammation (Vriens et al., 2011). There have been relatively few studies of the mechanisms which regulate or sensitise TRPM3. Many TRP channels are regulated by signalling pathways associated with the activation of G-protein coupled receptors (GPCRs). For example, activation of both Gαs- and Gαq-coupled receptors can sensitise the heat sensitive nociceptor TRPV1 through protein kinase-dependent mechanisms (Bevan et al., 2014). Like other TRP channels, TRPM3 can be regulated by phosphoinositol 4,5-bisphosphate (PI(4,5)P2) and other phosphoinositides, as loss or hydrolysis of PI(4,5)P2 leads to a reduction in TRPM3 activity that can be restored by application of exogenous PI(4,5)P2 (Badheka et al., 2015; Tóth et al., 2015). These findings suggest that TRPM3 activity can be regulated downstream of activation of Gq-coupled GPCRs. A human TRPM3 variant with a short carboxyl terminus was found to be insensitive to stimulation of Gq-coupled muscarinic receptors or histamine H1 receptors (Grimm et al., 2003). However, another human splice variant, TRPM3a, was shown to be activated by muscarinic receptor stimulation (Lee et al., 2003), suggesting that individual genetic variants can be differentially regulated. Subunits of the G protein itself can directly interact with channel structures to modulate ion channel activity. Indeed, activation of Gαq-linked receptors inhibits TRPM8 via a direct action of the Gαq subunit on the TRPM8 channel (Zhang et al., 2012), and TRPM1 is inhibited by interactions with either Gα or Gβγ subunits (Shen et al., 2012; Xu et al., 2016).
In this study we have examined the effects of GPCR activation on the function of endogenously expressed TRPM3 channels in isolated DRG neurons from mice and investigated whether activation of Gi/o-coupled GPCRs can modulate TRPM3-mediated nociceptive responses in vivo. For these experiments we have studied three different receptors (opioid, GABA-B and neuropeptide Y) which are known to modulate sensory neuron activity (Levine and Taiwo, 1989; Schuler et al., 2001; Smith et al., 2007). We have also examined the underlying mechanism in CHO and HEK293 cells exogenously expressing TRPM3. Our results demonstrate that activation of these GPCRs inhibits TRPM3 activity in DRG neurons by a Gβγ-mediated mechanism and inhibits PS-evoked nociceptive responses in mice.

Morphine, Baclofen and PYY inhibit TRPM3-mediated PS-induced Ca2+ responses

Opioids remain one of the best known and most effective treatments for pain, and Gi/o-coupled GPCRs for opioid ligands are expressed on sensory neurons, where receptor activation inhibits voltage-gated calcium channels (VGCCs) (Stein et al., 2003). Therefore, to determine if activation of Gi/o GPCRs can modulate natively expressed TRPM3 channels, we first examined the effects of the prototypical opioid receptor agonist, morphine.

eLife digest

TRPM3 belongs to a family of channel proteins that allow sodium and calcium ions to enter cells by forming pores in cell membranes. TRPM3 is found on the cell membranes of nerve cells; when ions flow into the nerves through the TRPM3 pores it triggers an electrical impulse. TRPM3 is responsible for helping us to detect heat, and mice without this protein find it difficult to sense painfully hot temperatures. Mice lacking TRPM3 also respond to other kinds of pain differently. Normally, a mouse with an injured paw becomes more sensitive to warm and hot temperatures, but this does not happen in mice that do not have TRPM3. When activated, other proteins called G-protein coupled receptors (or GPCRs for short) can make some members of this family of channel proteins more or less likely to open their pore. This in turn increases or decreases the flow of ions through the pore, respectively. Yet it was not clear if GPCRs also affect TRPM3 channels on the membranes of nerve cells. Quallo et al. have now discovered that "switching on" different GPCR proteins in sensory nerve cells from mice greatly reduces the flow of calcium ions through TRPM3 channels. The experiments made use of two pain-killing drugs, namely morphine and baclofen, and a molecule called neuropeptide Y to activate different GPCRs. GPCRs interact with a group of small proteins called G-proteins that, when activated by the receptor, split into two subunits, known as the α subunit and the βγ subunit. Once detached, these subunits are free to act as messengers and interact with other proteins in the cell membrane. Quallo et al. found that TRPM3 is one of a small group of proteins that interact with the βγ subunits of the G-protein, which can explain how "switching on" GPCRs reduces the activity of TRPM3. Two independent studies by Dembla, Behrendt et al. and Badheka, Yudin et al. also report similar findings. There is currently a need to find more effective treatments for people suffering from long-term pain conditions, and it has become clear that TRPM3 channels are involved in sensing both pain and temperature. These new findings show that drugs already used in the treatment of pain can dramatically change how TRPM3 works.
These results might help scientists to find drugs that work in a similar way to dial down the activity of TRPM3 and to combat pain. Though first it will be important to confirm these new findings in human nerve cells.

We used two consecutive applications of a submaximal concentration of pregnenolone sulphate (PS, 20 µM) to investigate the effect of morphine on TRPM3-mediated [Ca2+]i responses in isolated DRG neurons. This concentration of PS typically activates about 30% of cells in DRG cultures. The first application of PS was used to identify TRPM3-expressing neurons and the second to assess the effects of pharmacological treatments. We refer to the response amplitude of the second PS challenge as R (relative % response). In control experiments, the second PS challenge evoked [Ca2+]i responses (R) that were 63 ± 2% of the first PS response amplitude (Figure 1a). Treatment with morphine (10 µM) for 2 min before and during the second PS challenge significantly reduced R to 12 ± 1% (p<0.001, Figure 1b,c). In the majority of PS-responsive neurons (55%, n = 115/209), application of morphine completely abolished PS-evoked [Ca2+]i responses (R < 5%). To confirm that the inhibitory effect of morphine is receptor mediated and not a direct effect of morphine on TRPM3, we examined the effect of morphine in the presence of the opioid receptor antagonist, naloxone. Naloxone (1 µM) inhibited the effect of morphine (10 µM) completely, since the relative amplitude (R) produced by the second PS challenge in the presence of morphine plus naloxone was 73 ± 3% (p>0.05, Figure 1d), very similar to untreated control neurons (R = 64 ± 3%). Thus morphine inhibits TRPM3 by activating an opioid receptor present on DRG neurons. The involvement of Gi/o proteins was investigated using pertussis toxin (PTX), which inhibits the coupling of Gi/o proteins to their cognate GPCRs by catalysing ADP-ribosylation of the Gαi/o subunits. Incubation with PTX (200 ng/ml, for ~2.5-18 hr) significantly reduced the inhibitory effect of morphine in a large proportion of the neurons (morphine: R = 16 ± 2%; morphine + PTX: R = 46 ± 2%, p<0.001, Figure 1d). These findings indicate that activation of a sensory-neuron-expressed opioid receptor and subsequent Gαi/o protein signalling is able to modulate the activity of TRPM3 channels. Opioid receptors, like many other types of GPCRs, may possess constitutive, agonist-independent activity (Rosenbaum et al., 2009). We tested whether such tonic receptor activity modulates the activity of endogenously expressed TRPM3 in DRG neurons, using two consecutive challenges of a low concentration of PS (5 µM) that evoked [Ca2+]i responses in only a small percentage of neurons. In experiments where cells were exposed to naloxone during the second PS challenge, the number of responding neurons increased significantly (p<0.001, Fisher's) from 3.6% (n = 36/999) to 9% (n = 90/999, Figure 1e). In contrast, the number of PS responders fell from 3.5% (n = 40/1153) to 1.8% (n = 21/1153) in control experiments. These findings indicate that constitutive activity of opioid receptors exerts a tonic inhibition of TRPM3 in DRG neurons. We used patch-clamp recordings to confirm that the observed opioid receptor mediated inhibition of PS-evoked [Ca2+]i responses is associated with a corresponding inhibition of TRPM3 currents in DRG neurons. TRPM3 currents show marked outward rectification (Wagner et al., 2008) and we therefore studied neurons at a holding potential of +40 mV.
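As a brief aside before the current recordings below: the responder-count comparison reported above can be reproduced with a Fisher's exact test, here in Python with scipy. The sketch simply treats the two challenges as independent proportions, mirroring how the comparison is reported.

from scipy.stats import fisher_exact

# Neurons responding to 5 uM PS: first challenge vs second challenge with naloxone
first_challenge = [36, 999 - 36]   # [responders, non-responders]
with_naloxone = [90, 999 - 90]
odds_ratio, p = fisher_exact([first_challenge, with_naloxone])
print(odds_ratio, p)  # p is far below 0.001, consistent with the reported value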
Application of PS (50 µM) rapidly evoked membrane currents in a subset of isolated DRG neurons (Figure 1f). Subsequent introduction of morphine (10 µM) in the continued presence of PS produced a rapid, near-complete and reversible inhibition of the PS-evoked current. We next examined the effect of morphine on PS-evoked currents in CHO cells co-expressing TRPM3 and the µ opioid receptor. PS evoked large currents that were reversibly inhibited by co-application of morphine (inhibition 89 ± 4%, n = 9). Examination of the current-voltage relationships before and during PS application and in the presence of morphine (Figure 1g) showed that the response to PS was greatly inhibited by morphine at all voltages studied. Morphine is a non-selective opioid receptor agonist, and DRG neurons express all three naloxone-sensitive opioid receptor subtypes (µ, δ and κ). We examined the effects of subtype-selective opioid receptor agonists on PS-evoked [Ca2+]i responses of DRG neurons to determine which receptor subtype is important for morphine-induced inhibition of TRPM3. Treatment with the selective κ opioid receptor agonist U50488 (20 nM) did not inhibit PS-induced responses and instead produced a modest but significant increase (U50488: R = 72 ± 3%; control: R = 57 ± 3%, p<0.05, Figure 2a). The δ opioid receptor agonist SB205607 (20 nM) was without effect (SB205607: R = 52 ± 3%; p>0.05, Figure 2a), whereas the µ opioid receptor agonist DAMGO (20 nM) significantly inhibited PS-evoked [Ca2+]i responses to an extent similar to that previously observed with morphine (DAMGO: R = 18 ± 2%, p<0.001, Figure 2a,b). These findings indicate that morphine predominantly inhibits TRPM3 by activating µ opioid receptors and suggest a high degree of co-expression of µ opioid receptors and TRPM3. These results are in keeping with earlier observations of µ (but not δ) opioid receptor expression in heat nociceptors (Scherrer et al., 2009). We next examined whether GPCR inhibition of TRPM3 was specific to opioid receptors or could be extended to other Gi/o-coupled receptors expressed on sensory neurons. The metabotropic Gi/o-coupled receptors for GABA, GABAB1 and GABAB2, are expressed by a high percentage (60-90%) of peripheral sensory neurons (Charles et al., 2001; Cuny et al., 2012; Engle et al., 2012).

[Figure 1 legend, panels c-h: statistics for responses to the second PS challenge in (a) and (b), ***p<0.001, Mann-Whitney U test (control, n = 174; morphine, n = 209); effects of morphine (10 µM, n = 323), morphine plus naloxone (1 µM, n = 110), and morphine after incubation with pertussis toxin (200 ng/ml for 2.5-18 hr, n = 253) on [Ca2+]i responses evoked by the second PS (20 µM) challenge (control, n = 188); traces of neuronal [Ca2+]i responses to two PS (5 µM) challenges with and without naloxone (1 µM), followed by a 50 µM PS challenge and high K+ (50 mM KCl), ***p<0.001 compared to control, †††p<0.001 compared to morphine (10 µM), Kruskal-Wallis; whole-cell recordings showing reversible inhibition by morphine of PS-evoked outward currents in DRG neurons (+40 mV) and in a CHO cell co-expressing TRPM3 and the µ opioid receptor (+60 mV); current-voltage relationships for another TRPM3/µ opioid receptor expressing CHO cell at time points 1-5 in panel g. DOI: 10.7554/eLife.26138.003]
Recently, activation of GABAB1 was shown to modulate the activity of TRPV1 channels (Hanack et al., 2015). Treatment of DRG neurons with the selective GABA-B agonist baclofen (100 µM) significantly reduced the amplitude of the second PS response to R = 12 ± 2%, compared to R = 58 ± 6% in control experiments (p<0.001, Figure 3a,b), and abolished (R < 5%) the PS-evoked [Ca2+]i responses in 67% (n = 130/194) of neurons. The Gi/o-coupled neuropeptide tyrosine (NPY) receptors Y1 and Y2 are each expressed on 15-20% of sensory neurons (Zhang et al., 1997; Brumovsky et al., 2005; Ji et al., 1994; Taylor et al., 2014). The Y1 receptor is predominantly expressed in small diameter neurons whereas the Y2 receptor is largely expressed in medium and large diameter neurons. We examined whether activation of NPY receptors by the agonist peptide YY (PYY) was able to modulate neuronal TRPM3 [Ca2+]i responses. Similarly to the effects of morphine and baclofen, application of 100 nM PYY reduced the PS-evoked [Ca2+]i responses, and in 57% (n = 123/217) of neurons abolished the evoked increase in [Ca2+]i (R < 5%). The relative amplitude of the second PS-evoked [Ca2+]i responses was reduced from 66 ± 3% in control experiments to 11 ± 1% in the presence of PYY (p<0.001). Like opioid receptors, Y2 receptors have been reported to possess constitutive activity (Chen et al., 2000), and we therefore examined whether tonic Y2 receptor activity modulates TRPM3. In experiments where we challenged neurons with two consecutive applications of a low concentration of PS (5 µM), treatment with the selective Y2 receptor antagonist BIIE 0246 (10 µM) significantly (p<0.05, Fisher's) increased the number of responding neurons from 1.1% (n = 5/450) to 4.2% (n = 19/450; Figure 3e). In contrast, the number of PS responders fell from 3.4% (n = 15/440) to 1.8% (n = 8/440) in control experiments with two applications of 5 µM PS. These findings indicate that constitutive activity of Y2 receptors can exert a tonic inhibition of TRPM3. Additional experiments demonstrated that not all Gi/o-coupled receptors are able to effectively modulate TRPM3 activity. L-AP4 (5 µM) activation of group III metabotropic glutamate receptors (mGluR4/6/7/8), which are expressed by DRG neurons (Carlton and Hargett, 2007; Govea et al., 2012), had no significant effect on PS-evoked [Ca2+]i responses (R = 69 ± 3%, n = 100) compared to control (R = 61 ± 7%, n = 20; p>0.05, Mann-Whitney U test). Cannabinoid CB1 receptors are expressed by a subset of DRG neurons (Agarwal et al., 2007; Veress et al., 2013). The CB1 receptor agonist WIN 55212-2 (1 µM) slightly but significantly reduced the amplitude of PS-evoked [Ca2+]i responses (R = 51 ± 3%; Figure 4a) compared to control (R = 62 ± 2%; p<0.001, Mann-Whitney U test). This finding suggested that application of WIN 55212-2 inhibited PS-evoked responses in some but not all DRG neurons. This was confirmed when DRG neurons were challenged with three applications of 20 µM PS, with 1 µM WIN 55212-2 present before and during the second application and 1 µM WIN 55212-2 plus 0.5 µM AM251 (a CB1 receptor antagonist) present during the third PS application (Figure 4b). This experimental protocol allowed for the detection of neurons whose PS-evoked responses were inhibited by WIN 55212-2 and restored when a CB1 antagonist (AM251) was co-applied with WIN 55212-2.
In many DRG neurons, WIN 55212-2 had no obvious inhibitory effect (Figure 4b, top), but a small percentage of neurons (5%, n = 8/156) showed an AM251-reversible WIN 55212-2 inhibition of the PS response (Figure 4b, bottom). In view of the minor effect of CB1 receptor activation on DRG TRPM3 PS responses, we repeated the experiments in TRPM3 CHO cells transiently transfected with the CB1 receptor. As not all cells expressed CB1 after transfection, we compared the population responses to two consecutive PS applications. PS responses to the two PS applications were relatively stable in control cells not transfected with CB1, whereas WIN 55212-2 inhibited the PS responses in a sub-population of CB1-transfected cells. Furthermore, the second PS response of CB1-transfected cells was augmented by AM251 (Figure 4c,d). These results suggest that CB1 receptors can regulate TRPM3 but that there is either little co-expression of TRPM3 and CB1 in DRG neurons or ineffective coupling. We next investigated whether morphine generally affected TRP channels in DRG neurons by examining its effect on TRPV1 responses to capsaicin. As capsaicin responses readily desensitize, we performed the experiments in the presence of cyclosporin (1 µM), which reduces TRPV1 desensitization (Docherty et al., 1996). DRG neurons were stimulated two or three times with 30 nM capsaicin with morphine present before and during the second capsaicin application (Figure 5a). Morphine had no significant inhibitory effect using this protocol (Figure 5b), in contrast to the marked inhibition of PS responses seen in the same experiments (Figure 5c).

Inhibition of TRPM3 is independent of cAMP and does not rely on Gαi proteins

Activation of Gαi subunits leads to inhibition of adenylate cyclase, the enzyme responsible for producing cAMP (cyclic 3',5'-adenosine monophosphate). Activation of protein kinase A (PKA) by cAMP and the resulting regulation of ion channel functions by PKA are well characterised. In order to examine whether opioid-mediated inhibition of TRPM3 is driven by reduced levels of cAMP, we investigated whether morphine could still exert its inhibitory effects in the presence of a membrane-permeable cAMP analogue, 8-bromo cAMP. 8-bromo cAMP was unable to compensate for the morphine-induced inhibition of PS responses. In experiments where 8-bromo cAMP (1 mM) was co-administered with morphine (10 µM), the response amplitude evoked by the second PS challenge was 11 ± 2% (Figure 6a,b), which was similar to the amplitude in the presence of morphine alone (13 ± 2%, p>0.05, Kruskal-Wallis). In addition to initiating second messenger signalling pathways, Gα subunits can have direct actions on ion channels. To test the involvement of Gαi subunits in opioid-mediated inhibition of TRPM3, we examined the effect of the selective Gαi inhibitor NF023 (Freissmuth et al., 1996) on morphine-induced inhibition of PS responses. Although NF023 has been reported to inhibit Gαi-mediated signalling when applied extracellularly to cells (Sarwar et al., 2015), this charged molecule probably has limited membrane permeability. We therefore examined electrophysiologically whether intracellularly applied NF023 would modify the effect of morphine on PS-evoked currents in TRPM3/µ opioid receptor expressing CHO cells (Figure 6c). Inclusion of a high concentration (100 µM) of NF023 in the patch pipette had no marked effect on the inhibitory actions of morphine (Figure 6d,e).
These findings indicate a Gαi- and cAMP-independent mechanism for opioid-induced inhibition of TRPM3.

Beta-gamma subunits of Gi proteins mediate inhibition of TRPM3

Activation of GPCRs also leads to the release of Gβγ subunits, which are effector molecules themselves and can bind directly to ion channels to modulate their function (Smrcka, 2008). We therefore examined the effect of morphine and baclofen on PS-evoked responses in the presence of a Gβγ inhibitor, gallein. Morphine-induced inhibition of PS-evoked [Ca2+]i responses was dramatically suppressed by gallein (20 µM, Figure 7a,b). The response amplitude evoked by the second PS challenge was 109 ± 5% in the presence of both morphine and gallein compared to just 10 ± 1% in the presence of morphine alone. In additional experiments with three PS challenges, a group of neurons was identified that showed reduced responses (<30% of first response) to PS when morphine (10 µM) was present but regained sensitivity (>50% of first response) when gallein (20 µM) was also present (Figure 7c,d). We next examined whether gallein also prevents baclofen-induced inhibition of PS-evoked [Ca2+]i responses. When both gallein (10 µM) and baclofen (100 µM) were present during the second PS application, the maximum response amplitude was significantly higher than when baclofen was present alone (gallein and baclofen: 38 ± 4%, baclofen: 12 ± 2%, p<0.001; Figure 7e,f). Similarly, the inhibitory effect of PYY (100 nM) on PS-evoked [Ca2+]i responses was attenuated in the presence of gallein (20 µM); the maximum response amplitude in the presence of both gallein and PYY was significantly higher than in the presence of PYY alone (gallein and PYY: 36 ± 3%, PYY: 21 ± 2%, p<0.001). Intriguingly, the scatter plots suggest that the effect of gallein on baclofen- and PYY-mediated inhibition only occurred in a subset of neurons, unlike the strong effect noted for morphine inhibition in almost all cells. Nevertheless, these results indicate that TRPM3 activity can be regulated by a Gβγ-dependent mechanism after activation of opioid, GABA-B and NPY receptors.

[Figure 3 caption (residue): (e) [Ca2+]i responses to two sequential 5 µM PS challenges, followed by a 50 µM PS challenge and depolarisation with high K+ (50 mM KCl); cells were exposed to 10 µM BIIE 0246 for 2 min before and during the second PS challenge. (f-h) Traces displaying DRG [Ca2+]i responses to sequential 20 µM PS challenges (black bars) in the presence and absence of morphine (10 µM), baclofen (100 µM) and PYY (100 nM), followed by depolarisation with high K+ (50 mM KCl); a group of cells were inhibited by some GPCR agonists but not others; F(340/380) indicates fura-2 emission ratio. (i) Venn diagram illustrating the pattern of inhibition of PS responses by morphine, baclofen and PYY, n = 210 PS- and KCl-responsive cells; responses in 31 cells were not inhibited by any of the GPCR agonists (second response amplitude >15% of first PS response amplitude). DOI: 10.7554/eLife.26138.005]

[Figure 7 caption (residue): Cells were perfused with 10 µM morphine 2 min before and during the second and third PS challenge, and with 20 µM gallein 7 min before and during the third PS challenge. The results shown are for morphine-sensitive cells (PS response amplitude in the presence of morphine <30% of the first response amplitude). ***p<0.001 compared to control, †††p<0.001 compared to morphine, Kruskal-Wallis.]
To determine whether TRPM3 can be inhibited directly by Gβγ subunits in the absence of GPCR stimulation and Gαi activation, we examined electrophysiologically the effect of intracellularly dialysing TRPM3-expressing HEK293 cells with Gβγ subunits (50 nM). Application of PS (50 µM) for 10 s every minute evoked large outward currents in TRPM3 HEK293 cells, without any detectable desensitization (Figure 8a,b). In contrast, intracellular dialysis with βγ subunits (50 nM) produced a significant and progressive inhibition of the PS-evoked current amplitude (Figure 8a,b). To confirm the inhibitory effects of Gβγ subunits on TRPM3, we applied Gβγ subunits to the intracellular face of inside-out membrane patches from TRPM3 HEK293 cells stimulated with another, specific TRPM3 agonist, CIM0216. Macroscopic CIM0216-evoked currents in these patches were greatly inhibited by 50 nM Gβγ subunits (Figure 8c,e) but not by administration of Gβγ subunits that had been heat denatured by boiling (Figure 8d,e). These results are consistent with direct inhibition of TRPM3 by Gβγ subunits.

Gi/o-coupled GPCRs modulate TRPM3-mediated nociceptive responses

We determined the effects of TRPM3 modulation in vivo by examining the nociceptive responses evoked by TRPM3 agonists in wild-type mice. Previous studies in TRPM3 knockout mice have shown that the behavioural responses to intraplantar injection of PS and CIM0216 are dependent on TRPM3 (35). The behavioural responses to either agonist alone were often mild. We therefore administered a combination of 5 nmole PS and 0.5 nmole CIM0216, as these compounds have been shown to act synergistically in vitro (Held et al., 2015). Intraplantar administration of the combined agonists evoked robust paw licking/flinching behaviour that was measured over a 2 min period. Prior intraplantar administration of morphine (130 nmole) essentially abolished the behavioural response evoked by PS/CIM0216 (Figure 9a), and intraplantar injections of either baclofen (240 nmole) or PYY (235 pmole) also significantly reduced the response (Figure 9b). Behavioural responses could be inhibited by morphine acting on, for example, voltage-gated potassium channels in the periphery. We therefore examined the effects of intraplantar morphine on the paw licking/flinching responses evoked by intraplantar injection of capsaicin. Morphine had no inhibitory effect on capsaicin-elicited behaviours (Figure 9e), in marked contrast to its effect on PS/CIM0216 responses. Naloxone and BIIE0246 act as inverse agonists at µ-opioid and NPY Y2 receptors (Elbrønd-Bek et al., 2015; Wang et al., 2001) and consequently inhibit constitutive GPCR activity as well as agonist-evoked responses. As these ligands augmented TRPM3 activity in DRG neurons in vitro, we examined whether they would increase the nociceptive effects of TRPM3 agonists. For these studies we used intraplantar injection of PS alone, which evoked relatively mild behavioural responses. Prior intraperitoneal administration of either 2.5 mg/kg naloxone (Figure 9d) or 3 mg/kg BIIE0246 (Figure 9e) resulted in significantly increased nociceptive responses to PS.

Discussion

TRPM3 is expressed in DRG neurons where it plays a role in the transduction of thermal (heat) stimuli in normal conditions and notably in the development of heat hypersensitivity in inflammatory conditions (Vriens et al., 2011).
Studies of TRPM3 have been facilitated by the discovery of PS and the synthetic compound CIM0216 as TRPM3 agonists (Wagner et al., 2008; Held et al., 2015) that can be used to probe the functions and characteristics of TRPM3, and we have utilised these compounds to investigate the regulation of TRPM3 channels by GPCR ligands. Like other TRP channels, TRPM3 activity is regulated by intracellular PI(4,5)P2 and other phosphoinositides (Badheka et al., 2015; Tóth et al., 2015), but otherwise little is known about the mechanisms that regulate the activity of this channel. Other sensory neuron TRP channels, notably TRPV1 and TRPA1, are regulated downstream of GPCR signalling by mechanisms that involve either Gq activation, PI(4,5)P2 hydrolysis and protein kinase C, or Gs activation leading to protein kinase A-mediated phosphorylation. These and other phosphorylation pathways are important regulatory mechanisms for TRP channels, particularly in inflammatory conditions (see Veldhuis et al., 2015 for review). Activation of the Gi-coupled µ-opioid receptor is a potent analgesic mechanism in sensory systems, and opioid agonists such as morphine can inhibit TRPV1 activity by reducing phosphorylation levels; however, this is evident only when TRPV1 is sensitized via the cAMP-PKA pathway (Endres-Becker et al., 2007; Vetter et al., 2006, 2008). This latter observation is consistent with our observation that morphine has little or no inhibitory effect on capsaicin-evoked Ca2+ responses when desensitization is blocked by cyclosporin. Our results demonstrate for the first time that agonists acting at several Gi-coupled GPCRs (µ-opioid, GABA-B and NPY receptors) exert an inhibitory effect on TRPM3 activation in DRG neurons. This inhibitory effect was PTX-sensitive, demonstrating a Gi/o protein involvement. The inhibition was not, however, mediated by the canonical Gα subunit/adenylate cyclase pathway, as the inhibitory effect was not abrogated either by application of the Gα subunit inhibitor NF023 or by providing a membrane-permeable cAMP analogue that is well known to activate PKA. Instead, our results indicate that the TRPM3 inhibition is mediated by Gβγ subunits. The Gαi inhibitor NF023, which inhibits Gαi subunit interactions with effector molecules, was without effect when dialysed into cells at 100 µM, which is higher than its IC50 value (~300-400 nM). However, Gα subunits bind to effector molecules with picomolar to nanomolar affinities, and the effectiveness of NF023 will depend on its binding affinity for Gαi relative to that of the effector molecules. We therefore cannot be certain that NF023 effectively inhibits Gαi signalling even at the concentration used. In contrast, we have clear evidence for a role for Gβγ subunits. The inhibitory effects of morphine on TRPM3 were reversed by gallein, which is generally considered to be a specific inhibitor of Gβγ signalling (Lin and Smrcka, 2011), although we cannot exclude a non-Gβγ 'off target' effect. Critically, we found that PS-evoked currents were inhibited by direct application of purified Gβγ subunits to either whole cells or excised inside-out membrane patches, without concomitant activation of Gαi or G-protein-coupled receptors. The effect of the Gβγ subunits was lost after heat inactivation, indicating that proteins in the sample were responsible for the inhibitory activity. Our findings are therefore consistent with activation of the µ-opioid receptor and an action of Gβγ subunits to inhibit TRPM3.
A direct interaction between the released Gβγ subunits and TRPM3 is likely, as found for some other ion channels (Elbrønd-Bek et al., 2015; Wang et al., 2001; Veldhuis et al., 2015). Gβγ subunits can, however, activate other molecules such as phospholipase C (Rebecchi and Pentyala, 2000), and such an action would hydrolyse PI(4,5)P2 and reduce TRPM3 activity (Badheka et al., 2015; Tóth et al., 2015). Inhibition of TRPM3 activity could therefore be due to a Gβγ/PLC-mediated loss of PI(4,5)P2. PI(4,5)P2 levels are not maintained in isolated membrane patches and decline rapidly, especially in the absence of Mg-ATP, which is required for phosphoinositide kinase-mediated generation of PI(4,5)P2 (Zakharian et al., 2011). This loss of PI(4,5)P2 accounts for the greatly reduced TRPM3 channel activity previously reported in excised membrane patches (Badheka et al., 2015; Tóth et al., 2015). In our excised inside-out patch experiments, recordings were made minutes after excision into Mg-ATP-free solution, which will deplete PI(4,5)P2, and no further run-down of channel activity was noted during the recordings. The finding that Gβγ subunits exerted a robust inhibitory effect on TRPM3 currents in membrane patches under these conditions suggests that the Gβγ inhibition does not involve PLC-mediated hydrolysis of PI(4,5)P2. Such a conclusion is further supported by the observations in DRG Ca2+ imaging experiments where, in contrast to TRPM3, TRPV1 responses were not affected by morphine. As the activities of both channels are modulated by PI(4,5)P2 hydrolysis (Badheka et al., 2015; Cao et al., 2013), the differential, strong inhibition of TRPM3 cannot be explained by PLC activation and a reduction in PI(4,5)P2. Direct effects of G protein subunits on ion channel functions have been well studied for P/Q- and N-type voltage-gated calcium channels (Dolphin, 2003) and Kir channels (Dascal, 1997; Yamada et al., 1998). In contrast, there is relatively little knowledge of direct G protein subunit interactions with TRP channels. TRPM1 channels are inhibited by activation of Go-linked GPCRs, and the available evidence is consistent with regulation by direct interactions between TRPM1 and both Gαo and Gβγ subunits (Shen et al., 2012; Xu et al., 2016; Devi et al., 2013; Koike et al., 2010). In other studies, TRPM8 was shown to be inhibited by an interaction with Gαq (Zhang et al., 2012; Li and Zhang, 2013), while TRPA1 activation following stimulation of MrgprA3 receptors in DRG neurons was inhibited by gallein and phosducin, consistent with a Gβγ-mediated mechanism (Wilson et al., 2011). TRP channel regulation by direct G protein subunit interactions is therefore emerging as an important and novel mechanistic concept. DRG neurons showed differential sensitivity to activation of different GPCRs. The TRPM3 responses in many neurons were inhibited by more than one GPCR agonist, consistent with expression of multiple GPCRs in these neurons. Presumably the inhibition pattern reflects the distribution and expression levels of the different GPCRs. However, not all Gi/o GPCR agonists inhibited TRPM3.

[Figure 9 caption (residue): ...licking/flinching behaviour evoked by intraplantar administration of 5 nmole PS. *p<0.05, **p<0.01, ***p<0.001; ANOVA followed by Tukey's HSD test. (e) No effect of intraplantar morphine (130 nmole) on nociceptive responses evoked by intraplantar administration of capsaicin (33 nmole). Vehicle, 0.9% NaCl. DOI: 10.7554/eLife.26138.011]
The group III mGluR agonist L-AP4 and agonists at δ- and κ-opioid receptors did not significantly inhibit TRPM3 responses, although there is good evidence for their expression in DRG neurons (Scherrer et al., 2009; Carlton and Hargett, 2007; Govea et al., 2012; Honsek et al., 2015; Wang et al., 2010). In part, this could be explained by a lack of TRPM3-GPCR co-expression in individual DRG neurons. For example, δ-opioid receptors are expressed by about 15% of, mainly larger diameter, DRG neurons (Scherrer et al., 2009; Bardoni et al., 2014), although there is evidence for some co-expression of µ- and δ-opioid receptors (Devi et al., 2013). There is also likely to be an overlap in expression of Gi/o-coupled metabotropic glutamate GPCRs (Carlton and Hargett, 2007; Govea et al., 2012) with the small-medium diameter DRG neurons that express TRPM3 (Vriens et al., 2011). A significant inhibition of TRPM3 responses would therefore be expected in some DRG neurons, as seen for CB1 receptor activation (Agarwal et al., 2007; Veress et al., 2013). The effect of CB1 receptor activation in DRG neurons was restricted to a small number of neurons, although our experiments with heterologously expressed TRPM3 and CB1 indicated that CB1 receptor activity can regulate TRPM3. It is possible that specific macromolecular assemblies are required for efficient interaction between particular types of GPCRs and TRPM3, that δ- and κ-opioid and mGluR receptors do not contribute to these complexes, and that CB1 receptors are variably coupled in DRG neurons. The pronounced inhibition of TRPM3 by µ-opioid receptor activation is interesting, as this receptor sub-type is expressed in mice in heat-sensitive DRG neurons (see Scherrer et al., 2009; Honsek et al., 2015) and may be functionally relevant for opioid control of heat sensation. An interesting additional finding was that while both baclofen and PYY robustly inhibited TRPM3 responses in a substantial percentage of DRG neurons, this inhibitory effect was only partially reversed by gallein. In contrast, gallein strongly reversed the inhibitory effect of morphine on TRPM3. This finding raises the possibility that Gi/o-coupled GPCRs can regulate TRPM3 via several signalling pathways. In addition to demonstrating GPCR agonist-induced inhibition of TRPM3, our in vitro studies have revealed that inverse agonists acting at µ-opioid and NPY receptors (naloxone and BIIE0246) can potentiate TRPM3-mediated responses. These findings are consistent with the concept that constitutive activity of µ-opioid and NPY receptors provides a level of tonic inhibition of TRPM3. A potentiating effect of these ligands was also noted in vivo, where they potentiated the behavioural effects of local intraplantar injection of PS. This result could reflect an inhibition of constitutive GPCR activity, as suggested by the in vitro findings, or inhibition of endogenous GPCR agonists produced in the tissues. Intraplantar administration of morphine, baclofen or PYY inhibited the strong nociceptive behavioural responses evoked by combined local application of PS and CIM0216. Such a peripheral anti-nociceptive action could be due to an action that inhibits action potential transmission, perhaps by activation of voltage-gated potassium channels. However, morphine did not inhibit TRPV1-mediated, capsaicin-evoked pain responses, so a general inhibitory action of morphine on the transmission of nociceptive signals can be ruled out.
The inhibition of TRPM3-mediated nociceptive responses by the GPCR agonists can therefore be attributed to inhibition of TRPM3 itself. Our results demonstrate that GPCR modulation of TRPM3 occurs in vivo and that the effects are localized to the regions of the sensory nerve terminals, rather than being systemic effects operating at the level of the spinal cord or higher centres in the nociception pathway. Our results demonstrate that TRPM3 in sensory neurons is subject to Gi/o GPCR regulation via a Gβγ subunit action. Activation of µ-opioid and GABA-B receptors are important analgesic mechanisms that operate both peripherally and centrally, including actions on the central terminals of sensory nerves in the spinal dorsal horn. Given the co-expression of TRPM3 and these GPCRs in nociceptive sensory neurons and the emerging role of TRPM3 in nociception, our findings highlight the importance of determining the role of TRPM3 in pathophysiological pain conditions.

Imaging changes in intracellular calcium levels

DRG neurons were loaded with 2.5 µM Fura-2 AM (Molecular Probes) in the presence of 1 mM probenecid for ~1 hr. Dye loading and all experiments were performed in a physiological saline solution containing (in mM) 140 NaCl, 5 KCl, 10 glucose, 10 HEPES, 2 CaCl2, and 1 MgCl2, buffered to pH 7.4 (NaOH). Drug solutions were applied to cells by local microsuperfusion of solution through a fine tube placed very close to the cells being studied. The temperature of the superfused solution (24-25°C) was regulated by a temperature controller (Marlow Industries) attached to a Peltier device, with the temperature measured at the orifice of the inflow tube. Images of a group of cells were captured every 2 s at 340 and 380 nm excitation wavelengths with emission measured at 520 nm with a microscope-based imaging system (PTI, New Jersey, USA). Analyses of emission intensity ratios at 340 nm/380 nm excitation (R, in individual cells) were performed with the ImageMaster suite of software.

Electrophysiology

DRG neurons and TRPM3-expressing HEK293 or CHO cell lines were studied under voltage-clamp conditions using an Axopatch 200B amplifier and pClamp 10.0 software (RRID:SCR_011323, Molecular Devices). DRG neuron recordings were performed at +40 or +60 mV using borosilicate electrodes (3-6 MΩ) filled with (in mM): 140 CsCl, 10 EGTA, 1 CaCl2, 2 MgATP, 2 Na2ATP, buffered to pH 7.4 (CsOH). DRG neurons were studied in the extracellular solution used for [Ca2+]i measurements (see above). HEK293 and CHO cells were studied under Ca2+-free conditions (same solution as above, but with CaCl2 substituted by 1 mM EGTA). Excised inside-out membrane patch recordings were made using TRPM3-expressing HEK293 cells. The solution bathing the intracellular membrane face of excised membrane patches contained either (in mM) 140 CsCl, 10 EGTA, 10 HEPES, or 70 CsCl, 70 NMDG-Cl, 10 EGTA, 10 HEPES, pH 7.3. For some experiments, current-voltage relationships were obtained from voltage ramps (300 ms duration) from −100 mV to voltages up to +200 mV. The NMDG-containing intracellular solution was used for voltage ramp studies to reduce the amplitude of the outward current. βγ subunits from bovine brain (Millipore Ltd., Watford, UK) were dissolved in intracellular solution at a final concentration of 50 nM. For some experiments, the βγ subunits were inactivated by heating to 100°C for 10 min. βγ subunits (plus CIM0216) were applied by pressure ejection from a blunt pipette positioned close to the excised membrane patch.
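Throughout the Results, R denotes the amplitude of a later PS-evoked [Ca2+]i response expressed as a percentage of the first. The following is a minimal sketch, not the authors' ImageMaster analysis, of how R could be computed from the fura-2 ratio traces described above; the array names and the frame windows in the usage comment are hypothetical.

```python
import numpy as np

def response_amplitude(ratio, baseline_window, stim_window):
    """Peak rise of the F(340/380) ratio above the pre-stimulus baseline."""
    baseline = np.mean(ratio[baseline_window])
    return np.max(ratio[stim_window]) - baseline

def relative_amplitude(ratio, base1, stim1, base2, stim2):
    """R: second PS response amplitude as a percentage of the first."""
    a1 = response_amplitude(ratio, base1, stim1)
    a2 = response_amplitude(ratio, base2, stim2)
    return 100.0 * a2 / a1

# Example usage (frames sampled every 2 s; windows are illustrative only):
# r = relative_amplitude(trace, slice(0, 30), slice(30, 90),
#                        slice(150, 180), slice(180, 240))
# A neuron counts as 'abolished' if r < 5 (i.e., R < 5%).
```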
Behavioural assessment of pain responses

All behavioural experiments were approved by the King's College London Animal Welfare and Ethical Review Board and conducted under the UK Home Office Project Licence PPL 70/7510. PS, or a combination of PS/CIM0216 (25 µl), and morphine, baclofen and PYY (10 µl) were injected subcutaneously into the plantar surface (intraplantar, i.pl.) of one of the hind paws using a 50 µl Luer syringe (Hamilton Co.) fitted with a 26-gauge x 3/8 inch needle. Morphine, baclofen and PYY were injected 5 min before PS/CIM0216. Naloxone (2.5 mg/kg) and BIIE 0246 (3 mg/kg) were injected intraperitoneally 30 min prior to intraplantar injection of PS. Mice were habituated to the experimental Perspex chambers before the experiment and placed in the chambers immediately after injection of TRPM3 agonists. The duration of pain-related behaviours (licking and biting or flinching and shaking of the injected paw) was recorded using a digital stop-watch. Total pain response times over the first 2 min were used for analysis, as the pain behaviours were largely restricted to this period. Groups of 6-11 animals were used for each agent. Pregnenolone sulphate (PS): a 200 µM solution in PBS was prepared from a 60 mM DMSO stock solution; 25 µl was injected i.pl. into the left hind paw. CIM0216: a 20 µM solution was prepared from a 5 mM DMSO stock solution. A combined PS/CIM solution was prepared by combining 1 ml of 400 µM PS with 1 ml of 40 µM CIM, and 25 µl was injected i.pl. into the left hind paw. Naloxone HCl was administered at a dose of 2.5 mg/kg i.p. 30 min prior to PS. Morphine, 100 µg in 10 µl, was injected i.pl. 5 min prior to CIM/PS. Baclofen, 60 µg in 20 µl, was injected i.pl. 5 min prior to CIM/PS. PYY, 1 µg in 10 µl, was injected i.pl. 5 min prior to CIM/PS. BIIE 0246 was made up in saline from a 10 mM DMSO stock and dosed at 3 mg/kg i.p. 60 min prior to PS.

Materials

All compounds were from Sigma-Aldrich, Poole, UK unless otherwise stated. Stock solutions of pregnenolone sulphate, CIM0216 (Tocris Bioscience; Bristol, UK), WIN 55212-2 (Tocris Bioscience; Bristol, UK), AM251 (Tocris Bioscience; Bristol, UK), gallein (Santa Cruz Biotechnology; Heidelberg, Germany) and BIIE 0246 (Tocris Bioscience; Bristol, UK) were made in DMSO (Calbiochem; Darmstadt, Germany). Stock solutions of morphine and naloxone were made in H2O, and DAMGO (Tocris Bioscience; Bristol, UK), SB205607 (Tocris Bioscience; Bristol, UK), U50488 (Tocris Bioscience; Bristol, UK) and PYY (ABGent; San Diego, CA) were made in physiological extracellular solution. A stock solution of 8-bromo cAMP was made in H2O titrated with NaOH. Stock solutions of L-AP4 (Tocris Bioscience; Bristol, UK) and (RS)-baclofen (Tocris Bioscience; Bristol, UK) were made in molar equivalent solutions of NaOH. Stock solutions were aliquoted and stored at −20°C. Stock solutions were diluted in physiological extracellular solution for use in experiments. PTX (lyophilised powder) was reconstituted in H2O and stored at 4°C. PTX was added to cells at a concentration of 200 ng/ml.

Statistical analysis

Data are presented as box and whisker plots showing the mean (square symbol), median (horizontal line), interquartile range (box) and 5% and 95% percentile points (whiskers). For the microfluorimetry and electrophysiology experiments, 'n' values represent the number of PS-responding neurons or cells studied, except where indicated otherwise in the text. For behavioural experiments, 'n' values represent the number of animals in each group.
No statistical methods were used to predetermine sample sizes; however, our sample sizes are similar to, or greater than, those generally employed in other studies in the field. Normality of data was tested using the Shapiro-Wilk test. Differences in means between two groups of normally distributed data were analysed using an independent-samples t-test. Differences in means between three or more groups of normally distributed data were analysed using a one-way ANOVA, followed by a Tukey's HSD post-hoc test. Differences between two groups of non-normally distributed data were analysed using a Mann-Whitney U test. Differences between three or more groups of non-normally distributed data were analysed using a Kruskal-Wallis test. All statistical analyses were made using IBM SPSS Statistics, version 22 (RRID:SCR_002865).
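The test-selection logic above, in which a Shapiro-Wilk normality check gates the choice between parametric and non-parametric comparisons, can be expressed compactly. A minimal sketch using SciPy rather than SPSS; the group arrays are hypothetical, and the Tukey's HSD post-hoc step is not shown.

```python
import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Independent-samples t-test if both groups pass Shapiro-Wilk, else Mann-Whitney U."""
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    return stats.ttest_ind(a, b) if normal else stats.mannwhitneyu(a, b)

def compare_many_groups(*groups, alpha=0.05):
    """One-way ANOVA if all groups pass Shapiro-Wilk, else Kruskal-Wallis."""
    normal = all(stats.shapiro(g).pvalue > alpha for g in groups)
    return stats.f_oneway(*groups) if normal else stats.kruskal(*groups)
```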
In vivo imaging of adeno-associated viral vector labelled retinal ganglion cells

A defining characteristic of optic neuropathies, such as glaucoma, is progressive loss of retinal ganglion cells (RGCs). Current clinical tests only provide weak surrogates of RGC loss, but the possibility of optically visualizing RGCs and quantifying their rate of loss could represent a radical advance in the management of optic neuropathies. In this study we injected two different adeno-associated viral (AAV) vector serotypes into the vitreous to enable green fluorescent protein (GFP) labelling of RGCs in wild-type mice for in vivo and non-invasive imaging. GFP-labelled cells were detected by confocal scanning laser ophthalmoscopy 1 week post-injection and plateaued in density at 4 weeks. Immunohistochemical analysis 5 weeks post-injection revealed labelling specificity to RGCs to be significantly higher with the AAV2-DCX-GFP vector compared to the AAV2-CAG-GFP vector. There were no adverse functional or structural effects of the labelling method as determined with electroretinography and optical coherence tomography, respectively. The RGC-specific positive and negative scotopic threshold responses had similar amplitudes between control and experimental eyes, while inner retinal thickness was also unchanged after injection. As a positive control experiment, optic nerve transection resulted in a progressive loss of labelled RGCs. AAV vectors provide strong and long-lasting GFP labelling of RGCs without detectable adverse effects.

One such method utilizes apoptotic indicators, annexin V 9 or effector caspases 10,11 , to measure cell death in the retina of rodents and humans 12 in conjunction with in vivo fluorescence imaging. The specificity of this approach for RGCs is not known and could lead to a high number of false positives when other retinal neurons are labelled. Furthermore, this approach provides evidence of cells undergoing cell death at a single time point, making it difficult to ascertain how many RGCs are lost and how many remain, thereby making it challenging to document disease progression. Other investigators used genetically encoded calcium indicators for repeated in vivo functional imaging of foveal RGCs in macaques 13,14 . This method required implementation of adaptive optics to obtain sufficient fluorescence intensity in a small region of the retina, but represents important progress as it measures not only the presence of RGCs, but their functional responses to light stimuli via fluorescence intensity. Adeno-associated viral (AAV) vectors have been used in clinical trials for retinal diseases with successful safety and transduction outcomes 15 . AAV vectors can be customized to improve cell-type specificity and rate of labelling. Such approaches include manipulation of the capsid (i.e., wild-type or engineered), genome (i.e., promoters and a reporter gene, such as a fluorophore, e.g., green fluorescent protein (GFP)) and route of delivery (i.e., intravitreal or subretinal). Most recently, research has addressed the limited capacity for DNA, approximately 4.7 kilobases, in AAV vector-mediated transduction. Small gene promoters of human DNA ("MiniPromoters") were developed to drive gene expression in neural tissue 16,17 . One of these MiniPromoters is for the gene DCX, which encodes the protein doublecortin, previously shown to be expressed in developing and mature retinal neurons.
Specifically, there is evidence the protein is present in RGCs as well as amacrine, bipolar and horizontal cells 18,19 . When the MiniPromoter developed for DCX was used in an AAV vector, it primarily targeted the RGCs in mice 16 . However, the efficiency, specificity and persistence of these promoters when incorporated into an AAV vector are not known. These tools provide an opportunity for in vivo labelling of RGCs with improved specificity. In this study, we demonstrate the feasibility of AAV delivery via intravitreal injection as a means for in vivo RGC labelling in mice. This method was used to test if AAV vectors provide specific fluorescence labelling for longitudinal in vivo imaging. Confocal scanning laser ophthalmoscope (CSLO) imaging was used for in vivo detection of fluorescence labelling, while OCT imaging and electroretinography (ERG) were used to detect any structural or functional changes to the retina, respectively. It is expected the results could provide a minimally invasive method for efficient and robust RGC labelling with longitudinal imaging, offering the ability to detect changes in RGCs and track progression of diseases such as glaucoma.

Results

Following intravitreal injection of the AAV vector, GFP labelling was detectable by in vivo CSLO fluorescence imaging in 17 mice (81%) at week 1 post-injection. The four (19%) animals that did not show any GFP labelling at week 1 did have labelling by week 2. Examples of the images acquired weekly are shown for two different animals labelled by vectors containing the CAG and DCX promoters (Fig. 1). Cells that were labelled in the early weeks post-injection continued to express GFP until at least week 5. However, GFP labelling in the majority of cells was not evident until 2-4 weeks post-injection (Fig. 1). The in vivo mean (95% confidence interval) central retina cell density for the AAV2-CAG vector was 97 (50) cells/mm² at week 1 and 534 (201) cells/mm² at week 4. For the AAV2-DCX vector, cell density was 29 (40) cells/mm² at week 1 and 288 (105) cells/mm² at week 4. That is, 77 (6)% of cells ultimately labelled with GFP at week 5 became GFP-positive between weeks 1 and 5 in the AAV2-CAG injected animals, as did 85 (5)% of cells in the AAV2-DCX injected animals. The density of GFP-labelled cells for both vectors followed a similar trend over time and fit second-order polynomial models with R² = 0.43 for AAV2-CAG and R² = 0.38 for AAV2-DCX (Fig. 2). The AAV2-CAG vector labelled significantly more cells than the AAV2-DCX vector (p < 0.05); however, this difference was not apparent until week 3. At all subsequent time points, the percent difference in labelled cells between the AAV2-CAG and AAV2-DCX vectors was an average of 65 (10)% (Fig. 2). The trend of increasingly more labelled cells over time continued until approximately week 4 post-injection, after which neither vector showed a significant week-over-week increase in labelled cells for two consecutive time points. The proportion of retina labelled in the in vivo images with the AAV2-CAG vector was 97 (7)%, whereas with the AAV2-DCX vector it was 53 (30)%. Both the AAV2-CAG and AAV2-DCX vectors had a high signal-to-background ratio (Table 1), with a difference in means of 0.67 (0.90). The measures of image quality were also comparable between the two vectors for signal-to-noise ratio and contrast-to-noise ratio, with percent differences of 9.7% and 3.3%, respectively. There was no statistically significant difference between the vectors for any of the measures (Table 1).
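The in vivo densities above were derived with the automated counting pipeline detailed later in the Image Data Analysis methods (Gaussian smoothing, a minima transform to extract cell markers, cluster splitting, and connected-component counting). Below is a minimal sketch in Python with scikit-image rather than the authors' MATLAB tool; the h-maxima call stands in for the paper's minima transform, on the assumption that the GFP-positive cells are bright on a dark background, and the erosion footprint size is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.morphology import h_maxima, binary_erosion, disk
from skimage.measure import label, regionprops

def count_labelled_cells(img, sigma=3.0, h=10, cluster_px=200):
    """Count GFP-labelled somata in a CSLO frame (parameter values follow the text)."""
    smoothed = gaussian_filter(img.astype(float), sigma=sigma)
    markers = label(h_maxima(smoothed, h))      # one marker region per candidate cell
    n = 0
    for region in regionprops(markers):
        if region.area > cluster_px:            # assumed cluster of cells: erode to split
            eroded = binary_erosion(markers == region.label, disk(3))
            n += max(1, int(label(eroded).max()))
        else:
            n += 1
    return n

# density = count_labelled_cells(frame) / retinal_area_mm2   # optic nerve region excluded
```

As in the paper, an automated count like this would be followed by manual correction before reporting densities.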
Figure 3 shows example micrographs with labelling from intravitreal injections (panels a-b) and immunohistochemical markers (panels c-f). The co-localization of GFP-positive cells in the ganglion cell layer with RBPMS is shown in the merged micrographs, in addition to the distribution of cholinergic amacrine cells (Fig. 3e,f). The cell densities of GFP-labelled cells, 723 (287) cells/mm² for AAV2-CAG and 715 (177) cells/mm² for AAV2-DCX, revealed that AAV labelling via intravitreal injection was not significantly different between the two vectors (Fig. 3g, p = 0.97). Based on immunohistochemical labelling, RGC densities of 2671 (256) cells/mm² for AAV2-CAG and 2817 (379) cells/mm² for AAV2-DCX were not significantly different (p = 0.55). Co-localization showed the specificity of GFP labelling to RGCs to be very high at week 5 post-injection for both vectors (Fig. 3h), with the proportion of GFP-positive cells that were RGCs being 72 (3)% for the AAV2-CAG vector and 86 (4)% for the AAV2-DCX vector. There was significantly higher specificity of RGC labelling with the AAV2-DCX vector (p < 0.05), and it was independent of the region of retina (central vs. peripheral). However, the proportion of RGCs labelled by each vector (RBPMS+ that are GFP+) was 20 (9)% for AAV2-CAG and 35 (7)% for AAV2-DCX and not significantly different between vectors (p = 0.38). For the AAV2-DCX group, a paired t-test showed a significant difference between the in vivo and ex vivo densities, 242 (111) cells/mm² and 716 (177) cells/mm², respectively (p < 0.05). Linear regression revealed R² = 0.01 for in vivo vs. ex vivo GFP densities and R² = 0.16 for in vivo GFP density vs. ex vivo RBPMS density. At baseline, prior to the intravitreal injection, there was no significant difference in amplitude or implicit time between left and right eyes (Supplementary Fig. S1). At week 5 post-injection there was no significant decrease in the positive (p = 0.44) and negative (p = 0.84) STR amplitudes or implicit times (pSTR, p = 0.31 and nSTR, p = 0.33) between the control and experimental eyes following intravitreal injections with the AAV2-DCX vector, indicating that the RGC contribution to the ERG was unaffected (Fig. 4). The mean relative pSTR amplitude was 0.86 (0.04) and nSTR amplitude was 1.12 (0.06) across all signal strengths. Furthermore, the amplitudes of the a-wave, 187 (26) µV vs. 184 (23) µV, p = 0.87, and b-wave, 332 (108) µV vs. 315 (80) µV, p = 0.87, were not significantly different between the experimental and control eyes, respectively.

[Figure 2 caption (residue): Comparison of in vivo GFP expression in the ganglion cell layer between vectors after intravitreal injection. Cell densities were calculated from 30-degree field of view (approximately 1.36 mm) fluorescence confocal scanning laser ophthalmoscopy images centred at the optic nerve head for the AAV2-CAG and AAV2-DCX vectors. Trendlines show the second-order polynomial regression results calculated for each group. Error bars represent 95% confidence intervals; *p < 0.05; n = 6.]

[Table 1 (residue): columns Signal-to-Background, Signal-to-Noise, Contrast-to-Noise. Signal-to-background ratio measured the intensity ratio between the signal and background; signal-to-noise ratio and contrast-to-noise ratio represent image quality at 5 weeks post-injection. Ratios are expressed as mean (95% CI). No statistically significant difference was found for any measure.]
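A minimal sketch, not the authors' MATLAB toolbox, of the STR measurements defined later in the ERG Data Analysis methods: low-pass Butterworth filtering, then pSTR from baseline to the initial peak and nSTR from baseline to the local minimum after the pSTR. The sampling rate and array names are hypothetical, and taking the global maximum as the "initial peak" is a simplification.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def filter_erg(trace, fs, cutoff=50.0, order=8):
    """Low-pass eighth-order Butterworth filter at 50 Hz (applied zero-phase here)."""
    sos = butter(order, cutoff, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, trace)

def str_amplitudes(trace, fs):
    """Return (pSTR, nSTR) amplitudes from a baseline-subtracted scotopic trace in µV."""
    y = filter_erg(trace, fs)
    i_p = int(np.argmax(y))                 # initial positive peak (simplified to the maximum)
    i_n = i_p + int(np.argmin(y[i_p:]))     # local minimum after the pSTR
    return y[i_p], y[i_n]

# Relative amplitude per stimulus strength: experimental-eye value / control-eye value.
```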
The ERG response in the animals that received an optic nerve transection demonstrated significantly reduced pSTR and nSTR amplitudes, with the pSTR amplitude in the transected eye 47 (9)% that of the control eye, while the nSTR amplitude was 83 (14)% that of the control eye (Supplementary Fig. S2). Inner retinal thickness, the region of the retina between the ILM and IPL, was used to include the ganglion cell layer and retinal nerve fibre layer (Fig. 5a). The range of mean thicknesses in all animals across all time points was 65-72 µm. The mean proportional change in ILM-IPL thickness compared to pre-injection was 1.025 (0.007), and no significant change (p = 0.51) in the thickness between time points was shown up to five weeks after an intravitreal injection of an AAV vector (Fig. 5b). Longitudinal fluorescence images corresponding to the same region of the retina before and after optic nerve transection are shown in Fig. 6. The in vivo AAV2-DCX-GFP labelling is clearly visible, but at 14 days post-transection there was a 23 (7)% decrease in GFP-labelled cells, with some cells losing all fluorescence (red arrow, Fig. 6) while others had diminished fluorescence (yellow arrow, Fig. 6).

Discussion

In this study we demonstrated that intravitreal administration of AAV vectors with a fluorescent reporter gene provides robust and sustained in vivo RGC labelling in mice. The GFP labelling was sufficiently strong to be detectable by non-invasive CSLO imaging with laser sources approved and used routinely in clinical practice. Between week 2 and week 4 there was a 69% increase in labelling for AAV2-CAG and 80% for AAV2-DCX. Furthermore, the GFP signal persisted in individual RGCs for at least five weeks (Fig. 1). There are two conclusions from the fluorescence trend important for the planning of longitudinal RGC survival experiments. Firstly, since the transduction of the AAV vector requires approximately four weeks for maximal labelling (Fig. 2), consistent with previous studies 20,21 , it was notable that GFP labelling was cumulative during this time and that the fluorescence signal was not transient. Secondly, we demonstrated a progressive decrease in RGC density following optic nerve transection (Fig. 6). Furthermore, the in vivo electrophysiological assay of RGC health showed that the AAV intravitreal injection did not cause measurable damage (Fig. 4). These findings demonstrate that our technique of RGC labelling is a viable method to monitor real-time RGC survival in experimental disease models and interventions for neuroprotection and neuroregeneration. AAV vector serotypes and capsids show preferential transduction of differing retinal cell types, even with ubiquitous promoters 21-23 , which is dependent on the injection site (e.g., subretinal vs. intravitreal).

[Figure 4 caption (residue): (a) Example waveforms of ERG recordings obtained from AAV-injected experimental (solid) and control fellow (dashed) eyes over a range of low stimulus strengths. Averaged group data for control (filled circles) and experimental (unfilled circles) eyes of (b) negative STR and (c) positive STR amplitudes. Averaged group data for control (filled triangles) and experimental (unfilled triangles) eyes of (d) negative STR implicit times and (e) positive STR implicit times. No significant differences were found between the control and experimental eyes for any of the measures. Error bars represent 95% confidence interval; n = 6.]
Serotype 2 was chosen for this study because it most often shows the best transduction efficiency in RGCs when administered by intravitreal injection 24,25 . Serotypes and engineered capsids 23 can be used to improve transduction efficiency of cells, while promoters can be used to improve transduction specificity. Ubiquitous promoters are commonly used, even when targeting specific cells, but could drive protein synthesis and cause side effects in other tissues and cells 26,27 . Therefore, use of specific promoters such as DCX helps provide improved cell-specific targeting and reduces the possible side effects induced by ubiquitous promoters. The specificity of RGC labelling with AAV vectors, or a comparison between promoters, has not previously been quantified with RGC-specific immunohistochemistry or markers. Transduction efficiency in the retina was either measured by total density of labelled cells across the retina 28 or by fluorescence intensity within the retinal layers 29 . We demonstrated a 19% increase in specificity to RGCs by utilizing a specific promoter compared to a ubiquitous promoter, while labelling approximately the same proportion of total RGCs (Fig. 3h). Choline acetyltransferase antibody labelling was used to show the presence of amacrine cells in the ganglion cell layer, which did not co-localize with GFP-positive cells. These results, in processed retinal tissue, demonstrate that the method of delivering fluorescent proteins via AAV2 and the specific DCX promoter yields high specificity for RGC labelling. A higher density of labelling was detected ex vivo than in vivo with the imaging methods we chose, though due to the small sample size in this study we cannot determine the level of correlation between in vivo and ex vivo cell densities. Ex vivo tissue microscopy allowed for higher resolution imaging than the CSLO system and could have resulted in a larger number of detectable cells. Alternatively, further work could determine if AAV transduction/labelling is representative of the entire RGC population or is preferential to a specific RGC type. We previously showed that intravitreal injection of the neuronal tracer cholera toxin subunit B (CTB) resulted in fluorescence labelling of cells in the ganglion cell layer detectable by in vivo imaging for at least 100 days post-injection. However, the CTB labelling was not specific to RGCs, with only 53% of CTB-labelled cells being RGCs 30 . Other researchers have described AAV-mediated GFP gene transfer for cross-sectional 28,31 or longitudinal 32 fluorescence imaging of the retina, but no previously published work has quantified RGC density from in vivo images or reported the specificity of GFP labelling to RGCs. Martin et al. estimated a 75-85% transduction of RGCs in rats, approximately 2.5 times higher than our findings in mice, with AAV vectors containing GFP under the control of the ubiquitous CBA promoter and modified to include the woodchuck hepatitis post-transcriptional regulatory element (WPRE) 28 . However, the high rate of transduction was achieved within 2 weeks after intravitreal injection and was attributed to the use of WPRE. The strengths of the RGC labelling studies described above, AAV vector labelling and specificity quantification, were combined to develop the protocol described in this study. The results imply that in vivo imaging of RGCs labelled with AAV vectors can provide a reliable method to quantify RGCs repeatedly and longitudinally.
This is an improvement on widely used transgenic animals or invasive labelling techniques, such as retrograde labelling via the superior colliculus, as neither is a clinically applicable method for visualizing RGCs. While the technique of RGC labelling described in this study does not label the entire RGC population, it does adequately sample the total RGC population. The reported in vivo density measures provide an estimate of labelling across the retina with signal intensity that was at least twice that of the background. This is an indication that the labelled cells are dispersed such that the imaging system is able to adequately resolve individual cells. If all RGCs were transduced, it would be challenging to differentiate and quantify RGCs in vivo due to the high-density labelling. In some applications of RGC targeting, such as high-resolution imaging and neuroprotection, higher transduction could be beneficial. The inner limiting membrane (ILM) has been reported to be a barrier to viral transduction via intravitreal delivery 33 . This is likely the explanation for the observed cases of uneven labelling across the retina and a higher degree of GFP in the area closest to the injection site (Supplementary Figure S3). A higher density of RGC labelling could have been achieved by improving transduction through compromising the ILM. To avoid possible disruption of the integrity of the retina, we chose not to use strategies such as enzymatic degradation of the extracellular matrix of the ILM 33,34 , vitreous aspiration prior to intravitreal injection 35,36 , formation of a bleb below the ILM to create a space for the vector bolus 37 , or ILM peeling 38 , to reduce the resistance of the ILM to AAV transduction. In summary, this work provides a novel method for quantifying RGCs in experimental studies and monitoring RGC loss in animal models. The results show that a minimally invasive intravitreal injection of AAV vectors reliably labels RGCs in mice for longitudinal in vivo visualization (Fig. 1), in vivo quantification (Fig. 2), ex vivo labelling (Fig. 3), and ex vivo quantification (Fig. 3g). We show that measures of RGC structure (retinal thickness) and function (ERG measures) are not affected by the administration of the AAV vectors. The high specificity of these AAV vectors to RGCs indicates there is potential for diagnostic and therapeutic applications in diseases that cause RGC loss, such as glaucoma. Specifically, it would reduce the challenges presented by a heterogeneous population of cell types in the ganglion cell layer and the large amount of variability in the number of RGCs between individuals. Future work is required to investigate if AAV vectors can efficiently, reliably and specifically transduce human RGCs, and the effects of introducing exogenous reporters into human RGCs. Virus-based strategies are used for therapeutics, but their use for diagnostics would represent a paradigm shift and major advance in the management of ocular neuropathies.

Methods

In vivo Imaging. In vivo imaging was performed with a confocal scanning laser ophthalmoscope (CSLO) and spectral domain optical coherence tomography (OCT) device modified for use in mice (Spectralis Multiline, Heidelberg Engineering GmbH, Heidelberg, Germany) 39 . CSLO imaging was performed to visualize the GFP-positive cells after intravitreal injection, while OCT imaging was used to derive B-scans of the retina to quantify inner retinal thickness (see below).
All laser sources were Class 1 according to the International Electrotechnical Commission (IEC) 40 . The left pupil was dilated (one drop of 1% tropicamide and one drop of 2.5% phenylephrine hydrochloride, Alcon Canada) and the mouse anaesthetized with inhalant isoflurane, induction at 3-4% volume (Baxter Corporation, Mississauga, ON, Canada) with 1.5 L/min oxygen flow and maintained at 1.5-3% volume with 0.8 L/min oxygen flow, via a nose cone attached to a portable inhalation system. The mouse was placed on a heating pad for the duration of imaging. Ophthalmic gel and a custom-made polymethyl methacrylate plano contact lens (Cantor and Nissel Limited, Brackley, UK) were used to maintain corneal hydration and improve image quality. Baseline images focused at the level of the nerve fibre layer were first acquired with infrared (820 nm) illumination. Fluorescence images were then acquired in CSLO mode with a 488 nm excitation laser and a bandpass filter of 505-545 nm. Each image was averaged a minimum of 20 times with automatic real-time eye tracking software to increase the signal-to-noise ratio. The imaging protocol was repeated at several time points post-injection (Supplementary Fig. S4), with the image tracking software used to ensure that the same retinal areas were imaged in each session. The same animal protocol and imaging set-up were used for OCT imaging to quantify inner retinal thickness. The principles of OCT for retinal imaging have been described elsewhere 41 ; briefly, the technique employs low coherence interferometry to generate high-resolution cross-sectional images of the retina. A raster pattern of 19 equally spaced horizontal B-scans, each 30 degrees wide, centred on the optic nerve head was used (Supplementary Fig. S5). The scanning speed was 40 000 A-scans per second and each B-scan comprised 1536 A-scans. Each B-scan was averaged 20 times.

Electroretinography. Full-field ERG analysis was performed prior to intravitreal injection and at week 5 post-injection to determine if the AAV-mediated GFP labelling driven by the DCX promoter had detrimental effects on retinal function (Supplementary Fig. S4). To demonstrate the ability of our ERG protocol to detect changes in retinal function, specifically that of the RGCs, a subset of animals received an optic nerve transection (Supplementary Fig. S2). Mice were dark-adapted overnight (≥12 hrs) before being anesthetized with an intraperitoneal injection of ketamine (100 mg/kg) and xylazine (10 mg/kg). Pupils were dilated with one drop each of 1% tropicamide and 2.5% phenylephrine hydrochloride (Alcon Canada Inc., Mississauga, ON, Canada). Body temperature was maintained between 35 and 37 °C with a heating pad and monitored with a rectal probe. A platinum subdermal electrode (Grass Instruments, Quincy, MA, USA) was placed at the base of the nose for reference.

Optic Nerve Transection. Animals that underwent optic nerve transection had been injected with AAV vector 5 weeks prior. Mice were anaesthetized using inhalant isoflurane as described for in vivo imaging. Under an operating microscope, the globe of the left eye was rotated downwards and held in place with a 9-0 conjunctival suture. To expose the optic nerve, an incision was made in the skin near the supraorbital ridge and the intraorbital subcutaneous tissues were dissected. The optic nerve dura was cut longitudinally and the optic nerve transected completely approximately 0.5 mm from the globe.
The ophthalmic artery, located underneath the transected nerve, was kept intact. The incision was closed and the fundus examined to confirm no ischemic damage.

Immunohistochemistry. To estimate the cell density and specificity of GFP labelling in the whole retina, immunohistochemistry was performed on retinal flat-mounts. Five weeks after intravitreal injection, animals were sacrificed with an overdose of sodium pentobarbital by intraperitoneal injection. The cornea and lens were removed and the eye-cups fixed in 4% paraformaldehyde for 2-3 hours. Retinas were washed in 1 × phosphate-buffered saline (PBS) for 10 min and incubated in blocking buffer (10% normal donkey serum, 0.3% Triton X-100) overnight at 4 °C. Retinas were incubated for 6 days at 4 °C in primary antibodies against RNA binding protein with multiple splicing (RBPMS) 43-45 .

Image Data Analysis. Image processing, analysis and cell quantification algorithm implementation for CSLO-acquired images (1536 × 1536 pixels) were performed with a customized analysis tool in MATLAB software. Measures of signal intensity and image quality for in vivo fluorescence images (signal-to-background ratio, signal-to-noise ratio, and contrast-to-noise ratio) were calculated at week 5 post-injection. This was done to determine if the GFP variants (enhanced vs. humanized) affected the signal intensity or image quality. For cell quantification, a Gaussian filter (σ = 3, h = 19) was used to remove noise and a minima transform (h = 10) implemented to extract markers for each labelled cell. If a labelled region was greater than 200 pixels, it was assumed to be a cluster of cells and an eroding function applied as a method of segmentation. Connected components in the binary image were automatically counted and markers were superimposed on the original CSLO image to indicate their position. The final cell quantification was performed after manual correction of the automated algorithm to include cells not correctly identified, or to exclude objects incorrectly identified as cells. The total number of labelled cells divided by the retinal area, excluding the optic nerve region, was used to calculate cell density. The percentage of labelled retinal area in the 30° in vivo images was measured at week 5 post-injection by tracing the region with prominent labelling and dividing its area by the total image area. OCT layer segmentation was performed with the device segmentation algorithm (Heidelberg Eye Explorer, Heidelberg Engineering), after which each B-scan was checked for segmentation errors and manually corrected when required (Supplementary Fig. S5). The retinal nerve fibre and ganglion cell layers in mice are very thin, especially beyond the peripapillary region, and therefore cannot be reliably identified in OCT images. The signal interface between the inner plexiform layer and inner nuclear layer is easily identifiable and was therefore used for segmentation. Inner retinal thickness was measured from the vitreous-retina surface to the inner plexiform layer (Supplementary Fig. S5). Micrographs of retinal flat-mounts were imaged with a Zeiss Axio Imager M2 microscope (Carl Zeiss AG, Oberkochen, Germany) and a 20 × Plan-Apochromat objective (Carl Zeiss). Fluorescence images of the ganglion cell layer were captured with the ZEN software (Carl Zeiss), sampling the central, mid-peripheral, and peripheral retinal regions with respect to the optic nerve head.
In each sampled region, the number of cells labelled by (1) GFP, (2) RBPMS, and (3) RBPMS with GFP were quantified independently in graphics editing software (Adobe Photoshop CS6, Adobe Systems Incorporated, San Jose, CA, USA).

ERG Data Analysis. ERG waveforms were analyzed with a custom toolbox for MATLAB (Mathworks, Natick, MA, USA) and filtered with a low-pass eighth-order Butterworth filter at 50 Hz prior to measuring amplitudes. Analysis of the scotopic threshold response (STR) included amplitude measurement of the positive component (pSTR), from the baseline to the initial peak, and the negative component (nSTR), from baseline to the local minimum after the pSTR. Both the pSTR and nSTR signals have been shown to reliably measure RGC function in mice 42,46 . For the photoreceptor response, the a-wave amplitude was measured from the baseline to the maximum negative trough, while the b-wave was measured from the a-wave trough to the maximum positive peak. For comparison between experimental and control eyes, the relative amplitude (experimental/control eye) was calculated.

Statistics. Statistical analyses were performed in the open-source R platform (version 3.1.3, R Core Team, http://www.R-project.org) and Prism (version 7 for Mac, GraphPad Software, La Jolla, CA, USA). The Shapiro-Wilk normality test was used to test data for a normal distribution. Unless otherwise indicated, all results are expressed as mean (95% confidence interval) and statistical significance was assumed when p < 0.05. For in vivo cell densities, second-order polynomial regression was used for each vector and the Holm-Sidak multiple comparisons test was used to test significance between vectors at each time point. Two-way analysis of variance (experimental/control vs. ERG stimulus strength) was applied to test the significance of the ERG data.

Data Availability. The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
Triggering daily online adaptive radiotherapy in the pelvis: Dosimetric effects and procedural implications of trigger parameter‐value selection

Abstract

Background: Online adaptive radiotherapy (ART) can address dosimetric consequences of variations in anatomy by creating a new plan during treatment. However, ART is time‐ and labor‐intensive and should be implemented in a resource‐conscious way. Adaptive triggers composed of parameter‐value pairs may direct the judicious use of online ART.

Purpose: This work analyzed our clinical experience using CBCT‐based daily online ART to demonstrate how a conceptual framework based on adaptive triggers affects the dosimetric and procedural impact of ART.

Methods: Sixteen patients across several pelvic sites were treated with CBCT‐based daily online ART. Differences in standardized dose metrics were compared between the original plan, the original plan recalculated on the daily anatomy, and an adaptive plan. For each metric, trigger values were analyzed in terms of the proportion of treatments adapted and the distribution of metric values.

Results: Target coverage metrics were compromised by anatomic variation, with the average change per treatment ranging from ‐0.90 to ‐0.05 Gy, ‐0.47 to ‐0.02 Gy, ‐0.31 to ‐0.01 Gy, and ‐12.45% to ‐2.65% for PTV D99%, PTV D95%, CTV D99%, and CTV V100%, respectively. These were improved using the adaptive plan (‐0.03 to 0.01 Gy, ‐0.02 to 0.00 Gy, ‐0.03 to 0.00 Gy, and ‐4.70% to 0.00%, respectively). Increasingly strict triggers resulted in a non‐linear increase in the proportion of treatments adapted and improved the distribution of metric values, with diminishing returns. Some organ‐at‐risk (OAR) metrics were compromised by anatomic variation and improved using the adaptive plan, but changes in most OAR metrics were randomly distributed.

Conclusions: Daily online ART improved target coverage across multiple pelvic treatment sites and techniques. These effects were larger than those for OAR metrics, suggesting that maintaining target coverage was our primary benefit of CBCT‐based daily online ART. Analyses like these can determine online ART triggers from a cost‐benefit perspective.

When implementing an ART program, one must consider the characteristics of the type of variation it is intended to address. What is appropriate for one treatment site or technique may not be appropriate for another. 22,23 Therefore, offline ART is an effective technique as it establishes a new baseline designed to track with the changing anatomy. In this context, questions regarding the implementation of ART are focused on the frequency and timing of adaptive interventions.
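The target-coverage metrics quoted in the abstract above are standard dose-volume-histogram quantities. A minimal sketch of how they can be computed, assuming a voxel dose array in Gy and boolean structure masks on the same grid with uniform voxel volumes; this is not tied to any particular planning system.

```python
import numpy as np

def d_percent(dose, mask, p):
    """Dp%: the minimum dose received by the hottest p% of the structure.
    E.g., D99% is the dose exceeded by 99% of the structure's voxels
    (assumes uniform voxel volumes)."""
    return np.percentile(dose[mask], 100.0 - p)

def v_percent(dose, mask, rx_dose):
    """V100%: percentage of the structure receiving at least the prescription dose."""
    return 100.0 * np.mean(dose[mask] >= rx_dose)

# ptv_d99 = d_percent(dose, ptv_mask, 99)        # PTV D99% in Gy
# ctv_v100 = v_percent(dose, ctv_mask, rx_dose)  # CTV V100% in %
```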
For an online ART program intended to address large, random, interfractional variations like those observed in the pelvis, the appropriate implementation should take a different form. 5][26][27][28][29][30][31][32][33][34] In this context, an implementation of ART driven by a predetermined frequency and timing of adaptive interventions is likely to be less effective, as the anatomic changes do not follow the patterns that made such a method appropriate for head-and-neck cancer patients. Instead, to address anatomic variations like these, adaptive interventions should be conducted as needed (i.e., ad hoc). Identifying what constitutes "as needed" in ART is challenging but can be approached by establishing clinical conditions that trigger the adaptive intervention. These triggers are circumstances specified by one or more parameter-value pairs that identify when ART is appropriate. The trigger's parameter is the particular signal that is monitored, and the trigger's value pertains to the specific measurements of that parameter that correspond to the decision of whether or not to adapt.

For example, in many online ART workflows, the original plan can be recalculated on the daily anatomy, and the resulting dose distribution can be compared to the original one or to that of a daily adaptive plan reoptimized on the daily anatomy. Even without any adaptive replanning effort, the difference in any particular dose metric between the original dose distribution and that of the original plan recalculated on the daily anatomy could serve as the ART trigger parameter. This trigger would respond to scenarios where the anatomic change has compromised the dose distribution of the original plan, thereby warranting adaptation. Alternatively, the trigger parameter could be the difference in the same dose metric as above, but as observed between the dose distribution of the original plan recalculated on the daily anatomy and that of the adaptive plan. This trigger would respond to the potential dosimetric improvement of the adaptive plan, regardless of the adequacy of the original plan. This latter scenario does require the completion of adaptive replanning effort, or at least an estimation of the effect of the adaptive plan. While the adapted plan will likely be dosimetrically superior, even if only by a marginal amount, finalizing and delivering that plan may still require additional tasks or extra personnel that clinics may wish to avoid unless determined to be dosimetrically justified. While some tasks, such as the initial contouring or potential dose calculation of the original plan, may be required regardless of which plan is selected, others, such as comprehensive plan review and quality assurance, may be required only if the delivery of the new adapted plan is likely. The sequence and specific effort required depend on the precise implementation of the ART workflow, which can vary between vendors and may be subject to change. The two scenarios above, with triggers based on a compromised dose distribution or on the potential of an improved one, represent two different adaptive philosophies. In addition, each can be implemented according to any particular dose metric. Furthermore, for any dose metric, the action limit that signals the need for adaptation can take any value. The large number of possible parameters and values, along with the fact that multiple triggers may be used in combination, generates countless ways to implement online ART.
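As a concrete illustration of a parameter-value pair, the sketch below encodes the first trigger philosophy described above: fire ART when the original plan recalculated on the daily anatomy (Scheduled) has degraded past an action level relative to the reference plan. This is purely illustrative Python; the function name, example values, and action level are assumptions, not part of any vendor workflow.

```python
def sch_ref_trigger(reference, scheduled, action_level, coverage=True):
    """Decide whether to adapt based on the SCH-REF difference.

    reference, scheduled: values of one dose metric (e.g., PTV D95% in Gy)
    from the Reference Dose and the recalculated Scheduled Dose.
    action_level: signed trigger value in the metric's units.
    coverage=True: lower is worse (target metrics);
    coverage=False: higher is worse (OAR metrics).
    """
    diff = scheduled - reference          # the trigger parameter
    return diff <= action_level if coverage else diff >= action_level

# e.g., adapt when the daily anatomy costs at least 0.5 Gy of PTV D95%
adapt_today = sch_ref_trigger(reference=59.8, scheduled=59.1, action_level=-0.5)
```

An ADP-SCH trigger, responding to the potential improvement of the adaptive plan, would take the same form with the difference computed between the adapted and scheduled values.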
For this reason, identifying appropriate triggers is critical for the optimal implementation of online ART, particularly for pelvic disease sites that are best adapted as needed. Although triggers for patient selection and timing of offline ART have been investigated, [9][10][11][12][13][14][15][16][17] the optimal triggers to determine the need for online ART on any given day remain unknown. Not only is the number of possible triggers very large, but clinical implementation of online ART as facilitated by new technologies is relatively recent, 18,20,[35][36][37][38][39][40][41][42][43] and the clinical experience required to determine the effect of any particular trigger remains limited.

The purpose of this work was to analyze our clinical experience using CBCT-based daily online ART in order to demonstrate a conceptual framework of how adaptive triggers may be used to determine the need for adaptation on a daily basis, as well as to present the varied dosimetric and procedural implications observed using this framework.

Patients

Patients treated at our institution in the last 27 months using CBCT-based daily online ART (Ethos Therapy, Varian Medical Systems, Palo Alto, California, USA) were considered for this study. Disease sites were all located within the pelvis and included cancers of the prostate and bladder. Treatment sites and techniques that were highly standardized with respect to our institutional online ART implementation and that had at least two patients successfully complete therapy were included for analysis. This included

The Ethos online ART workflow has been described in detail elsewhere. 20,42,43 In short, following the acquisition of a daily CBCT, the system uses deformable image registration and AI-based auto-contouring tools to delineate influential normal tissues followed by treatment targets, as well as to generate a synthetic CT. Subsequently, the system recalculates the dose of the original plan on the synthetic CT and re-optimizes a new adaptive plan. The vast majority of all structure delineation and plan evaluation tasks during both the initial treatment planning and each daily treatment were overseen by either one of two individual physicians (CK, AS).

Standardized dose metrics

For each patient, values of a standardized set of dose metrics were collected from the following three dose distributions available for each treatment (a sketch of computing such metrics from voxel doses follows the list):
• The dose distribution of the "original plan" calculated on the "simulation CT anatomy" (i.e., the Reference Dose), which did not vary between treatments,
• The dose distribution of the "original plan" recalculated on the "daily anatomy" (i.e., the Scheduled Dose),
• The dose distribution of the "adaptive plan" reoptimized on the "daily anatomy" (i.e., the Adapted Dose).
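As a rough sketch of how such metrics are derived, the following Python fragment computes Dx% and VxGy values directly from voxel doses sampled inside a structure. It assumes uniform voxel volume, and the dose arrays are synthetic stand-ins, not clinical data.

```python
import numpy as np

def d_percent(dose_in_structure, percent):
    """DVH Dx%: minimum dose received by the hottest `percent` of the volume,
    i.e., the (100 - percent)th percentile of the voxel doses."""
    return np.percentile(dose_in_structure, 100.0 - percent)

def v_dose(dose_in_structure, threshold_gy):
    """DVH VxGy: percent of the structure volume receiving >= threshold."""
    return 100.0 * np.mean(dose_in_structure >= threshold_gy)

# SCH-REF and ADP-REF differences for one metric on one treatment
rng = np.random.default_rng(0)
ref = rng.normal(60.0, 0.5, 10_000)      # Reference Dose voxels (synthetic)
sch = rng.normal(59.5, 1.5, 10_000)      # Scheduled Dose voxels (synthetic)
adp = rng.normal(60.0, 0.6, 10_000)      # Adapted Dose voxels (synthetic)
sch_ref = d_percent(sch, 95) - d_percent(ref, 95)
adp_ref = d_percent(adp, 95) - d_percent(ref, 95)
```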
The complete set of metrics is listed in Table 1. These metrics were not necessarily the objectives used during plan optimization but were considered adequate to characterize the dose delivered to the targets and organs-at-risk across different patients, treatment sites, and treatment techniques. Instances where metrics did not correspond to reasonable objectives or did not reflect sufficient dose for comparison due to the particular target or prescription dose were noted. For each metric, the difference in the value between the Scheduled Dose and the Reference Dose (SCH-REF) was determined, as was the difference in the value between the Adapted Dose and the Reference Dose (ADP-REF). These differences reflected the degree to which the Scheduled and Adapted Doses matched the Reference Dose, which was reviewed and approved prior to treatment and which represented the anticipated value of each metric. The dose value of each metric was determined from the cumulative dose-volume histogram as presented in the treatment planning system (Ethos Therapy; Varian Medical Systems, Palo Alto, California, USA), and the relative values of the SCH-REF and ADP-REF were calculated and summarized with descriptive statistics.

Trigger parameter-value pairs

The SCH-REF and ADP-REF differences in each metric were considered potential adaptive trigger parameters, and a set of integer values spanning the range of observed differences were analyzed as possible trigger value action levels. The proportion of treatments where ART was triggered was calculated for each of these parameter-value pairs. In addition, the distribution of values of each metric that would have resulted from a parameter-value pair acting as a trigger was determined by combining the values from Adapted Doses for treatments where ART was triggered with those from Scheduled Doses for treatments where it was not.

For patients treated to the prostate plus proximal seminal vesicles, the distribution of SCH-REF differences in PTV D95% was broad and biased toward negative values (representing compromised target coverage). The differences had a median value of -0.14 Gy and an interquartile range (IQR) of 0.28 Gy. Meanwhile, the distribution of ADP-REF differences was narrower and more centered around zero (representing greater equivalence to the Reference Dose). Their median value was 0.00 Gy with an IQR of 0.01 Gy. Lastly, the median value of the ADP-SCH differences was 0.15 Gy with an IQR of 0.29 Gy.

Standardized dose metric difference distributions

Patterns in the SCH-REF, ADP-REF, and ADP-SCH differences of the PTV D95% metric for patients treated to the prostate plus proximal seminal vesicles were similar to those observed for all target coverage metrics (PTV: D99% and D95%, CTV: D99% and V100%) across all four treatment sites and techniques, although the magnitude of the changes did vary. However, considerable interpatient variation was also observed, even within a particular treatment site and technique.
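The sweep described under "Trigger parameter-value pairs" above can be expressed compactly. The sketch below is an illustrative Python rendering, not the authors' analysis code; the per-treatment difference arrays are simulated, and the second condition (adapt only when the adaptive plan actually improves on the scheduled one, as noted later in the text) is included.

```python
import numpy as np

def sweep_trigger(sch_ref, adp_ref, values):
    """For each candidate SCH-REF action level (coverage metric, lower is
    worse), return the proportion of treatments adapted and the resulting
    per-treatment changes: Adapted values where ART fired, Scheduled
    values where it did not."""
    adp_sch = adp_ref - sch_ref
    out = []
    for v in values:
        fired = (sch_ref <= v) & (adp_sch > 0)   # adapt only if it helps
        resulting = np.where(fired, adp_ref, sch_ref)
        out.append((v, fired.mean(), resulting))
    return out

rng = np.random.default_rng(1)
sch_ref = rng.normal(-0.15, 0.25, 200)   # simulated PTV D95% SCH-REF (Gy)
adp_ref = rng.normal(0.00, 0.02, 200)    # simulated PTV D95% ADP-REF (Gy)
for v, p, dist in sweep_trigger(sch_ref, adp_ref, [-1.0, -0.5, -0.25]):
    print(f"trigger {v:+.2f} Gy: adapted {p:.0%}, "
          f"10th pct change {np.percentile(dist, 10):+.2f} Gy")
```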
Figure 2 presents the distributions of SCH-REF, ADP-REF, and ADP-SCH differences for an example organ-at-risk (OAR) metric, the V40Gy of the rectum for patients treated with a sequential boost to the postoperative prostate fossa. As in Figure 1b, the histograms of Figure 2b show a broad distribution of SCH-REF differences (median = 6.85%, IQR = 6.73%) and a narrower distribution of ADP-REF differences more centered around zero (median = 0.90%, IQR = 0.85%). In this case, however, the SCH-REF distribution is biased toward positive values, representing increased dose to the OAR with the Scheduled Dose, which the ADP-SCH distribution signifies is decreased with the Adapted Dose (median = -6.40%, IQR = 6.45%). Also unlike the PTV D95% metric depicted in Figure 1, which was representative of target coverage metrics across treatment sites and techniques, the example V40Gy metric depicted in Figure 2 did not represent the behavior of other OAR metrics, which largely exhibited more random distributions.

While the PTV D95% and the rectum V40Gy are depicted here as examples, statistical descriptions of the SCH-REF, ADP-REF, and ADP-SCH difference distributions for all metrics are presented in Table 2, each representing a possible adaptive trigger parameter.

Effects of trigger parameter-value pairs

Figure 3 depicts the effects that result from considering the SCH-REF difference as an example adaptive trigger parameter for the PTV D95% for patients treated to the prostate plus proximal seminal vesicles. The x-axis of both Figure 3a and 3b represents the trigger value for that parameter, while the y-axis depicts the effect of this value on the proportion of treatments adapted (3a) or the resulting distribution of changes in that metric's value (3b). For the PTV D95% metric, as the SCH-REF trigger value increases, so does the proportion of treatments adapted. Figure 3a shows that the increase in the proportion of treatments adapted was non-linear, exhibiting a shallow rate of increase at the lowest trigger values that becomes very steep as SCH-REF approaches zero. This non-linearity characterizes the potential of very unequal effects in the proportion of treatments adapted for equal changes in trigger values. For example, in Figure 3a, changing the trigger value from -1.0 to -0.5 Gy increases the proportion of treatments adapted from 4.8% to 14.4%. Yet changing the value from -0.5 to -0.25 Gy (half the previous change) increases the proportion of treatments adapted from 14.4% to 33.7% (twice the previous effect). It is worth noting that the proportion of treatments adapted does not reach 100%. This occurs because a second condition for selecting the adaptive plan is that the Adapted Dose value is preferred over the Scheduled Dose value (e.g., Adapted Dose target coverage > Scheduled Dose target coverage). While this was typically true, it was not always the case for this metric (see Figure 1b, bottom histogram). One possible example scenario where this might occur for a target coverage metric would be when the target shrinks or moves to a higher dose region, increasing target coverage, which the adaptive reoptimization subsequently decreases in a renormalization step.
Figure 3b shows how the distribution of PTV D95% values changes with the trigger value. As increasing the SCH-REF trigger value increases the proportion of treatments adapted (Figure 3a), the distribution of metric values is made up of an increasing proportion of values from Adapted Doses rather than Scheduled Doses. Because the Adapted Dose values for this metric more closely resemble those of the original Reference Dose (Figure 1), the overall distribution also approaches that of the Reference Dose (Figure 3b). For example, the 10th percentile change in PTV D95% increases from -0.65 to -0.48 to -0.35 Gy for trigger values at -2.0, -1.0, and -0.5 Gy, respectively.

Figure 4 presents the effects of various trigger values for the SCH-REF parameter for an example OAR metric, the V40Gy of the rectum for patients treated with a sequential boost to the post-operative prostate fossa. Unlike for target coverage metrics, decreasing the SCH-REF trigger value corresponds to an increasingly strict trigger, which, in turn, increases the proportion of treatments adapted and makes the resulting distribution of metric values more similar to that of the Reference Dose.

The observed increase in the proportion of treatments adapted for each metric as depicted in Figures 3 and 4 is also somewhat characterized by the descriptive statistics of Table 2. The median values listed in Table 2 correspond to the trigger value that results in adapting 50% of treatments, and the IQR denotes the spread of trigger values that encompass the central 50% of observed changes in the metric value.

DISCUSSION

This work leveraged our clinical experience using CBCT-based daily online ART to demonstrate a conceptual framework of adaptive trigger selection and to present the varied effects of this framework for multiple pelvic treatment sites and techniques. Our purpose was not to prescribe the use of any specific trigger parameter-value pair but to demonstrate the concept and key considerations of trigger selection as observed from clinical online ART patients. The dosimetric and procedural effects of adaptive trigger selection depend heavily on the choice of trigger parameter and value. The value of target coverage metrics, which were frequently compromised by changes in patient anatomy, was substantially improved with the reoptimized adaptive plan. This was consistent across the four distinct treatment sites and techniques evaluated, although considerable interpatient effects were also observed. In some instances, OAR metrics were improved with the adaptive plan; however, this varied considerably between metrics. The seemingly random distribution of most OAR metrics suggests that their behavior may be governed by random effects or other unknown factors not explicitly identified here. A nonlinear increase in the number of treatments using the reoptimized adaptive plan occurred when triggers were made increasingly strict. Increasing the proportion of treatments adapted, in turn, improved the distribution of values for metrics that had been compromised by variations in patient anatomy.

The broad improvement in target coverage across treatment sites and techniques can largely be explained by two factors. First, target coverage metrics such as D99%, D95%, and V100% are sensitive to anatomic variations. In an appropriately optimized intensity-modulated plan, the prescription dose conforms tightly around the target, and the DVH curve exhibits high dose coverage that is followed by a sharp shoulder and steep fall-off. Little change in anatomy is therefore
required for the target to move relative to the region of high dose, causing large changes in the values of these metrics. Second, target coverage metrics are heavily prioritized during plan optimization. This occurs explicitly with the large weight given to related optimization objectives and with dose renormalization. It also occurs implicitly, as the importance of target coverage solicits particular attention from clinicians during plan review, and from the fact that the target coverage metric values are likely to be close to decision-making thresholds. These factors help explain why target coverage was initially compromised by anatomic variation but readily recovered by the adaptive plan.

The behavior of OAR metrics was less consistent. For some treatment sites and techniques, the adaptive plan did improve dosimetry (e.g., bowel and rectum metrics for patients treated with a sequential boost to the postoperative prostate fossa). However, many others exhibited largely random behavior. This may in part be due to the following considerations. First, many OAR metrics are prioritized lower than target coverage and will therefore be compromised during optimization if they conflict with such goals that have higher priority. Some OAR metrics that are often high priority (e.g., maximum dose) are in very close proximity to the target, making it difficult to optimize a plan that spares this portion of the OAR. In addition, values of OAR metrics are not necessarily near decision-making thresholds, depending on the prescription and treatment technique. Therefore, OAR metrics may generally not be as sensitive to small anatomic variations as target coverage, and they will not likely be substantially improved by the adaptive plan if their priority is low, if their improvement is costly, or if they are not near a dose limit. If strong forces do not affect the compromise and recovery of OAR metric values, the changes are likely to exhibit largely random behavior.

Ultimately, of course, it is the clinical consequence of these dosimetric changes that is the true assessment of the potential of ART. These data remain forthcoming. Online ART systems such as MR-linacs and, in particular, CBCT-guided systems remain new technologies, and clinical trials are currently ongoing to determine their impact across a wide variety of treatment sites. It is worth noting, however, that conventional clinical outcomes are most closely associated with the dosimetry represented by the Scheduled Doses, as these doses reflect those of an original plan delivered onto the daily changing anatomy. The difference of these values relative to the Reference Doses largely reflects our previous inability to observe the dose delivered to the patient on a daily basis. Our work presented here was not intended or designed to correlate differences in dose metric values with clinical outcomes, but rather to observe and present the variable behavior of trigger-value selection decisions on those values.
Our results also demonstrated that as ART triggers become stricter, the proportion of treatments adapted increases and the resulting distribution of metric values improves. Because the changes in target coverage metrics featured a skewed distribution, the curves representing the increase in the proportion of treatments adapted were nonlinear. This suggests that the instances with the most compromised target coverage can be corrected using a relatively small number of online ART treatments. Improving the dosimetry further, however, will require adapting additional treatments at an increasingly rapid rate, leading to diminishing returns.

Numerous other studies have investigated the dosimetric effects of various online ART implementations. Precise, quantitative comparisons are challenging due to differences in technical details as well as the variety of metrics reported. However, general trends and dependencies have continued to emerge. Li et al. compared non-adaptive image-guided radiotherapy, daily online ART, and a hybrid approach that combined a library of previously generated plans and some online plan reoptimization. 44 In analyzing the CTV D99% for patients treated to the prostate plus seminal vesicles, they observed large decreases in target coverage due to anatomic changes (57.8%-104.1% of prescription dose), which were then recovered when using daily online ART (99.6%-105.1% of prescription dose). This was even the case when plans were not reoptimized for each treatment as per their hybrid method (98.7%-104.7% of prescription dose). This effect of compromised target coverage resolved by online ART is consistent with our findings, including when reoptimization was used only for a proportion of treatments.

More recently, Siciarz et al. analyzed eight different methods of implementing ART for patients with hypofractionated treatments to the prostate using a combination of online and offline methods. 45 Their purpose was to discern a method that achieved adequate dose delivery without undue effort or demand for clinical resources. Included in their analysis was the change in several target coverage metrics also considered in this work. They observed that the CTV D99% and D95% both decreased on average about 4.5% compared to the original plan when not adapted. The values increased about 2% compared to the original plan when adapted on a daily basis. Similarly, the PTV D99% decreased approximately 11% and 2%, respectively, when never adapted or adapted daily. Maximum doses to the bladder and rectum did not demonstrate significant changes between the online ART plan and the original plan. These effects and magnitudes of change are also in line with our findings.

The treatment sites and techniques used in this work did not include nodal target volumes. Delivering dose to more extensive targets will likely influence the benefit online ART can provide for normal tissue sparing. Lutkenhaus et al. observed significant decreases in the volume of the bowel receiving high doses for patients treated to the bladder with nodal volumes. 46 Similarly, Kerkhof et al. found significant decreases in the volume of the bladder, rectum, and bowel receiving 45 Gy for patients treated to the cervix with nodal volumes. 47
In the work presented here, the dosimetric effects of online ART on OAR metrics seemed largely random. Were more extensive target volumes used, OAR objectives might influence plan reoptimization more strongly. The OAR metrics that did demonstrate the most improvement with online ART were the bowel and rectum for patients treated with a sequential boost to the postoperative prostate fossa. The fact that, in this case, the target is predominantly the post-operative space may influence the anatomic variations and result in a greater opportunity for online ART to improve OAR dosimetry. These patients also exhibited the least amount of target coverage compromise, potentially allowing the adaptive plan to achieve greater normal tissue sparing. This illustrates the complex interactions between dose objectives that occur during plan reoptimization.

For managing anatomic variations, ART represents just one available tool and should be considered alongside others like margins and robust plan optimization. Sun et al. reviewed the anatomic variation observed for patients treated for cervical cancer and discussed the implications for target margins, including with respect to ART. 48 Anatomic variations can also be considered more explicitly during robust plan optimization. 49 An early combination of ART and plan robustness is described by Birkner et al., 50 while Böck presents a framework for online robust ART including consideration of adaptation cost and frequency. 51 Similarly, Liu et al. combine robust plan optimization and ART by using images acquired during the course of treatment to update the statistics that drive the robust optimization. 52 This interaction between ART, margins, and robust optimization should continue to be explored, as their optimal use is intertwined and dependent on many specific treatment planning and delivery considerations.

Interest in the determination and use of triggers has persisted throughout the history of ART. However, prior investigations have been dedicated to the initiation, timing, and patient selection for offline ART addressing longitudinal changes. Numerous authors have examined the use of target and OAR changes in geometry and dosimetry to trigger offline ART for sites like head-and-neck, [9][10][11][12][13] soft tissue sarcoma, 14 lung, 11,15,16 cervix, 17 and prostate. 11 Triggers appropriate for determining the need for online ART during a particular treatment session, however, are in many ways functionally different from those appropriate for offline ART. As the unique benefit of online ART is to account for large, random, interfractional variations, the trigger information may not be available prior to the daily treatment, nor may it be relevant following the treatment. Lim et al. compared a midtreatment replanning strategy with a dosimetrically triggered strategy for offline ART patients treated to the cervix. 17 While both strategies improved target coverage, only the former also improved dose to normal tissues. These observations may in part be attributable to the complex anatomic variations in patients treated for cervical cancer. As Lim et al. point out, their strategies are intended to account for tumor regression, a longitudinal effect. Yet the position and orientation of the cervical cancer target is also strongly influenced by daily variations in bladder filling that would best be addressed with online ART.

Furthermore, as Sonke et al. describe in their review of the ART literature, "triggered adaptation has the disadvantage of being unpredictable." 53
This has a particularly strong consequence for online ART, where both the determination of the need to adapt and the adaptive intervention must occur in real time. As a result, with the current version of our system, the resources required for ART are governed not by whether the adaptive plan is selected, but by whether an adaptive plan is even to be considered. This reinforces the value of meaningful, objective, and quantitative triggers that might facilitate decision making, empower more members of the clinical team, and be incorporated into expedient workflows. 54 Considerations such as the trigger selection framework described here can help clinicians to resolve the levels at which adaptive interventions are likely to be of dosimetric consequence and to anticipate the required clinical resources.

To be used to this effect, online ART triggers that correspond with the best estimate of dose delivered to the patient should be determined. McCulloch et al. analyzed numerous potential offline triggers across several organs for head-and-neck cancer patients based on predictions of cumulative dose metric values. 55 Online ART implementation will benefit from similar analyses that also consider the large, random, interfractional nature of the anatomic variations it is intended to address. Sibolt et al. describe their initial experience addressing such variations with implementation of CBCT-based online ART for several sites in the pelvis. 18 In their description, the authors found that the predominant reason for selecting to treat with the adaptive plan was to maintain target coverage rather than to improve OAR sparing. This is consistent with our observation that recovering compromised target coverage was a stronger signal than OAR sparing for dosimetric improvement from online ART, making target coverage metrics the basis for effective triggers.

Several considerations should be kept in mind for proper interpretation of the work presented here. First, our analysis only considered the dosimetric effects observed for individual treatment sessions. A more comprehensive estimate of the total dose delivered to a patient might be made by accumulating dose over multiple treatments. However, doing so also introduces additional uncertainty from deformable image registration and from assumptions made during dose accumulation calculations. In addition, the relationship between accumulated dose and the conventional dose objectives normally used to evaluate the original treatment plan remains unknown. Considering each treatment individually can therefore be an effective way to assess how the original dose distribution is compromised within the scope and influence of the possible adaptive intervention.

A second limitation of this work is that each trigger was evaluated without consideration of other metrics. However, the conceptual framework described can be extended readily to consider multiple metrics. Online ART could be implemented using multiple triggers combined with logical operators (e.g., adapt when dose coverage is compromised excessively OR when a high-priority OAR dose limit is exceeded); a minimal sketch of such a composite trigger follows this paragraph. In addition, the effect that a trigger based on one metric had on another metric was also not considered. When combining triggers and their effects across metrics in this way, the number of possible triggers is exceedingly large. The dimensionality and complexity of such considerations may be better approached through machine learning and artificial intelligence.
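A composite trigger of the kind mentioned above can be written as a simple predicate. The sketch below is illustrative only; the parameter names and action levels are invented for the example.

```python
def adapt_today(ptv_d95_sch_ref_gy, rectum_v40_sch_ref_pct,
                coverage_level=-0.5, oar_level=5.0):
    """Adapt when target coverage is compromised excessively OR a
    high-priority OAR metric is exceeded (levels are illustrative)."""
    coverage_fired = ptv_d95_sch_ref_gy <= coverage_level   # Gy lost on PTV D95%
    oar_fired = rectum_v40_sch_ref_pct >= oar_level         # % gained on rectum V40Gy
    return coverage_fired or oar_fired
```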
Third, the metrics investigated here were those corresponding with DVH-type dosimetric objectives. To be evaluated, these require both a dose distribution and a structure contour, which, depending on the specific online ART workflow, may only be available following considerable time and effort during a treatment session. Nonetheless, the effects of various adaptive triggers could still be used at this point for clinical review or even be incorporated into automatic decision support tools. In addition, similar triggers could also be applied to estimates or predictions of the dose distribution and structure contours, potentially avoiding costs associated with generating them. Even in scenarios where the decision to adapt is derived from daily imaging, such methods may still implicitly incorporate approximate dose distributions or structure contours, in which case triggers of the form presented here remain relevant. The decision to adapt can also be made based on geometric imaging values alone, although these metrics are less directly correlated with the dosimetric quality of various treatment plans.

Fourth, this work considered triggers that described the Scheduled Dose and Adapted Dose in relation to the Reference Dose. Also important are the absolute values of particular metrics as they relate to clinical dose objective tolerances. However, not only is it unclear how to properly interpret dose tolerances conventionally used for the Reference Dose when applied to daily dose distributions like the Scheduled and Adapted Doses, but doing so increases the space of possible trigger parameters and values considerably. The decision to focus on dosimetric changes in relation to the Reference Dose was made for simplicity. Furthermore, the decision seems justified considering that the effects observed on target coverage metrics (which are frequently near the limits of dose objective values to avoid undue dose to normal tissues) changed considerably with the Scheduled Dose, such that the metric no longer achieved the clinically desired objective.

Fifth, this work represents our institution's experience with CBCT-based daily online ART. Elements that are knowingly or unknowingly specific to our clinical operation and workflow may influence the observed changes in metric values, which, in turn, determine the dosimetric and procedural effects. These institution-specific elements include, but are not limited to, preferences in prescribing, contouring, planning, delivering, and managing treatment. In addition, factors pertaining to the specific workflow of the existing CBCT-guided ART system may influence the generalizability of our results. While the conceptual framework of adaptive trigger selection for online ART described here can be extended to future CBCT-guided systems and even MR-linac systems, our quantitative results can only be reported as they pertain to our specific experience.

Lastly, of course, the work would also benefit from increased patient numbers in order to improve statistics as well as to represent scenarios not yet observed in the patients available to be incorporated in this study. Increased patient numbers are particularly important for shorter treatment courses, where the number of treatments per patient was limited.
Despite these limitations, this work provides previously unreported data and analysis with considerable implications for the effective use of online ART. This is the first analysis of this conceptual framework facilitating the study of trigger selection on data from clinical patients treated with CBCT-based daily online ART, presented here across multiple pelvic treatment sites and techniques. This analysis addresses several clinical concerns, such as the dosimetric effect of uncorrected interfractional variations, the ability of online ART to recover the original dosimetric values, and the magnitude of dosimetric improvement when delivering an adaptive plan instead of the original plan. In addition, this work demonstrates how observed dosimetric differences due to anatomic variations affect how ART trigger selection influences other consequences like the proportion of treatments adapted and the resulting distribution of metric values. In this work, it was shown that target coverage metrics were more sensitive to anatomic variations than OAR metrics, and that trigger values based on these metrics had a strongly nonlinear impact on the proportion of treatments adapted. These newly characterized effects and considerations will be critical to the successful implementation of online ART.

Our results can inform the implementation of online ART on both a patient-specific and a population level. We observed significant interpatient variations regarding the need for online adaptation to maintain adequate target coverage. While the utility of the framework described here does not require patient-level specification, similar methods that incorporate patient-specific information as it becomes available over the course of treatment may improve the precision in predicting the dosimetric and procedural effects of online ART. One simple strategy, included here for illustration, might be to treat a patient non-adaptively for a period of time (e.g., 1 week), gathering datapoints that may better reveal the potential cost and benefit of online ART, which would then be implemented as appropriate. This workflow would have the added benefit that multiple instances of contours could be included to generate patient-specific targets. 56,57 The interpatient variations observed during this work remain incompletely explained or characterized. Identifying and integrating related factors into patient management as well as treatment planning and delivery may promote the effective and judicious use of online ART for individual patients. Considerable work in determining the optimal implementation of this CBCT-guided ART system to maximize its impact at the patient and population level is currently ongoing at our institution.
On a larger scale, multicenter clinical trials could use analyses similar to those presented here to characterize population, sub-population, and individual effects of online ART in greater detail by leveraging their larger sample size and strict trial protocols. This work suggests that, in addition to the specific parameters and values used for an online ART trigger, other treatment planning factors may strongly influence the consequences of using online ART. These factors include how contours are delineated, how they are to be used by the treatment planning and delivery systems, and what specific objectives and priorities are assigned to them during plan optimization. Clinical trial protocols should specify factors like these to ensure consistent implementation and valid scientific comparisons. Even outside of clinical trials, the impact of these factors implies that they should be reported by investigators when describing experimental methods and results.

CONCLUSION

Large, random, interfractional variations in patient anatomy compromise target coverage across pelvic treatment sites. Our clinical implementation of CBCT-based daily online ART is able to recover the original target coverage. These effects were much larger than those observed for OAR metrics, suggesting that maintaining target coverage is the primary benefit of our use of daily online ART. This work observed a diminishing improvement in dosimetry when using increasingly strict triggers, and it also observed strong patient-specific effects. Taken together, these suggest that developing methods able to discern the value of online ART for each individual patient is critical for the technology to be implemented in the most effective, yet resource-conscious way. The concept of adaptive trigger selection, whether it is applied to a single trigger or to multiple triggers, each based on the Scheduled and/or Adapted Doses, looks to be an effective framework with which to evaluate the critical balance between the dosimetric benefit and the clinical resource cost of various ART implementation strategies.

AUTHOR CONTRIBUTIONS

Areas of expertise: TH-External beam-photons: adaptive therapy, TH-External beam-photons: Motion management-interfraction, TH-External beam-photons. All the authors contributed to conception, data acquisition, data analysis, and writing of the manuscript.

ACKNOWLEDGMENTS

This project was supported in part by funds provided by Vanderbilt Ingram Cancer Center through ADY's endowed directorship.

CONFLICT OF INTEREST STATEMENT

ADY reports a patent regarding an adaptive radiotherapy phantom.
Figure 1a presents a scatterplot showing the difference in value between the Scheduled Dose and the Reference Dose (SCH-REF) and between the Adapted Dose and the Reference Dose (ADP-REF) for an example target coverage metric, the PTV D95% for patients treated to the prostate plus proximal seminal vesicles. As the x-axis measures the SCH-REF difference, positive and negative values represent when the Scheduled Dose value was greater than or less than the Reference Dose value, respectively. Similarly, the y-axis (ADP-REF) discriminates whether the Adapted Dose value was greater than or less than the Reference Dose value. The direct comparison of Adapted Dose and Scheduled Dose values (ADP-SCH) can be observed from the 45° line corresponding to y = x. Figure 1b presents the corresponding histograms of the SCH-REF, ADP-REF, and ADP-SCH differences. Together, Figures 1a and 1b demonstrate how the Scheduled and Adapted Dose values compare to that of the original Reference Dose.

FIGURE 1 Differences in PTV D95% according to Reference, Scheduled, and Adapted Dose distributions for patients treated to the prostate plus proximal seminal vesicles.

FIGURE 2 Differences in rectum V40Gy according to Reference, Scheduled, and Adapted Dose distributions for patients treated with a sequential boost to the postoperative prostate fossa.

FIGURE 3 Effect of the SCH-REF trigger value for PTV D95% (x-axis) on (a) the proportion of treatments adapted (y-axis) and on (b) the resulting distribution of changes in PTV D95% values (y-axis) for patients treated to the prostate plus proximal seminal vesicles. Bands in (b) are bounded by the 5th, 10th, 25th, 50th, 75th, 90th, and 95th percentile values.

FIGURE 4 Effect of the SCH-REF trigger value for rectum V40Gy (x-axis) on (a) the proportion of treatments adapted (y-axis) and on (b) the resulting distribution of changes in rectum V40Gy values (y-axis) for patients treated with a sequential boost to the post-operative prostate fossa. Bands in (b) are bounded by the 5th, 10th, 25th, 50th, 75th, 90th, and 95th percentile values.

TABLE 1 Standardized dose metrics.

TABLE 2 Descriptive statistics of changes in standardized dose metric values according to Reference, Scheduled, and Adapted Dose distributions. ADP, Adapted Dose; IQR, Interquartile Range; na, Dose constraints not appropriate given target; ( -), Inadequate dose for comparison; REF, Reference Dose; SCH, Scheduled Dose.
Changes of hemodynamics and cerebral oxygenation after exercise in normobaric and hypobaric hypoxia: associations with acute mountain sickness Objective Normobaric (NH) and hypobaric hypoxia (HH) are associated with acute mountain sickness (AMS) and cognitive dysfunction. Only a few variables, like heart rate variability, are correlated with AMS. However, prediction of AMS remains difficult. We therefore designed an expedition study with healthy volunteers in NH/HH to investigate additional non-invasive hemodynamic variables associated with AMS. Methods Eleven healthy subjects were examined in NH (FiO2 13.1%; equivalent of 3883 m a.s.l.; duration 4 h) and HH (3883 m a.s.l.; duration 24 h) before and after an exercise of 120 min. Changes in parameters of electrical cardiometry (cardiac index (CI), left-ventricular ejection time (LVET), stroke volume (SV), index of contractility (ICON)), near-infrared spectroscopy (cerebral oxygenation, rScO2), Lake Louise Score (LLS) and cognitive function tests were assessed. One-way ANOVA, Wilcoxon matched-pairs test, Spearman's correlation analysis and Student's t-test were performed. Results HH increased heart rate (HR), mean arterial pressure (MAP) and CI and decreased LVET, SV and ICON, whereas NH increased HR and decreased LVET. In both NH and HH, cerebral oxygenation decreased and LLS increased significantly. After 24 h in HH, 6 of 11 subjects (54.6%) developed AMS. LLS remained increased until 24 h in HH, whereas cognitive function remained unaltered. In HH, HR and LLS were inversely correlated (r = − 0.692; p < 0.05). More importantly, the rScO2 decrease after exercise in NH significantly correlated with LLS after 24 h in HH (r = − 0.971; p < 0.01), and rScO2 correlated significantly with HR (r = 0.802; p < 0.01), CI (r = 0.682; p < 0.05) and SV (r = 0.709; p < 0.05) after exercise in HH. Conclusions Both acute NH and HH altered hemodynamics and cerebral oxygenation and induced AMS. Subjects who adapted their CI had higher rScO2 and lower LLS. Furthermore, rScO2 after exercise under normobaric conditions was associated with AMS at high altitude.

Introduction

Acute hypoxia under both normobaric (NH) and hypobaric (HH) conditions is associated with symptoms of acute mountain sickness (AMS) and cognitive dysfunction in humans [1][2][3][4]. The degree of hypoxemia plays a central role in the pathophysiology of AMS [5]. However, the decrease of peripheral oxygen saturation (SpO2) under hypoxic conditions has previously been shown to be of poor predictive value. Therefore, most publications identified a combination of different variables to predict AMS [6,7]. Unfortunately, some of these variables are difficult to obtain under laboratory conditions or must be measured invasively. Recently, heart rate variability (HRV) was identified as a potential predictor for AMS in healthy subjects, although the underlying mechanism is unclear [8]. Predicting the likelihood of developing AMS before ascent to HH could be important not only for mountaineers but also for untrained individuals, as improved transport technologies allow rapid ascent to high altitude. This also exposes persons with potentially preexisting conditions like cardiovascular disorders to an increased risk for AMS. It is therefore of particular interest to find further non-invasive variables for AMS prediction.
Simultaneously, exposure to high altitude is associated with a decrease of cerebral oxygen saturation, which is controversially discussed in terms of the incidence of cognitive dysfunction [3,[9][10][11]. We therefore performed a study with healthy volunteers to identify non-invasive variables under NH as predictors for AMS. Using electrical cardiometry, near-infrared spectroscopy, cognitive function testing and the Lake Louise Score (LLS), we hypothesized that (1) NH and HH would lead to similar changes of hemodynamic variables and decreases in systemic (SpO2) and cerebral oxygen saturation (rScO2), and that (2) hemodynamic changes and rScO2 in NH would correlate with the degree of AMS in HH.

Subjects and experimental protocol

After approval by the local Ethics Committee of the University of Munich, Germany (project no. 350-16) and obtaining written informed consent, 11 healthy female (n = 5) and male (n = 6) individuals aged 36.4 (±7) years, with a mean height of 178 (±6) cm and a mean body mass index of 22.7 (±2) kg/m2, were included in the study. All subjects were in good physical and mental condition, without any comorbidities or medication, and were measured at different time points in normobaric normoxia, NH and HH (see Fig. 1). No individual had stayed at an altitude above 2000 m a.s.l. for at least 6 weeks before the study. The following protocol was used to evaluate the effects of hypobaric hypoxia: after initial baseline measurements in Munich at 520 m a.s.l. (normoxia), all individuals were transferred to Zermatt, Switzerland (1608 m a.s.l.) by car. The next morning, ascent to Little Matterhorn at 3883 m a.s.l. was done by cable car (duration 45 min), followed by further measurements. After this, all subjects performed 120 min of endurance exercise by descending to around 3500 m and reascending to 3883 m a.s.l. Immediately after physical exercise, measurements were performed in an expedition tent (Keron 4 GT, Hilleberg AB, Frösön, Sweden) on the glacier. After spending one night at 3883 m a.s.l. in the hut, measurements were performed again 24 h after arrival at high altitude and repeated after breathing 100% oxygen for 5 min (Fig. 1). Additionally, 7 of these individuals (3 female, 4 male; 36.3 (±4) years; 179 (±6) cm; BMI 22.7 (±2) kg/m2) were examined under normobaric conditions in a hypoxic chamber (VPSA 16; Van Amerongen CA Technology, Tiel, Netherlands) 6 weeks before (n = 4) and 6 weeks after (n = 3) high altitude exposure. Again, baseline measurements were performed in Munich at 520 m a.s.l. (normoxia), followed by passive ascent (duration 45 min) to a simulated 3883 m a.s.l. in the hypoxic chamber and 120 min of endurance exercise at simulated 3883 m a.s.l., alternating cycling and walking with a 15% slope (Trac 3000 Tour Med and Crosstrainer 3000; Ergo-Fit Inc., Pirmasens, Germany) (see Fig. 1). To simulate an altitude of 3883 m a.s.l., participants were exposed to an inspiratory oxygen fraction of 13.1% at constant room temperature (20-24°C) and humidity (20-27%) for 4 h.

Acute mountain sickness score and cognitive performance

Symptoms of AMS, consisting of headache, gastrointestinal problems, insomnia, fatigue and dizziness, were evaluated using a self-report questionnaire according to the Lake Louise Score (LLS, 5 items, maximum point sum 15) [12]. AMS after exposure to NH/HH was defined as the presence of moderate or severe headache in combination with an LLS point sum of ≥3.
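The AMS case definition above reduces to a simple predicate. The sketch below is an assumption-laden illustration (not the study's code): each LLS item is scored 0-3, and "moderate or severe headache" is taken to mean a headache item score of at least 2.

```python
def has_ams(headache, gastrointestinal, insomnia, fatigue, dizziness):
    """Lake Louise Score criterion: 5 self-report items, each 0-3
    (maximum point sum 15). AMS requires at least moderate headache
    (item score >= 2, an assumed mapping) and a total point sum >= 3."""
    total = headache + gastrointestinal + insomnia + fatigue + dizziness
    return headache >= 2 and total >= 3

print(has_ams(2, 0, 1, 1, 0))  # True: moderate headache, LLS sum = 4
```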
Cognitive function was evaluated on an Android tablet with a test battery developed by the Mobile Health Systems Lab, Eidgenössische Technische Hochschule (ETH), Zurich, Switzerland, with a total of 4 different cognitive tests: first, the Trail Making Test A (TMT-A), where subjects must connect numbers, and the Trail Making Test B (TMT-B), where subjects must connect numbers and letters in an ascending sequence (i.e. 1-A, 2-B, 3-C…) as quickly as possible. Second, a target reaction test (tRT) and a sorting reaction test (sRT) were performed. In the tRT, one must keep a finger on a predefined area of the tablet until a spot appears, which should be touched as quickly and accurately as possible. In the sRT, similar looking geometrical forms must be quickly touched in the order displayed above. For all cognitive tests, speed, accuracy and response time were recorded electronically. Subjects were asked to take 3 of each test type in an isolated environment. Prior to the study, individuals trained on all tests to become familiar with the test battery and handling of the tablet.

Cerebral oxygenation and advanced hemodynamic monitoring

All measurements of hemodynamics, peripheral oxygen saturation and cerebral oxygenation were repeated five times at each time point to calculate mean values for every subject. Heart rate, peripheral oxygen saturation and non-invasive blood pressure were measured with a mobile battery-powered monitoring system (Infinity® M540 Monitoring, Draeger Inc., Luebeck, Germany). Cerebral oxygenation (rScO2) was measured using a non-invasive near-infrared spectroscopy (NIRS) monitor (INVOS™ 5100C Cerebral/Somatic Oximeter, Covidien, Boulder, CO, USA) powered by battery and a portable 240-V power converter. Non-invasive advanced hemodynamic monitoring was performed with a portable monitor using electrical cardiometry (ICON™ Cardiac Output Monitor, Osypka Medical GmbH, Berlin, Germany) to measure cardiac index (CI), stroke volume (SV), index of contractility (ICON) and left-ventricular ejection time (LVET). This technique is based on variations of thoracic electrical bioimpedance due to changes in thoracic conductivity during the heart cycle, registered by highly conductive sensors (Cardiotronic Sensors™, Osypka Medical GmbH, Berlin, Germany) [13].

Statistical analysis

Normally distributed data are given as mean and standard deviation. In case of repeated measurements, a one-way ANOVA with Greenhouse-Geisser correction, followed by multiple comparisons with Bonferroni correction, was performed (p < 0.05/n). Differences in LLS were analyzed by the Wilcoxon matched-pairs test. Correlations were assessed using Spearman's rank correlation coefficients. A t-test with Bonferroni-Sidak correction was used to detect differences between groups after exercise. All statistical analyses were performed using PRISM version 7 (GraphPad Software Inc., La Jolla, CA, USA).

Discussion

Acute mountain sickness is an ongoing topic in high altitude medicine. Until now, different variables with a predictive value for AMS could be identified [6,7]. Of high interest, Sutherland et al. have recently shown a significant correlation between heart rate variability and AMS, evaluated by LLS [8]. However, HRV can be of limited value in subjects with cardiovascular comorbidities, e.g. arrhythmias or ß-blocker intake. It is therefore of interest to identify further predictive variables which can be assessed easily even in remote areas. In our study, we were able to identify further hemodynamic variables associated with AMS.
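For the correlation analyses, the p < 0.05/n Bonferroni rule can be applied as below. This is a minimal Python/SciPy sketch with simulated subject-level data (the study used PRISM), included only to make the correction explicit; the variable names and values are assumptions.

```python
import numpy as np
from scipy import stats

def spearman_bonferroni(named_pairs, alpha=0.05):
    """Spearman rank correlations for several (x, y) pairs, declaring
    significance at alpha / n, where n is the number of comparisons."""
    n = len(named_pairs)
    results = {}
    for name, (x, y) in named_pairs.items():
        rho, p = stats.spearmanr(x, y)
        results[name] = (rho, p, p < alpha / n)
    return results

rng = np.random.default_rng(2)
hr = rng.normal(90, 10, 11)                    # simulated values, 11 subjects
lls = 12 - 0.08 * hr + rng.normal(0, 1, 11)
rsco2 = 55 + 0.20 * hr + rng.normal(0, 2, 11)
print(spearman_bonferroni({"HR-LLS": (hr, lls), "HR-rScO2": (hr, rsco2)}))
```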
To the best of our knowledge, this is the first trial using electrical cardiometry combined with cerebral near-infrared spectroscopy at high altitude. In detail, in our healthy volunteers we have seen significant increases in HR, CI and SV and decreases in LVET, ICON and cerebral oxygenation. Effects on hemodynamic variables were most pronounced after exercise in both normobaric and hypobaric hypoxia at an altitude of 3883 m a.s.l. In accordance with the results of Sutherland et al. [8], we found a significant negative correlation between HR and LLS in hypobaric conditions. However, variables assessed in hypobaric conditions can only support recommendations to interrupt further ascent or to immediately descend; they cannot provide predictive value. In contrast, associations between variables assessed in a safe normobaric training environment and the risk of developing actual AMS at high altitude could help to predict the individual's risk for AMS. In this regard, we could show a negative correlation between the normobaric rScO2 decrease after exercise and LLS after 24 h in hypobaric conditions on the mountain. Because our subjects were the same under NH and HH, an association is likely. Thus, the rScO2 decrease at simulated altitude in a normobaric chamber might serve as a predictive variable in the future. This is of particular interest because there is an ongoing debate about the air-equivalent model, which holds that NH and HH are two different stimuli for AMS [14,15]. In some previously published trials, the severity of AMS was higher in HH than in NH, whereas the underlying mechanism is unclear [16,17]. Additionally, preacclimatization in HH can reduce the severity of AMS, whereas preacclimatization in NH was less effective [18][19][20][21]. However, the main factor affecting AMS seems to be acclimatization to hypoxia. The role of hypoxia in AMS is also supported by our presented data. When focusing on the four individuals with the highest and lowest LLS, it turns out that they had the lowest/highest cerebral oxygenation, which supports the hypothesis that oxygen delivery and clinical symptoms are associated. Simultaneously, the significant correlation between the rScO2 decrease after exercise and the corresponding cardiac index underlines the importance of adequate hemodynamic adaptation to hypoxic conditions. Although this is not a new finding, it is of interest that we have found associations between short exercise in a hypoxic chamber and symptoms of AMS at high altitude. Furthermore, subjects who were able to adequately adapt their cardiac index, either by an increase of HR or SV, have shown better cerebral oxygenation and a lower LLS point sum. Thus, rScO2 after exercise under normobaric hypoxia could be a possible predictor for AMS at high altitude. This is of particular interest because access to high altitude areas is becoming easier, even for subjects with cardiovascular diseases, exposing those individuals to a risk for AMS [22]. In contrast to the observed changes in hemodynamics and oxygenation, the cognitive function tests used in this trial did not reveal any changes. This is in some accordance with the literature, where results are inconsistent: Asmaro et al. (2013) investigated cognitive dysfunction in hypoxic conditions at a simulated altitude of up to 7620 m a.s.l. in healthy volunteers [2]. The authors were able to detect impairments of cognitive performance in this setting of extreme high altitude. Davranche et al.
(2016) studied brain oxygenation and cognitive function during 4 days at an altitude of 4350 m a.s.l. and detected a reduction in terms of speed and accuracy in the early phase of hypoxic exposure, whereas the slowdown of reaction time was not detectable anymore after 2 days at high altitude [3]. However, Issa et al. (2016) found no significant changes in overall cognitive performance during an expedition to Mount Everest [23]. Also, Pramsohler et al. investigated cognitive function in subjects that slept at a simulated altitude of 5500 m a.s.l. [1]. While the combined parameter of cognitive and motoric reaction time did not change, these authors even found a correlation between lower SpO2 and shorter cognitive reaction time. In summary, the data regarding hypoxia and cognitive function are contradictory. One reason for this could be the fact that the tests applied throughout the studies are not standardized and vary. In any case, at this point, cognitive function tests are not associated with symptoms of AMS.

Table note: Values are presented as mean ± SD. Statistical analysis with one-way repeated-measures ANOVA with Greenhouse-Geisser correction, followed by multiple comparisons with Bonferroni correction; *p < 0.05, **p < 0.01, ***p < 0.001 vs. baseline in Munich at 520 m.

Our study has limitations: First, we only included healthy volunteers and can only speculate that the cerebral oxygenation decrease in normobaric hypoxia would be of predictive value in patients with decreased heart rate variability. Secondly, due to the higher heart rate, exercise intensity seems to have been slightly higher in HH than in NH. This is probably due to the fact that the expedition on the glacier could not be carried out as originally planned due to the weather conditions, but had to be modified. Third, the set of cognitive function tests used was insensitive to detect mild cognitive impairment. Thus, in future studies, a larger set of more standardized tests is recommended. However, our trial provides new insights regarding the relation between hemodynamics, cerebral oxygenation and LLS, and thus these variables assessed in normobaric conditions might help to predict AMS at high altitude.

Conclusion

Non-invasive hemodynamic variables and cerebral oxygenation after exercise in normobaric hypoxia seem to be associated with the occurrence of acute mountain sickness at high altitude. This could be particularly interesting as a predictor for acute mountain sickness. The variables described here for the first time should therefore be investigated further at high altitude, including more healthy participants as well as subjects with comorbidities.
Isolation and characterization of endophytic fungi having plant growth promotion traits that biosynthesize bacosides and withanolides under in vitro conditions

Endophytes hold immense potential as plant growth promoting (PGP) elicitors and as producers of secondary metabolites of medicinal importance that mimic those of their hosts. In the present study, we explored Bacopa monnieri plants to isolate and identify fungal endophytes with PGP elicitation potential and to investigate their secretion of secondary metabolites, namely bacosides and withanolides, under in vitro conditions. Three fungal endophytes isolated (out of 40 saponin-producing isolates) from leaves of B. monnieri were examined for in vitro biosynthesis of bacosides. On morphological, biochemical, and molecular identification (ITS gene sequencing), the isolated strains SUBL33, SUBL51, and SUBL206 were identified as Nigrospora oryzae (MH071153), Alternaria alternata (MH071155), and Aspergillus terreus (MH071154), respectively. Among these strains, SUBL33 produced the highest quantities of Bacoside A3 (4093 μg mL−1), Jujubogenin isomer of Bacopasaponin C (65,339 μg mL−1), and Bacopasaponin C (1325 μg mL−1), while Bacopaside II (13,030 μg mL−1) was produced maximally by SUBL51. Moreover, the aforementioned strains also produced detectable concentrations of withanolides—Withaferin A, Withanolide A (480 μg mL−1), and Withanolide B (1024 μg mL−1), respectively. However, Withanolide A was not detected in the secondary metabolites of strain SUBL51. To the best of our knowledge, the present study is the first report of Nigrospora oryzae as an endophyte in B. monnieri with the potential to biosynthesize economically important phytomolecules under in vitro conditions. Supplementary Information The online version contains supplementary material available at 10.1007/s42770-021-00586-0.

Introduction

Medicinal plants and their secondary metabolites of high economic value are widely used as raw materials for the pharmaceutical, cosmetic, and perfumery industries [1]. Globally, a large share of the population (80%) still relies on herbal products and supplements for primary healthcare and immune boosting [2]. Therefore, there is a continuous surge in the demand for herbs and herbal products. During the COVID-19 pandemic, the demand for and usage of herbal supplements and drugs has increased even further. In the Indian subcontinent, in the present scenario, attention to medicinal plants in day-to-day life is highly recommended for the maintenance and boosting of the immune system. Bacopa monnieri, generally known as Brahmi, is widely used in ayurvedic preparations (the Indian system of traditional medicine) for treating various ailments such as epilepsy, anxiety, poor memory, neurosis, and psychosis, and for the rejuvenation of sensory organs [3][4][5][6]. Moreover, in modern times it has also been used in remedies for many other conditions including stress, depression, ulcers, and hepatic infection [7,8]. The high economic value of and global demand for bacosides have boosted the unorganized collection and overexploitation of B. monnieri, leading to a sharp reduction of its germplasm and a massive loss of its natural habitats [9]. Furthermore, bacosides are present in very low quantities in the plant, and the extraction procedure requires huge amounts of biomass, leading to environmental imbalance and making this plant an endangered species. Overexploitation has placed B. monnieri on the list of highly endangered medicinal plants in India [9].
Similarly, Withania somnifera (Ashwagandha) is regarded as Indian ginseng with potential therapeutic values [10]: improving body strength and the immune system, anti-aging, protection of hepatic and cardiac cells, control of cholesterol levels, and antipyretic, antiulcer, and hemopoietic activities, among others [11,12]. The therapeutic potential of W. somnifera is due to the presence of terpenoid saponins collectively known as withanolides, which include Withanolide A, Withanolide B, and Withaferin A. However, like B. monnieri, W. somnifera is also overexploited, and its germplasm is undergoing rapid depletion. Moreover, it is evident that global warming and climate change have impacted humans and agriculture. The increasing population and food security are big challenges too; with the shrinking area of land under cultivation and other challenges from abiotic [13] and biotic stresses, it is hard to maintain yield attributes [1]. In recent years, endophytes have been regarded as major sources of potential metabolites such as alkaloids, benzopyranones, benzoquinones, flavonoids, phenols, steroids, terpenoids, tetralones, and xanthones [14], with an array of novel therapeutic values [15]. Endophytic fungi colonize healthy plant tissues intercellularly or intracellularly [16] and maintain a harmonious symbiotic relationship without causing any apparent harm or disease symptoms in all examined plants [17,18]. Endophytes dwelling inside medicinal plants form a positive association over time and yield secondary metabolites along the same lines as the host plants [15]. Endophytes isolated from medicinal plants have proved to be involved in the modulation of secondary metabolites and the production of pharmacologically important substances; they facilitate nutrient exchange and enzyme activity, enhance stress resistance in plants, degrade pollutants, and help plant growth by producing plant hormones [12,19]. Therefore, to cope with the aforementioned issues and meet demand, there is an urgent need to search for an efficient, eco-friendly, and cost-effective alternative route to produce high contents of bacosides and withanolides. In this regard, native endophytic fungi, among the simplest eukaryotic microorganisms, could be new sources of the aforesaid saponins and would protect naturally occurring B. monnieri and W. somnifera resources. The potential of endophytes to produce pharmacologically important secondary metabolites encouraged us to undertake studies of unexplored native endophytes from B. monnieri and to look for the biosynthesis of potentially important secondary metabolites under in vitro conditions. We hypothesize that the isolated fungal endophytes will mimic the secondary metabolites of B. monnieri and will scale up the yield of bacosides compared to in planta production.

Collection of plant material

The Brahmi plants were cultivated in the research fields of the CSIR-Central Institute of Medicinal and Aromatic Plants (CSIR-CIMAP, Lucknow, India, an institute of national importance dedicated to medicinal plant research), located at an elevation of 131 m above sea level (26°16′ N, 80°46′ E); the region has semiarid subtropical climatic conditions with an average rainfall of 1000 mm (https://en.climate-data.org/asia/india/uttar-pradesh/lucknow). The fields were surveyed, healthy-looking plants were collected, and they were transferred to the laboratory for the isolation process described previously in our research article [16].
Isolation of endophytic fungi

The isolation of endophytic fungi from Brahmi plant leaves was carried out as per previously described methods [16,20]. The leaves were rinsed well with tap water for a few minutes to remove surface adherents, washed with double-distilled water, and surface sterilized with 70% ethanol solution for 10 s, followed by treatment with 4% sodium hypochlorite solution for 5 min. The leaves were then rinsed with sterile double-distilled water for 1 min (3 times). The sterilized plant leaves were dried on pre-sterilized filter paper, chopped into small pieces (3 to 5 mm), and transferred onto potato dextrose agar (PDA) plates supplemented with streptomycin (1 g L−1), followed by incubation in a BOD incubator at 28 ± 1 °C under dark conditions until the growth of fungal hyphae. Afterwards, the hyphae were transferred carefully onto fresh PDA plates to obtain pure cultures [15,16].

Fungal culturing and preparation of fungal crude extract

The isolated fungal strains were transferred to potato dextrose broth (PDB) and incubated at 28 ± 2 °C in the dark under constant shaking (200 rpm) for 16 days. After incubation, the fungal culture filtrate was separated from the mycelia by filtering through cheesecloth. The filtered supernatant was extracted (at a ratio of 1:1) with ethyl acetate as the organic solvent; the mixture was subsequently left for 48 h at room temperature to properly solubilize the fungal metabolites in the solvent. Afterwards, the ethyl acetate fractions were collected through a separating funnel and concentrated in vacuo (Bucchi Rotavapor, India). The concentrated fungal metabolites were dissolved in 10 mL of methyl alcohol and subsequently filtered through 0.2 μm filters to obtain the crude extract.

Screening of saponin-producing endophytic fungi

The isolated pure fungal cultures were screened for their ability to produce saponins. To look for the presence of saponins in the crude extract, a 5 mL aliquot of fungal crude extract (FCE) was mixed with 25 mL of distilled water and heated in a microwave for 2 min, followed by vigorous shaking for 1 min with a vortex mixer. Afterwards, the mixtures were allowed to stand for 10 min. The occurrence of a stable froth is indicative of the presence of saponins. We found 40 endophytic fungi capable of producing saponins. These saponin-positive isolates were further analyzed for their ability to produce bacosides and withanolides.

Quantification of Bacoside A content

For the quantification of Bacoside A content, we followed the protocol of Murthy et al. [21]. A 250 mL volume of FCE was mixed with the same volume of ethyl acetate and extracted sequentially two times; the mixture was left overnight with ethyl acetate, and the process was repeated once again the next day for complete extraction of the metabolites. The extracted ethyl acetate fractions were pooled and subsequently concentrated using a rotavapor (Bucchi, India). The dry residue of metabolites was collected, dissolved in 5 mL of HPLC-grade (Sigma Aldrich) methanol, and eluted using a micropipette. Afterwards, the eluted metabolites were centrifuged at 6000 rpm for 5 min (Sigma Aldrich), and the supernatant was filtered using a 0.45 μm nylon syringe filter. The filtrate thus obtained was used for HPLC analysis. The analysis was performed using a Shimadzu HPLC (Prominence model, Singapore) equipped with an LC-20AD pump, a SIL-20AC HT autosampler, an SPD-M20 PDA detector, a CTO-10AS VP column oven, and a DGU-14A degasser for the mobile phase.
A reverse-phase C18 column (250 mm × 0.46 mm × 0.25 μm) was used. For the mobile phase, acetonitrile–water (with 0.05% orthophosphoric acid) was used with a gradient solvent system [21]. Detection was made at 205 nm. The acquisition and computation of data were carried out using LabSolutions software. The Bacoside A standard used was purchased from Natural Remedies Pvt. Ltd., India. A total of six fungal isolates were found to synthesize Bacoside A (a mixture of four bacoside standards: Bacoside A3, Bacopaside II, Jujubogenin isomer of Bacopasaponin C, and Bacopasaponin C). These fungal isolates were also examined for their withanolide production potential.

Quantification of Withanolide A, Withanolide B, and Withaferin A content

The preparation of the fungal metabolite solution for quantification of withanolide content was carried out along the same lines as the methods described by Chaurasiya et al. [22]. Here also, a reverse-phase column (250 mm × 0.46 mm × 0.25 μm) was used. The mobile phase was water (with 0.1% acetic acid)–methanol (with 0.01% acetic acid) run with a gradient solvent system, and detection was made at 227 nm [22]. The standards of Withanolide A, Withanolide B, and Withaferin A were procured from Natural Remedies Pvt. Ltd., India. Out of the six, the three best strains producing both bacosides and withanolides were selected for further characterization (biochemical: phytochemical synthesis, extracellular enzyme production, and plant growth promoting activities; morphological; and molecular: 5.8S ITS sequencing and BLAST analysis) and other important studies.

Qualitative screening of other phytochemicals

The qualitative screening of phytomolecules from fungal crude extracts (FCEs) was performed along the same lines as described by Bandoni et al. [23]. One mL of FCE was mixed with 1 mL of chloroform, followed by the addition of 0.75 mL of concentrated sulfuric acid; the appearance of a reddish-brown precipitate at the interface indicates the presence of terpenoids. For the detection of phenols, 1 mL of FCE was transferred to a test tube and left to air dry. The air-dried crude extract was mixed with 1 mL of distilled water and a few drops of FeCl3; the appearance of a dark green color shows the presence of phenols. To detect tannins in the FCE, 1 mL of crude extract was mixed with 0.1% FeCl3; the development of a brownish-green or blue-black coloration shows the presence of tannins in the crude extract. The presence of steroids was detected by mixing 1 mL of FCE with 2 mL of chloroform, after which the same volume of concentrated sulfuric acid was added slowly to the mixture; the upper layer turning red while the sulfuric acid layer shows green fluorescence indicates the occurrence of steroids. Alkaloids were detected by mixing 1 mL of FCE with 1 mL of 1% HCl solution in a steam bath; afterwards, a few drops of Mayer's reagent were added to the mixture, and the development of a creamish/buff-colored precipitate indicates the occurrence of alkaloids. The presence of flavonoids was detected by mixing 1 mL of methanolic FCE with a few drops of 1% ammonia solution; the development of a yellow color indicates the occurrence of flavonoids. Anthraquinones were detected by mixing 1 mL of FCE with 0.5 mL of dilute ammonia and shaking; the development of a red color indicates the occurrence of anthraquinones. Finally, to detect glycosides, 1 mL of glacial acetic acid was added to 1 mL of FCE, followed by a drop of 5% ethanolic ferric chloride solution.
Then, 1 mL of concentrated sulfuric acid was carefully dropped down the side of the test tube; the development of a brownish ring between the two layers indicated the occurrence of cardiac glycosides [23].

Qualitative screening of extracellular enzymes

The qualitative screening of extracellular enzyme activities was performed following the methods described by Sunitha et al. [24]. Ten-day-old fungal cultures (grown at 28 ± 2 °C in the dark) were used for this purpose. The amylase activity of the endophytic fungi was evaluated by inoculating fungal hyphae on starch agar medium (Himedia, India). After 4 days of incubation at 28 ± 2 °C in the dark, 1% iodine solution was poured onto the culture plates; the appearance of a colorless halo zone around the fungal colony indicates positive amylase activity. The cellulase activity was examined on PDA (Himedia, India) supplemented with 1% (w/w) carboxymethyl cellulose (CMC) [25]. After 3 days of incubation at 28 ± 2 °C in the dark, the culture plates were stained with 2% Congo red solution for 5 min and then de-stained by washing with 1 M sodium chloride solution; the presence of a clear halo zone around the colony indicates positive cellulase activity. For estimating protease and lipase activities, fungal hyphae were inoculated on glycerol casein agar (GCA) and tributyrin (TB) agar medium (Himedia, India), respectively; after 4 days of incubation at 28 ± 2 °C in the dark, a clear halo zone around the colony indicated a positive result. Similarly, laccase activity was evaluated by inoculating fungal hyphae on glucose yeast extract peptone agar medium supplemented with α-naphthol (0.05 g L−1) and incubating at 28 ± 2 °C in the dark for 4 days; the medium turning from colorless to blue indicates positive laccase activity [24].

Screening of plant growth promoting (PGP) activity

Qualitative screening of PGP activities such as indole acetic acid (IAA) production, phosphate solubilization, siderophore production, catalase activity, and antimicrobial activity was carried out for the three selected endophytes. Ten-day-old fungal cultures (grown at 28 ± 2 °C in the dark) were used for this purpose. IAA production by the endophytic fungi was evaluated by transferring fungal hyphae to PDB and incubating for 4 days on a rotary shaker at 200 rpm and 28 ± 2 °C in the dark. At the end of the 4th day, 2 mL of supernatant was separated by centrifugation and mixed with 4 mL of Salkowski reagent; the appearance of a stable pink color indicated positive IAA production [26]. For the phosphate solubilization and siderophore production assays, a small disc of fungal hyphae from a 10-day-old culture was obtained with a cork borer and transferred onto culture plates containing Pikovskaya's (PVK) agar medium (HiMedia, India) and CAS agar, respectively. The plates were incubated at 28 ± 2 °C in the dark for 7 days. The appearance of a clear zone around the growing colony on PVK agar indicates positive phosphate solubilization activity [27], whereas siderophore production was indicated by the development of a deep blue to yellow or orange color zone around the colony on CAS agar [28]. Catalase activity was examined by growing fungal hyphae on potato dextrose agar medium at 28 ± 2 °C in the dark for 4 days; an appropriate amount of H2O2 was then added to the culture plates, and the liberation of oxygen gas in the form of bubbles indicated positive catalase activity.

Antibacterial assay

The antibacterial activities of the FCEs were evaluated by the agar diffusion method [29]. Both Gram-positive (Bacillus sp.)
and Gram-negative (Pseudomonas aeruginosa) bacterial strains were tested. The isolated fungal strains were cultured at 28 ± 2 °C in the dark for 16 days in PDB. The mycelium-free FCE was separated and used for the assay. The aforesaid bacterial cultures were grown overnight in nutrient broth (HiMedia, India). The supernatant was separated from the bacterial cells by centrifugation, and 100 μL of the cell-free culture broth was poured and spread on PDA plates. Afterwards, a 6 mm well was made in each PDA plate and loaded with 0.2 mL of FCE. Streptomycin sulfate (200 mg/well) was taken as the reference. The activities of the FCEs were assessed by measuring the zone of growth inhibition (in mm).

Antagonistic assay against pathogenic fungi

The antagonistic effect of the isolated endophytic fungi was evaluated against Fusarium oxysporum using the dual-culture technique [30]. Fusarium oxysporum f. sp. lycopersici (ITCC 1322), obtained from the ICAR-Indian Agriculture Research Institute, New Delhi, India, was used for this purpose. A ten-day-old culture of the pathogenic fungus F. oxysporum was transferred to one side of a fresh PDA plate while the test cultures were inoculated on the other side of the plate, and the plates were incubated at 28 ± 2 °C for 7 days in the dark. Pure cultures of F. oxysporum inoculated on PDA plates were used as controls. Inhibition of the growth of F. oxysporum in the presence of the test cultures was recorded as positive antagonistic activity.

Morphological and molecular identification of selected endophytic fungi

The three selected fungal isolates were identified by observing their morphological characteristics on PDA under ambient daylight conditions at room temperature. The molecular identification of the selected endophytic fungi was performed by amplification and analysis of ITS rDNA sequences. The genomic DNA of the endophytic fungi was isolated following the protocol reported by Thakur et al. [31]. Afterwards, the yield and quality of the genomic DNA were estimated using a Nanodrop spectrophotometer (Nanodrop ND 1000). For amplification of the ITS rDNA sequences, the universal primers Internal Transcribed Spacer 1 (ITS1, 5′-TCC GTA GGT GAA CCT GCG G-3′) and Internal Transcribed Spacer 4 (ITS4, 5′-TCC TCC GCT TAT TGA TAT GC-3′) were used. Nearly 25 ng of genomic DNA and 5 pmol of the aforementioned primers were used for the amplification. The amplification of the ribosomal gene sequence was performed using a Mastercycler gradient (Eppendorf) programmed as follows: 95 °C for 5 min; 32 cycles of 95 °C for 30 s, 55 °C for 30 s, and 72 °C for 1 min; a final extension at 72 °C for 10 min; and a hold at 4 °C. The amplified PCR products were purified using a PCR Cleanup Kit (Mol Bio, Himedia) following the manufacturer's instructions. The PCR product obtained was sequenced on a 3130xl Genetic Analyzer (Applied Biosystems) using a sequencing kit (Applied Biosystems, USA) and the aforementioned primers [11,16]. The resultant sequence was analyzed by nucleotide BLAST (https://blast.ncbi.nlm.nih.gov/).

Phylogenetic analysis and nucleotide sequence accession numbers

After performing the nucleotide BLAST of the sequencing product, the fasta sequences of the most similar organisms, along with nearest-neighbor sequences, were downloaded from the NCBI database. Apart from these, one analogous sequence from another fungal group was included as an outgroup.

Screening and identification of endophytic fungal isolates

The endophytic fungi isolated from the leaves of Bacopa monnieri were identified through both morphological (Fig. 1) and molecular characters.
Both approaches are needed to define and identify a particular endophyte; Tiwari et al. [16] also stress using both techniques to reveal the identity of any endophyte. The molecular identification through homology searching (Table 1) and BLAST analysis revealed that the isolated strains SUBL33, SUBL51, and SUBL206 belonged to Nigrospora, Alternaria, and Aspergillus, respectively. The aforesaid strains exhibited 99%, 99%, and 98% similarity with Nigrospora oryzae (KU375674), Alternaria alternata (FJ025207), and Aspergillus terreus (JX863370), respectively. The phylogenetic positions of Nigrospora oryzae (strain SUBL33), Alternaria alternata (strain SUBL51), and Aspergillus terreus (strain SUBL206) relative to other related organisms are depicted in Fig. 2a.

Screening of phytochemicals of endophytic fungi

The results of the qualitative phytochemical analysis of the FCEs are summarized in Table 2 and Fig. S1. The occurrence of phytochemicals in the endophytes shows that they have the potential to be used as an alternative for plant-free biosynthesis and for the production of economically important phytomolecules for medicinal and industrial use [14,33]. Saponins [34] and terpenoids [35] have multiple therapeutic values and are usually found in medicinal and aromatic plants. All three isolated endophytic fungi were able to produce saponins and terpenoids, in line with other reports [34,36]. The crude extract of Aspergillus terreus (strain SUBL206) showed the presence of phenolics, tannins, flavonoids, and steroids. The occurrence of phenolic compounds in fungal endophytes has also been reported, with remarkable potential such as antioxidant, antitumor, anti-inflammatory, antimicrobial, anticarcinogenic, and antiviral activities [15,37,38], metal chelation, and reduction of lipoxygenase activity [39,40]. The production of flavonoids and tannins further indicates enhanced antioxidant capacity [15,41]; such compounds, when used as therapeutic or dietary supplements, help in mitigating free radicals. Steroids are also important secondary metabolites and are routinely used in medicine due to their antimicrobial and other biological activities [42]. Thus, the phenolic compounds obtained from the fungal extract may find a place in medicinal preparations for therapeutic purposes. The crude extract of Alternaria alternata (strain SUBL51) gave a positive Mayer's test, which indicates the presence of alkaloids. The presence of different types of alkaloids in FCEs has been reported earlier, exhibiting potentials such as antimicrobial, insecticidal, and anticancer activities [43]. The array of metabolites produced may reflect the contributions of the different endophytes in a particular plant, which tend to be host-specific rather than general [44]. Similarly, the diverse secondary metabolites found in the different endophytes isolated from B. monnieri might contribute diverse functions in vivo in the plant, although all the selected strains failed to give positive results for anthraquinones and cardiac glycosides.

Screening of extracellular enzymes of pure cultures

The results of the qualitative analysis of the extracellular enzymes of the pure cultures are depicted in Table 3 and Fig. S2. All endophytic strains showed positive amylase activity. It is well reported that endophytes utilize starch as a main carbon and energy source by hydrolyzing it with amylase [19].
These results are strengthened by previous findings, and we can predict that large-scale in vitro culturing of these endophytes requires starch as an energy source. Laccase and proteases were produced only by Aspergillus terreus (strain SUBL206). Generally, fungi that possess the ability to produce laccase are found to remove toxic phenols from the medium in which they grow [45]. The production of laccase by endophytic fungi is in conformity with the earlier results of Sunitha et al. [24], in which a few of the isolated fungi were able to produce laccase. The enzyme has also been regarded as useful in a number of areas such as textile dye transformation, waste detoxification, biosensors, and food technology [46]. We suggest that the laccase-producing endophytes isolated here will be a good source of this enzyme and can possibly be exploited for this purpose in the future. Proteases have commercial significance equal to that of laccase and are presently used in a broad range of domains such as bioremediation, leather manufacture, animal cell culture, insecticidal agents, silk degumming, and the detergent, cosmetics, food, and pharmaceutical industries [47,48]. Our finding that Aspergillus terreus (strain SUBL206) produces proteases is in line with reports for Aspergillus oryzae [49]. The cellulase enzyme is widely used in the pulp and paper industries. We observed that Nigrospora oryzae (strain SUBL33) and Aspergillus terreus (strain SUBL206) were able to hydrolyze cellulose via the production of extracellular cellulase. The production of extracellular cellulase by endophytic fungi has been well reported [24]. The cellulase production by the aforementioned fungi indicates that these endophytes possess the genetic machinery necessary to generate cellulase, which might be used by the endophytic fungi for establishing themselves in the host plant.

Screening of plant growth promoting (PGP) activity

The PGP activities of the fungal strains are presented in Table 4 and Fig. S3. All strains were found to have catalase activity. Nigrospora oryzae (strain SUBL33) and Alternaria alternata (strain SUBL51) showed a positive IAA test. Moreover, Alternaria alternata (strain SUBL51) also showed siderophore activity. The production of IAA and siderophores by endophytic fungi has been reported earlier in many studies [50,51]. However, none of the aforementioned strains was able to solubilize phosphate. The PGP activities of endophytes are directly attributed to their indole production, phosphorus mobilization, ammonia production, scavenging of free radicals, and synthesis of enzymes or metabolites that notably inhibit the growth of pathogenic microorganisms [50], helping plants remain healthy. Furthermore, microbes with PGP potential, whether endophytic or rhizospheric, support plant growth and development through the secretion of plant growth promoting enzymes [52]. These microbes also contribute to the enhancement [1,11,52] and modulation of secondary metabolites in planta [12]. Therefore, microbes with such potential will be beneficial for targeted, enhanced metabolite production.

Detection of antibacterial and antagonistic activity

The antibacterial activity of the extracellular fungal extracts is presented in Table 5, whereas the antagonistic activities against F. oxysporum f. sp. lycopersici (ITCC 1322) are depicted in Table 6. The antibacterial activity of the endophytes was examined against both Gram-positive (Bacillus sp., GenBank no. JN700911) and Gram-negative (Pseudomonas aeruginosa strain CRC5, GenBank no.
HQ995502, microbial type culture collection no. MTCC 9800) bacteria by the well diffusion method. Only the fungal extract of Alternaria alternata (strain SUBL51) showed an 8 mm inhibition zone against the Gram-positive Bacillus sp.; however, it failed to inhibit the growth of the Gram-negative strain. No antimicrobial activity was observed for Nigrospora oryzae (strain SUBL33) and Aspergillus terreus (strain SUBL206) (Table 5). Our results are comparable to the findings of other researchers [53,54], who reported antimicrobial activity of Aspergillus spp. against both Gram-positive and Gram-negative bacteria. The antagonistic activity of the selected endophytes was assessed against the phytopathogen F. oxysporum f. sp. lycopersici (ITCC 1322) (Fig. 3 and Table 6). The maximum growth inhibition was achieved by Alternaria alternata (strain SUBL51), followed by Nigrospora oryzae (strain SUBL33) and Aspergillus terreus (strain SUBL206), respectively. The antagonistic action of the endophytic fungi against the phytopathogen tested may be attributed either to the production of antibiotics or to cell wall degrading enzymes [55]. These potentials might also be responsible for naturally protecting plants from different fungal diseases and enhancing their immunity.

HPLC analyses of Withanolide A, Withanolide B, and Withaferin A content

Enhanced production of withanolides through modulation of their pathway has been reported earlier by our laboratory [12]. There are also very few reports of withanolide production by endophytic fungi [56]. However, endophytes from B. monnieri that biosynthesize both classes of phytomolecules, which are normally obtained from different medicinal plants, are unique. The fungal crude extracts were examined for Withanolide A, Withanolide B, and Withaferin A content using HPLC (Fig. 4b and Table 8). Mixtures of the aforesaid phytochemicals were used as reference standards. The endophyte Alternaria alternata (strain SUBL51) produced Withaferin A and Withanolide B (480 μg mL−1 and 1024 μg mL−1, respectively) in the highest quantities compared with the other two strains (Table 8). However, Alternaria alternata (strain SUBL51) was unable to produce Withanolide A, whose content was highest in Nigrospora oryzae (strain SUBL33). Therefore, we believe that these unique endophytes, with the dual property of biosynthesizing both classes of phytomolecules, are encouraging candidates and prospective targets for future scale-up studies and therapeutic in vivo studies using model systems.

Conclusion

This is the first report of the biosynthesis and production of bacosides and withanolides by endophytes from B. monnieri under in vitro conditions. The isolated native endophytic fungi, which have PGP potential, could be utilized for the plant-free, efficient production of bacosides and withanolides in a short period of time in an eco-friendly and cost-effective manner. If utilized for commercial purposes, these endophytes would minimize the unorganized collection and overexploitation of B. monnieri and W. somnifera, protect their germplasm from rapid depletion, and ultimately boost their survival in natural habitats. Moreover, this study strengthens the case for endophytes mimicking phytomolecules of economic importance.
Further, it should encourage researchers to explore endophytes from different medicinal and aromatic plants for the biosynthesis of phytomolecules in demand in the pharmaceutical and phytochemical industries, and thus also help in minimizing costs and adverse impacts on nature.
An Analysis of Development Inequality and Economic Growth against Poverty in Papua Province in 2010-2018

An important goal of development is to reduce poverty. Indonesia is one of the members of the United Nations that signed the SDGs resolution, whose first goal is no poverty. Papua Province always ranks first for the highest poverty rate in Indonesia. Unstable economic growth and the dominance of the mining sector in the Gross Regional Domestic Product (GRDP) are among the factors affecting poverty levels in Papua Province. The purpose of this study was to analyze the relationship between development inequality and economic growth and poverty in Papua Province in 2010-2018. The variables of this study included development inequality, realization of the economic-function budget, percentage of paved roads, economic growth rate, mining sector growth rate, and poverty. This study utilized panel data analysis. The results showed that the realization of economic functions and the percentage of paved roads had a negative impact on poverty, while development inequality, the growth rate, and the pace of the mining sector had a positive impact on poverty in Papua Province.

INTRODUCTION

The important goals of development are to reduce poverty and improve the welfare of all the people. However, based on the National Socio-Economic Survey of September 2018, Papua ranked first as the province with the poorest population in Indonesia [1]. Partly because of its difficult geographical situation, poverty and development gaps in Papua Province are increasingly visible. The Construction Cost Index of Papua Province in 2017 ranked first at 229.82 [2], showing that carrying out development in Papua Province is the most expensive in Indonesia. This certainly slows the pace of development in the province, which eventually leads to development imbalances and an increase in poverty. Yusuf et al., in their 2014 research on regional development inequality in Indonesia, found that reducing development inequality in Indonesia would reduce the level of poverty in the regions [3]. Besides poverty, the success of an area's economic growth is measured using the rate of economic growth based on the Gross Regional Domestic Product (GRDP) at constant prices. Economic growth also plays an important role in poverty alleviation. Papua Province's economic growth rate tends to fluctuate. This is partly because the main economic activity in Papua Province is still extractive and relies on sectors that exploit natural resources, so growth has not been maximized [4]. The highest economic growth in Papua Province between 2012 and 2017 was achieved in 2016 at 9.14 percent, above the national economic growth rate [4]. The economy in Papua generally still relies on the primary sector, in particular the mining sector, which contributes around 36 percent of economic growth. Research by Ames states that the poverty impact of economic growth is influenced by the composition of sectoral growth [5]. This indicates that the mining and quarrying business fields are the backbone of the economy in Papua and signals that changes in these business fields will have a significant impact on the economy as a whole and will certainly have an impact on poverty. Dependence of an area on the mining sector can make the area's economic growth slow.
Furthermore, if it lasts for a long time, it can lead to a Dutch disease phenomenon, as noted in research conducted by Loayza and Rigolini in Peru, which found that the mining sector has a dual effect. The negative effect is a Gini ratio 0.6 percent higher than in non-mining areas; the positive effects are per capita consumption 9 percent higher and poverty 2.6 percent lower than in non-mining areas [6]. Kakwani states that economic growth with a focus on the poor will improve the level of community welfare and make income distribution more equitable, reinforcing the impact of growth on poverty alleviation [7]. In line with this research, Evans (2000) defines economic growth that can reduce poverty as pro-poor economic growth [8]. However, in reality, the economy of Papua Province grows positively alongside high poverty, showing that welfare has not been felt by every element of society in Papua Province. This phenomenon contradicts the trickle-down effect theory, which explains that economic progress will trickle down, creating jobs and various economic opportunities and thereby an equitable distribution of the results of economic growth. It has long been believed that reducing poverty requires high economic growth, because high economic growth is expected to create a trickle-down effect that can improve welfare. Poverty reduction has always been a central agenda of government and a priority development agenda. Infrastructure differences cause sharp disparities in development between districts, which is one of the triggers of high poverty in Papua Province. According to the World Bank (2017), Indonesia loses more than 1 percentage point of additional GDP growth per year due to a lack of investment in infrastructure [9]. Besides, the mining factor also makes the GRDP of Papua Province unstable. Theoretically, high economic growth should be followed by a reduction in inequality and, in the end, a decrease in poverty, but this does not occur in Papua. Therefore, it is very interesting to examine the relationship between inequality, economic growth, and poverty in Papua Province.

RESEARCH METHODOLOGY

The type of data used in this study is secondary data. All data combine cross-section data and time-series data. The cross-section data cover 29 districts in Papua Province, with time-series data from 2010 to 2018. Thus, the panel data estimation approach is in accordance with the objective of this study, which is to analyze the effect of the independent variables—inter-district development inequality as measured by the contribution of districts to the regional index, realization of the Regional Revenue and Expenditure Budget (APBD) for economic functions, infrastructure as measured by the percentage of paved roads, the rate of economic growth, and the rate of the mining sector—on the dependent variable, namely poverty in the districts of Papua Province.

HYPOTHESIS

The following are the hypotheses in this study:
1. Development inequality has a significant effect on poverty in Papua.
2. The realization of the Regional Revenue and Expenditure Budget (APBD) for economic functions influences poverty in Papua Province.
3. Infrastructure in the form of paved roads significantly influences poverty in Papua.
4. Economic growth has a significant effect on poverty in Papua.
5. The GRDP of the mining sector has a significant effect on poverty in Papua.
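Since the study relies on a fixed-effects panel estimator, a minimal sketch of the within (demeaning) transformation behind such a model may help; the column names and the randomly generated toy panel below are illustrative assumptions, not the study's actual data.

```python
import numpy as np
import pandas as pd

# Toy balanced panel: 29 districts x 9 years (2010-2018).
# Column names are assumptions for illustration only.
rng = np.random.default_rng(0)
years = list(range(2010, 2019))
idx = pd.MultiIndex.from_product([range(29), years],
                                 names=["district", "year"])
x_cols = ["inequality", "econ_budget", "paved_roads", "growth", "mining"]
df = pd.DataFrame(rng.normal(size=(len(idx), 6)),
                  index=idx, columns=x_cols + ["poor"])

# Within transformation: subtracting each district's time mean
# absorbs the district fixed effects; OLS on the demeaned data
# then yields the fixed-effect slope estimates.
demeaned = df - df.groupby(level="district").transform("mean")
beta, *_ = np.linalg.lstsq(demeaned[x_cols].to_numpy(),
                           demeaned["poor"].to_numpy(), rcond=None)
print(dict(zip(x_cols, beta.round(4))))
```

On real data the same code applies once df holds the observed district-year values; the demeaned regression reproduces the fixed-effect slopes that panel software reports, while the constant and district effects are recovered separately from the group means.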
RESULTS AND DISCUSSION

The Chow test results showed that the probability value of the cross-section F test was smaller than α = 0.05. Therefore, H0 was rejected and the selected model was the Fixed Effect Model.

Table 1. Chow Test Results

Subsequently, a retest comparing the Fixed Effect Model and the Random Effect Model was performed using the Hausman test. Table 2 shows that the probability value is smaller than α = 0.05, which indicates that H0 is rejected. The conclusion is that the best model suitable for this study is the Fixed Effect Model, with a confidence level of 95%.

Table 2. Hausman Test Results

From the Fixed Effect Model estimation results above, the following equation is obtained:

poor = β0 + β1·inequality + β2·realization of economic function + β3·percentage of paved roads + β4·economic growth + β5·rate of mining sector + e

with the estimated coefficients:

poor = 29.8169 + 26.0813·x1 − 2.5930·x2 − 0.031·x3 + 0.2271·x4 + 1.5471·x5

The model above shows the effect of each independent variable on the dependent variable. The variables that have a negative influence on poverty are the realization of the economic-function budget and the percentage of paved roads, while the variables that have a positive influence are development inequality, economic growth, and the pace of the mining sector. From the above equation, the following points emerge:

1. Constant. The constant of 29.81690 indicates that if all the independent variables are zero, then Y is 29.81690. Hence, the five independent variables do not fully determine poverty; there remain variables not included in the model.

2. Development inequality. The coefficient of development inequality in the fixed effect model is 26.08130, so the change in Y is positively influenced by the x1 variable. This means that, at a 95% confidence level, if development inequality increases by one unit, poverty will increase by 26.08 percent. This is in line with the study by Baransano in West Papua on the neo-classical hypothesis that, at the beginning of the development process, the disparity between regions tends to increase (which increases poverty), and once inequality reaches its peak, the disparity between regions decreases; in other words, the disparity in development between regions forms an inverted "U" (inverted U-shape) [10].

3. Realization of the regional economic budget. The coefficient of the realization of the economic-sector APBD in the fixed effect model is −2.593057, so the change in Y is negatively affected by the x2 variable. This means that, at a 95% confidence level, if the realization of the regional budget (APBD) in the economic sector increases by 1 trillion, poverty will fall by 2.59 percent. Because economic-function budget expenditure is intended for infrastructure construction, transportation, and highways, it is thought to have a direct effect on people's welfare. Therefore, greater realization of the regional budget (APBD) for economic functions is expected to reduce poverty in Papua Province. This is in line with Stephan Litschig's research in Brazil, which found that a 20 percent increase in government transfers to regions reduces poverty by 4 percent [11].

4. Percentage of paved roads. The coefficient of the percentage of paved roads in the fixed effect model is −0.03180, so the change in Y is negatively affected by the x3 variable.
This is in line with Kwon's research, which found that improving roads in poor condition by 1 percent reduces poverty by 0.09 percent [12]. However, the paved-roads percentage variable is not significant for the percentage of poverty in Papua Province at a 95% confidence level. Therefore, policies to reduce poverty in Papua should not work through this variable, since the percentage of paved roads in Papua is not significant in reducing poverty.

5. Economic growth. The coefficient of economic growth in the fixed effect model is 1.547184, so the change in Y is positively influenced by the x4 variable, meaning that an increase in the rate of economic growth actually increases poverty in Papua Province. This shows that economic growth in Papua Province is not yet pro-poor, in line with the research of Hull (2009), which explains that economic growth will reduce poverty if it is labor intensive, but if it is capital intensive it will increase unemployment and thereby increase the number of poor people [13].

6. The pace of the mining sector. The coefficient in the fixed effect model is 0.227131, so the change in Y is positively influenced by the x5 variable. At a 95% confidence level, the rate of the mining sector increases poverty in Papua Province. This is because people who live in areas whose economies depend on mining and mining products tend to be less prosperous, in line with the research of Sachs and Warner, which found that a region's dependence on the mining sector can slow down the region's economy [14].

CONCLUSION

Based on the discussion presented in the previous sections, the following conclusions can be drawn:
1. The realization of economic functions and the percentage of paved roads have a negative impact on poverty in Papua: increasing the realization of economic functions and the percentage of paved roads reduces poverty. However, the percentage of paved roads is considered less significant and ineffective in reducing poverty, so other policies should be used to reduce poverty in Papua.
2. Development inequality, the growth rate, and the pace of the mining sector have a positive impact on poverty. This shows that the economic growth achieved is not yet inclusive, since the community has not yet been fully able to access the existing economic opportunities and welfare has not been evenly distributed across all segments of society.
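As a closing numeric illustration, the reported equation can be evaluated directly. Note that the printed equation attaches 0.2271 to x4 and 1.5471 to x5, while the discussion in points 5 and 6 attaches 1.547184 to economic growth (x4) and 0.227131 to the mining sector (x5); the sketch below follows the discussion, so that mapping is an assumption.

```python
def predict_poverty(inequality, econ_budget, paved_roads, growth, mining):
    """Evaluate the reported fixed-effect equation (the coefficient-to-
    variable mapping is assumed from discussion points 2-6)."""
    return (29.8169 + 26.0813 * inequality - 2.5930 * econ_budget
            - 0.0318 * paved_roads + 1.5471 * growth + 0.2271 * mining)

# Marginal-effect check: a 1-trillion rise in the economic-function
# budget lowers predicted poverty by about 2.59 points, as in point 3.
base = predict_poverty(0.05, 1.0, 40.0, 7.0, 36.0)
more_budget = predict_poverty(0.05, 2.0, 40.0, 7.0, 36.0)
print(round(base - more_budget, 2))  # -> 2.59
```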
The Use of Sugarcane Bagasse to Remove Organic Dyes from Wastewater

In the present study, the potential of sugarcane bagasse (SCB) was evaluated through methylene blue (MB) retention. The selected low-cost adsorbent was characterized by scanning electron microscopy (SEM) coupled with energy-dispersive X-ray spectroscopy (EDX), Fourier transform infrared spectroscopy (FTIR), the BET method, and determination of the point of zero charge (pHzpc). Batch kinetic and isothermal studies were performed to examine the effects of contact time, initial dye concentration, adsorbent dose, pH, and temperature. The results show that MB adsorption on sugarcane bagasse is very fast; equilibrium is reached after only 20 minutes. The pseudo-second-order kinetic model and the Langmuir isotherm model perfectly describe the adsorption of MB, with a monolayer adsorption capacity equal to 49.261 mg·g−1. The thermodynamic parameters, namely the free energy (ΔG°), enthalpy (ΔH°), and entropy (ΔS°), were also determined as −4.35 kJ·mol−1, −31.062 kJ·mol−1, and −0.084 kJ·mol−1·K−1, respectively. The thermodynamic parameters of the methylene blue–sugarcane bagasse system indicate that the adsorption process is exothermic and spontaneous.

Introduction

Industrial wastewater pollution has become a common problem in most countries [1] and one of the biggest environmental problems of recent decades. Industrial discharges include dyes used in different areas such as printing, food, cosmetics, and clinical products, but particularly in the textile industry [2]. The discharge of these colored effluents into nature affects not only humans and water but the whole environment. More than 14 million chemical products can be found in the environment; this number is gradually increasing, which has a negative impact on the biosphere and threatens the balance of the natural ecosystem. Indeed, the toxic chemical products dumped into water are genotoxic and mutagenic and might cause hereditary diseases that can be transmitted to future generations [3]. Moreover, they increase the chemical oxygen demand (COD) and biochemical oxygen demand (BOD), which inhibits and decreases the rate of photosynthesis and plant growth [4]. In this context, a wide variety of physical, chemical, and biological techniques have been developed and tested for the treatment of these dye-loaded effluents. Among them, adsorption remains one of the easiest technologies to implement, and it is widely used for water treatment [5]. Nowadays, adsorption has been accepted as a suitable removal technology, particularly for developing regions, because of its simple operation and potential for regeneration. The adsorption process can be described as "the tendency of chemical species existing in a phase to adhere onto a solid" [6]. Technically, in adsorption science, the solid surface providing the adsorption sites is called the adsorbent, and the substances adsorbed at the solid surface are known as the adsorbate [6]. The removal of dyes by adsorption on selective low-cost adsorbents such as sugarcane bagasse (Figure 1) has been studied, with such adsorbents prepared and tested for dye removal. From the literature, activated carbon is considered a good adsorbent due to its high adsorption capacity for organic materials. However, activated carbon is not only expensive but also difficult to regenerate.
In this regard, there has been growing interest among researchers in the last few years in adsorbents prepared from natural materials with an organic phase, such as cellulose, hemicellulose, and lignin [7], which are marked by their efficiency, low cost, and abundance in large quantities in developing countries, for instance in the Lgharb region in Morocco. Recently, quite a number of research articles have been published on the utilization of low-cost adsorbents derived from biomass for wastewater treatment, such as banana waste, moringa and lemon seeds, straws, cotton and palm fibers, and rice (Oryza sativa) and coffee (Coffea arabica) husk wastes [8][9][10]. Therefore, the present study aims to investigate MB dye removal efficiency using the cost-effective, sustainable solid biomass adsorbent sugarcane bagasse (SCB) in batch and closed-circuit modes under various physicochemical process conditions, which may be applicable to actual large-scale industrial treatment operations.

Adsorbate

Methylene blue (MB) is an organic dye chosen for this study for its very high degree of purity (99%). It was used without any prior purification. Its characteristics are grouped in Table 1.

Adsorbent

Sugarcane bagasse is a natural waste abundantly available in the El Ghareb region, Morocco. The sugarcane bagasse was collected from various local "sugarcane juice" vendors and then treated as follows: first, it was washed several times to remove impurities, then dried in an oven at a temperature of 105 °C for 24 h, and finally ground and sieved to a particle size equal to 250 μm.

Characterization of SCB

An adsorbent's physical and chemical characteristics determine its efficiency in removing pollutants [11]. These characteristics are discussed in the following parts.

Textural Characterization

The specific surface area, pore diameter, and pore volume of SCB were obtained from the nitrogen adsorption-desorption isotherm curve (Micromeritics ASAP 2010) using the Brunauer-Emmett-Teller (BET) method. Morphological analysis was made by scanning electron microscopy (SEM, VEGA3 TESCAN) coupled with EDX, and quantitative analysis of the elemental composition of SCB was performed using energy-dispersive X-ray spectroscopy (EDX).

Chemical Characterization

The chemical functions of the molecules present in the SCB were analyzed by Fourier transform infrared spectroscopy (FTIR; Shimadzu, JASCO 4100). IR spectra were recorded over the wavenumber range of 400 to 4000 cm−1. The point of zero charge (pHzpc) was determined to clarify the net charge carried by the surface of the SCB.

Adsorption Procedure

The adsorption tests were conducted in a batch reactor at ambient temperature; the colored synthetic MB solution, in the presence of the SCB adsorbent, was stirred for one hour, and homogenization of the mixtures was carried out by a shaker-type incubator agitator (Jisico, model J-NSIL-R) at a stirring speed equal to 127 rpm. The adsorbate-adsorbent separation was performed using a 0.45 μm Whatman-type filtration system, and the supernatant absorbance was measured by a UV-visible spectrophotometer (Shimadzu 1601) at the wavelength corresponding to the maximum absorbance of MB (λmax = 665 nm). Then, the residual dye concentration is determined from a calibration curve based on the Beer-Lambert law.
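To illustrate this step, a linear calibration can be fitted to standards and inverted to obtain the residual concentration; the standard concentrations and absorbances below are invented placeholders, not the study's calibration data.

```python
import numpy as np

# Hypothetical MB calibration standards at 665 nm (placeholder values).
conc_std = np.array([2.0, 5.0, 10.0, 15.0, 25.0])    # mg/L
abs_std = np.array([0.38, 0.94, 1.90, 2.82, 4.70])   # absorbance

# Beer-Lambert gives a straight line A = a*C + b; fit it by least squares.
a, b = np.polyfit(conc_std, abs_std, 1)

def conc_from_abs(absorbance):
    """Invert the calibration line to recover C (mg/L)."""
    return (absorbance - b) / a

# One sample: residual concentration, removal %, and the capacity qt
# from the formula introduced just below (V = 0.05 L, m = 0.2 g assumed).
C0, A_t = 25.0, 0.20
Ct = conc_from_abs(A_t)
print(f"Ct = {Ct:.2f} mg/L")
print(f"removal = {100 * (C0 - Ct) / C0:.1f} %")
print(f"qt = {(C0 - Ct) * 0.05 / 0.2:.2f} mg/g")
```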
The adsorption capacity of MB on SCB is calculated by the following formula:

qt = (C0 − Ct)·V/m,   (1)

where qt (mg·g−1) is the amount adsorbed at time t (min), C0 and Ct (mg·L−1) are the initial dye concentration and the concentration at time t, V (L) is the volume of the solution, and m (g) is the amount of adsorbent in solution.

Nitrogen Adsorption-Desorption Isotherm

The nitrogen adsorption-desorption isotherm was obtained at a temperature of 77.35 K after degassing at 80 °C. The BET surface area SBET was determined using the Brunauer-Emmett-Teller (BET) equation, the pore volume at saturation VP is the volume of nitrogen corresponding to the highest relative pressure P/P0 = 0.99, and the pore diameter DP is calculated by the relation 4VP/SBET [12]. The nitrogen adsorption-desorption isotherm obtained on sugarcane bagasse powder is shown in Figure 2. According to the IUPAC classification of isotherms, this curve has a sigmoidal shape and is classified as type II, frequently found in fruits with a high sugar content [13], which means that the medium is nonporous or macroporous. The adsorption isotherm obtained shows that the SCB has a nonporous structure; this is confirmed by the textural measurements grouped in Table 2.

Scanning Electron Microscopy (SEM)

The SEM images (Figure 3) illustrate the morphological structure of the SCB. The observations show that sugarcane bagasse has a fibrous structure, that each elementary fiber has a compact structure aligned in the direction of the fiber axis, and that sugarcane bagasse has a smooth and continuous surface [14,15].

Energy-Dispersive X-Ray Spectroscopy (EDX)

The surface elemental analysis of the SCB (Figure 4) indicates the presence of different chemical elements; the results are grouped in Table 3. They reveal the predominant presence of carbon and oxygen, with percentages of 51.10% and 48.85%, respectively, compared to the other chemical elements (S, ...), which confirms the organic nature of our material [16]; the high oxygen content suggests that the surface of the adsorbent is rather acidic.

Fourier Transform Infrared Spectroscopy (FTIR)

The spectrum of sugarcane bagasse recorded by infrared spectroscopy between 400 and 4000 cm−1 is shown in Figure 5. The broad band at 3412.92 cm−1 is related to the stretching vibration of the O-H bond [17], mainly due to the presence of cellulose molecules [18]. The peak observed at 2922.93 cm−1 is attributed to the C-H stretching and bending vibration of the CH3 methyl group [19]. The peak at 1736 cm−1 is due to the stretching of the carbonyl group of aldehydes and ketones [20]. The band at 1605 cm−1 is due to aromatic skeletal stretching vibrations existing in the lignin structure [21]. The band at 1053.31 cm−1 is attributed to stretching vibrations in ester O=C-O-C groups due to the existence of hemicellulose [22,23], and the band at 608.93 cm−1 corresponds to the bending modes of aromatic compounds [24].

Determination of the Zero Point Charge (ZPC)

Like other physicochemical variables such as pH and temperature, the ZPC is considered a significant factor that helps determine the biosorption capacity of the biosorbent and the nature of its binding sites [25,26]. In this study, the method consists of preparing a series of 20 mL portions of a 5×10−2 M NaCl solution; after adjusting the initial pH (pHi) of each to values between 2 and 12 by the addition of HCl (0.1 M) or NaOH (0.1 M), a 0.2 g mass of SCB is added to the different solutions.
All of this is left to stir at ambient temperature for 48 h until the final pH (pHf) has stabilized. The intersection of the curve of ΔpH (pHf − pHi) as a function of pHi with the x-axis determines the ZPC (Figure 6). The ZPC of the SCB is equal to 4.69, which means that the surface of the SCB is positively charged at pH below 4.69 and negatively charged at pH above 4.69. The low ZPC reflects the rather acidic character of the SCB surface [26].

Effect of Adsorbent Dose

This study was conducted by varying the mass of SCB between 0.05 and 0.5 g, keeping the other parameters constant: ambient temperature, a solution pH of 6.4, an initial dye concentration of 25 mg·L−1, and a stirring speed of 127 rpm. Figure 7 shows the effect of the adsorbent amount on dye removal. According to the results obtained, the percentage of dye removal increases with the adsorbent amount, from 80.27% for 0.05 g up to 98.49% for 0.5 g. This phenomenon is due to the increase in specific surface area and the greater availability of adsorption sites [1]. For the continuation of the studies, we chose an amount equal to 0.2 g, with an elimination percentage equal to 95.86%. The chosen amount reduces the consumption of SCB by more than half compared to 0.5 g, whose removal percentage of 98.49% is close to that of 0.2 g.

Effect of Contact Time

The experiments were conducted at different contact times, and the variation in the amount of MB adsorbed by the SCB as a function of contact time (5-120 min) was observed for three initial dye concentrations of 5, 15, and 25 mg·L−1, as Figure 8 illustrates. The results obtained show that the equilibrium time is independent of the initial dye concentration and that the amount of dye fixed on the adsorbent increases with the contact time. Figure 8 reveals the existence of two phases during the adsorption of MB by the SCB: the first is very fast, taking 5 minutes, which can be explained by the existence of a high affinity between the cations of the dye and the adsorbent [17]; the second is a slow phase in which equilibrium is gradually reached after 20 minutes. This may be due to the saturation of the active sites of the support [27]. The contact time was set at 60 min for further studies.

Effect of the Initial Dye Concentration

The effect of the initial concentration of MB on its retention by the SCB was studied at different initial concentrations, ranging from 5 to 100 mg·L−1, with a constant SCB mass of 0.2 g at ambient temperature for 60 minutes (Figure 9). Figure 9 shows that the percentage of MB removal increases at low concentrations, up to a maximum of 97% for a concentration of 25 mg·L−1. This can be explained by the availability of active sites, which greatly exceeds the amount of dye introduced. However, at high concentrations, the percentage decreases from 97% to 56% due to a lack of available active sites.

Effect of pH

The solution pH plays a significant role in the sorption process, as it can affect the surface charge of the adsorbent and the molecular state of the dye. In other words, it affects both the solution chemistry of the dye and the functional groups of the adsorbent. Accordingly, the adsorption capacity for the dye depends on the pH of the solution [25,28].
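Because the pH discussion above hinges on where the solution pH sits relative to pHzpc, here is a minimal sketch of how the ZPC is read off the ΔpH drift curve by locating its zero crossing; the ΔpH values are invented placeholders, not the measured curve of Figure 6.

```python
import numpy as np

# Hypothetical drift data: initial pH and dpH = pH_f - pH_i (placeholders).
ph_i = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
dph = np.array([1.9, 0.4, -0.8, -2.1, -3.4, -4.9])

# Locate the sign change and interpolate linearly to the zero crossing;
# this assumes a single, monotone crossing of the x-axis.
i = int(np.where(np.diff(np.sign(dph)) != 0)[0][0])
ph_zpc = ph_i[i] - dph[i] * (ph_i[i + 1] - ph_i[i]) / (dph[i + 1] - dph[i])

def surface_charge(ph, zpc):
    """Sign of the net surface charge predicted from the ZPC."""
    return "positive" if ph < zpc else "negative"

print(f"pHzpc ~ {ph_zpc:.2f}")                          # ~4.7 here
print(f"at pH 6.4 the surface is {surface_charge(6.4, ph_zpc)}")
```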
Effect of pH. The solution pH plays a significant role in the sorption process, as it can affect both the surface charge of the adsorbent and the molecular state of the dye; in other words, it governs both the solution chemistry of the dye and the functional groups of the adsorbent. The adsorption capacity of the dye therefore depends on the pH of the solution [25,28]. The variation of the biosorption capacity of the SCB for the removal of MB dye was studied in the pH range of 2 to 12, adjusted by adding HCl (hydrochloric acid, purity 37%, 0.1 M) or NaOH (sodium hydroxide, purity 98%, 0.1 M), while keeping the other variables constant (0.2 g/50 ml biosorbent dose, 25°C, 127 rpm, and 60 min contact time). Figure 10 shows the effect of the solution pH on dye removal. The results indicate that basic pH is favorable for the removal of MB: the optimum pH for maximum dye removal by the SCB was found to be 10 (dye removal = 99.30%).

Effect of Ionic Strength. Wastewater contains various species such as salts and organic and metal ions. The presence of these ions produces a high ionic strength, which significantly affects the performance of the adsorption process [29]. The influence of this parameter on the adsorption phenomenon was studied by adding variable amounts of NaCl (sodium chloride, purity 99.8%), at concentrations ranging from 0 to 0.1 M, to dye solutions with an initial concentration of 25 mg·L⁻¹ containing a 0.2 g mass of the adsorbent. The influence of the initial NaCl concentration on the removal rate of MB by the SCB is shown in Figure 11. According to these results, the dye removal rate decreases sharply, from 93.63% to 77.71%, as the NaCl concentration increases; above a NaCl concentration of 0.04 M the removal rate continues to decrease, but very slowly. This phenomenon can be attributed to Na+ ions accumulating in greater numbers near the surface and thereby screening the fixation sites [30].

Effect of Temperature. The influence of temperature on the adsorption phenomenon was studied by adding 0.2 g/50 ml of SCB to a 25 mg·L⁻¹ MB solution at temperatures varying between 15 and 75°C (Figure 12). Figure 12 shows that the MB removal rate on the SCB decreases from 98.75% to 89.22% as the temperature increases from 15 to 75°C. This decrease can be explained by the destruction of adsorption sites [31]; the increase in temperature adversely affects the adsorption mechanism, so the reaction is exothermic in nature.

Kinetics Study. The MB adsorption kinetics data for the SCB are modeled by three models: pseudo-first-order, pseudo-second-order, and intraparticle diffusion. The pseudo-first-order kinetic model, evaluated by the Lagergren relation [32], is described by the following equation:

dq_t/dt = K_1 (q_e − q_t). (2)

After integration with the boundary conditions q_t = 0 at t = 0 and q_t = q_t at t = t, the equation becomes

ln (q_e − q_t) = ln q_e − K_1 t. (3)

The quantity adsorbed at equilibrium q_e and the rate constant K_1 are obtained from the intercept and the slope of the plot of ln (q_e − q_t) versus time t, respectively. The application of Blanchard's model allows us to define the pseudo-second-order of the reaction in a sorption process [33]:

dq_t/dt = K_2 (q_e − q_t)². (4)

Integration of this equation with the boundary conditions q_t = 0 at t = 0 and q_t = q_t at t = t gives

t/q_t = 1/(K_2 q_e²) + t/q_e. (5)

The quantity adsorbed at equilibrium q_e and the rate constant K_2 are determined, respectively, from the slope and the intercept at the origin of the plot of t/q_t as a function of time t. The intraparticle diffusion model (internal transport) is also used to identify the diffusion mechanism. It is given by the following equation:

q_t = K_id t^(1/2) + C. (6)

By plotting q_t as a function of t^(1/2), the constants K_id and C are deduced from the slope and intercept, respectively.
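As an illustration of how the linearized kinetic fits above are carried out in practice, the sketch below (with hypothetical q_t data) fits the pseudo-second-order form t/q_t = 1/(K_2 q_e²) + t/q_e by least squares and recovers q_e and K_2; the pseudo-first-order and intraparticle models can be fitted the same way from their respective linear forms:

```python
# Minimal sketch: linearized pseudo-second-order fit. The (t, q_t)
# pairs below are illustrative, not the paper's measurements.

import numpy as np

t = np.array([5, 10, 20, 30, 60, 120], float)          # min
qt = np.array([4.8, 5.4, 5.8, 5.9, 6.0, 6.05], float)  # mg/g

slope, intercept = np.polyfit(t, t / qt, 1)
qe = 1.0 / slope                   # equilibrium capacity, mg/g
k2 = 1.0 / (intercept * qe**2)     # rate constant, g mg^-1 min^-1
r2 = np.corrcoef(t, t / qt)[0, 1] ** 2

print(f"qe = {qe:.2f} mg/g, K2 = {k2:.3f} g mg^-1 min^-1, R^2 = {r2:.4f}")
```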
The best model for describing the adsorption kinetics is the one with the highest linear regression coefficient R². Figure 13 presents the plot of t/q_t as a function of time t at different initial MB concentrations for the pseudo-second-order model of MB adsorption by the SCB and shows excellent linearity. Table 4 summarizes the results of the three kinetic models of MB adsorption by the SCB. According to this table, the adsorption kinetics is described very well by the pseudo-second-order and intraparticle diffusion models, with correlation coefficients of 1 for the pseudo-second-order kinetic model and 0.96 for the intraparticle diffusion model, and the calculated value of the adsorbed quantity at equilibrium, q_e,cal, is very close to the experimental value q_e,exp. The pseudo-first-order model, on the other hand, gives a poor correlation. The rate constant K_2 shows that the retention of MB by the SCB is quite fast [27].

Adsorption Isotherms. Adsorption isotherms are important because they describe how the adsorbate interacts with the adsorbent, illustrate the type of accumulation of the adsorbate on the adsorbent, and allow the adsorption equilibrium to be analyzed [34]. The experimental data for the MB adsorption isotherms on the SCB were modeled using two models, the Langmuir model and the Freundlich model, presented below.

Langmuir Model. Langmuir's model assumes that adsorption arises from monolayer coverage of the adsorbate on a homogeneous surface; that is, once a dye molecule occupies a site, no further adsorption can take place at that site [35]. It is expressed as

q_e = q_m K_L C_e/(1 + K_L C_e). (7)

The linear transform of this model has the following equation [36]:

C_e/q_e = 1/(q_m K_L) + C_e/q_m, (8)

with q_e and q_m (mg·g⁻¹) being the amount adsorbed at equilibrium and the maximum amount adsorbed at saturation of the monolayer, C_e (mg·L⁻¹) the equilibrium concentration, and K_L (L·mg⁻¹) the Langmuir constant. By plotting C_e/q_e against C_e, the Langmuir model is verified if a straight line of slope 1/q_m and intercept at the origin 1/(q_m K_L) is obtained. Whether the adsorption is favorable or not can be checked with the equilibrium parameter R_L, given as

R_L = 1/(1 + K_L C_0). (9)

The adsorption is irreversible (R_L = 0), favorable (0 < R_L < 1), linear (R_L = 1), or unfavorable (R_L > 1) [37].

Freundlich Model. Freundlich's model is based on an empirical equation used to model adsorption isotherms on energetically heterogeneous surfaces [38]. It is expressed by the following relationship [39]:

q_e = K_F C_e^(1/n), (10)

where q_e (mg·g⁻¹) is the equilibrium amount adsorbed, K_F and n are Freundlich's constants, and C_e (mg·L⁻¹) is the equilibrium concentration of the solute. The logarithmic form of this relationship provides its linear transformation [36]:

ln q_e = (1/n) ln C_e + ln K_F. (11)

The values of K_F and n are determined experimentally by plotting ln q_e against ln C_e. The isotherms of MB adsorption by the SCB at 25°C are presented according to the Langmuir model (Figure 14) and the Freundlich model (Figure 15). Table 5 presents the values of the adsorption equilibrium parameters according to the Langmuir and Freundlich models. According to these results, the correlation coefficient of the Langmuir model, 0.98, is close to that of the Freundlich model, 0.97. This means that the process of MB adsorption by the SCB is well described by both the Langmuir and Freundlich models, and the separation factor R_L < 1 indicates that the adsorption is favorable.
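The Langmuir fitting procedure above can be sketched as follows (with hypothetical equilibrium data and illustrative C_0 values), including the separation factor R_L used to judge favorability; the Freundlich fit proceeds identically from the log-log form:

```python
# Minimal sketch: linear Langmuir fit Ce/qe = 1/(qm*KL) + Ce/qm,
# plus the separation factor RL = 1/(1 + KL*C0). Data are illustrative.

import numpy as np

ce = np.array([0.5, 1.2, 3.0, 8.0, 20.0, 44.0])   # mg/L, hypothetical
qe = np.array([1.1, 2.3, 4.6, 7.4, 9.2, 10.1])    # mg/g, hypothetical

slope, intercept = np.polyfit(ce, ce / qe, 1)
qm = 1.0 / slope            # monolayer capacity, mg/g
kl = slope / intercept      # Langmuir constant, L/mg (= 1/(intercept*qm))

print(f"qm = {qm:.2f} mg/g, KL = {kl:.3f} L/mg")
for c0 in (5, 25, 100):     # initial concentrations, mg/L
    rl = 1.0 / (1.0 + kl * c0)
    print(f"C0 = {c0:>3} mg/L -> RL = {rl:.3f} (favorable if 0 < RL < 1)")
```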
4.9. Thermodynamic Study. Thermodynamic parameters such as the enthalpy ΔH°, the entropy ΔS°, and the free enthalpy ΔG° were calculated at temperatures of 15, 35, 55, and 75°C to describe the reaction of MB adsorption by the SCB from the following equations [40]:

ΔG° = −RT ln K_d, (12)

ln K_d = ΔS°/R − ΔH°/RT, (13)

where K_d = q_e/C_e is the distribution coefficient, q_e (mg·g⁻¹) is the amount adsorbed at equilibrium, R is the ideal gas constant, T (K) is the temperature of the solution, and C_e (mg·L⁻¹) is the equilibrium concentration. The thermodynamic parameters play a significant role, as they provide information about the spontaneity and the endo- or exothermicity of the adsorption process and about the increase or decrease of randomness at the solid-liquid interface [41]. The plot of ln K_d as a function of 1/T (Figure 16) gives a straight line of slope −ΔH°/R and intercept at the origin ΔS°/R; a short numerical sketch of this estimation is given after this article. The results obtained are grouped in Table 6. These results show that the adsorption of MB on the SCB is exothermic in nature and can be qualified as physical adsorption, since the value of ΔH° is negative and greater than −40 kJ·mol⁻¹; the negative value of ΔS° indicates that the dye molecules are more organized at the solid/liquid interface.

Conclusion

The adsorption process of MB on the SCB was the aim of this study. The results show that the removal rate of MB increases from 80.27% to 98.49% as the mass of adsorbent increases from 0.05 g to 0.5 g, owing to the increase in specific surface area. The maximum removal of dye was observed at pH 10, and the biosorption process reached equilibrium at 60 min. The contact-time study indicated that the equilibrium time is independent of the initial MB concentration: adsorption is very fast during the first 5 min, and equilibrium is reached at 20 min. On the other hand, the removal rate decreases with increasing temperature; the reaction is therefore exothermic. The kinetic and isotherm studies showed that the adsorption of MB on the SCB is very well described by pseudo-second-order kinetics and by both the Langmuir and Freundlich models. Finally, the thermodynamic studies showed that the adsorption of MB by the SCB is exothermic, feasible, and spontaneous. In conclusion, SCB is found to be a good biosorbent for MB removal, which makes it an attractive alternative for the treatment of wastewater and dye effluents, especially today, when the environment needs to be protected using environment-friendly processes such as this one.

Data Availability

The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
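As noted in the thermodynamic study above, here is a minimal numerical sketch of the van 't Hoff estimation; the K_d values are hypothetical placeholders chosen only so the script runs end to end:

```python
# Minimal sketch: dH°, dS°, and dG° from the van 't Hoff plot
# ln Kd = dS°/R - dH°/(R*T). Kd values are illustrative only.

import numpy as np

R = 8.314  # J mol^-1 K^-1
temps_c = np.array([15.0, 35.0, 55.0, 75.0])
kd = np.array([79.0, 30.0, 14.0, 8.3])           # qe/Ce, hypothetical

temps_k = temps_c + 273.15
slope, intercept = np.polyfit(1.0 / temps_k, np.log(kd), 1)
dh = -slope * R / 1000.0    # kJ/mol; negative slope*R => exothermic
ds = intercept * R          # J mol^-1 K^-1

for t, k in zip(temps_k, kd):
    dg = -R * t * np.log(k) / 1000.0   # kJ/mol, from dG° = -RT ln Kd
    print(f"T = {t:.2f} K: dG° = {dg:.2f} kJ/mol")
print(f"dH° = {dh:.2f} kJ/mol, dS° = {ds:.2f} J/(mol*K)")
```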
The effects of economic and political integration on power plants' carbon emissions in the post-Soviet transition nations

The combustion of fossil fuels for electricity generation, which accounts for a significant share of the world's CO2 emissions, varies by macro-regional context. Here we use multilevel regression modeling techniques to analyze CO2 emissions levels in the year 2009 for 1360 fossil-fuel power plants in the 25 post-Soviet transition nations in Central and Eastern Europe and Eurasia. We find that various facility-level factors are positively associated with plant-level emissions, including plant size, age, heat rate, capacity utilization rate, and coal as the primary fuel source. Results further indicate that plant-level emissions are lower, on average, in the transition nations that joined the European Union (EU), whose market reforms and environmental directives are relevant for emissions reductions. These negative associations between plant-level emissions and EU accession are larger for the nations that joined the EU in 2004 relative to those that joined in 2007. The findings also suggest that export-oriented development is positively associated with plant-level CO2 emissions in the transition nations. Our results highlight the importance in macro-regional assessments of the conjoint effects of political and economic integration for facility-level emissions.

Introduction

Recent estimates suggest that the combustion of fossil fuels for the production of electricity accounts for nearly a fourth of global anthropogenic CO2 emissions [1]. Emissions from the electricity generation sector could increase substantially in coming decades [2], given growing energy demands and concomitant increases in the number of power plants throughout the world, especially in nations where fossil-fuel burning power plants account for a large proportion of the electricity generation sector [3]. More broadly, analyses suggest that for the majority of the world's nations, growth in the use of renewable forms of energy has so far been unsuccessful in adequately 'displacing' fossil-fuel energy consumption [4]. With these factors in mind, an emerging body of research on fossil-fuel power plants seeks to identify how facility-level characteristics and broader contextual factors, such as national development, political circumstances, and openness to the global economy, are associated with CO2 emissions at the plant level. Many of these studies involve the analysis of large datasets of fossil-fuel burning power plants in nations throughout the world [5][6][7], while some instead focus on plants within just one nation, such as the United States [8][9][10]. Analyses of power plants in nations throughout the world provide an important lens for assessing relatively broad-based socioenvironmental relationships [6], while studies within one nation allow for more nuanced assessments of sub-national conditions that might influence plant-level emissions [8,11,12]. As important, we suggest, is a type of middle-level approach that has been generally absent so far [13,14]: studies of CO2 emissions from plants within a macro-region consisting of multiple nations that share similar sociohistorical characteristics. Country-level analyses of various human drivers of greenhouse gas emissions and other environmental outcomes have highlighted notable macro-regional differences [15][16][17][18].
Further, sustainability scientists have argued that such regionally oriented research could aid in the creation of more effective climate change mitigation approaches, and therefore should be more prevalent in the synthesis reports and other activities of scientific bodies and policy organizations [19]. One such region that has been the focus of socioenvironmental research in recent years comprises the 25 'transition' nations located in Central and Eastern Europe and Eurasia. Subsequent to the collapse of the Soviet Union, in the early 1990s these countries began transitioning away from centrally planned economies with few connections to the global economy [20][21][22]. The recent studies on socioenvironmental relationships in these transition nations largely focus on how economic and demographic conditions influence country-level outcomes, including national CO2 emissions. Consistent with cross-national studies on large samples of nations throughout the world [23,24], population size and economic development are both found to increase national-level emissions in the transition nations [25,27], while world-economic integration, such as increased exports, has been found to be associated with higher levels of CO2 emissions in these nations as well [28,29]. Public opinion research suggests that, on average, individuals in the transition nations express higher levels of environmental concern than individuals in other regions of the world, and such concerns could partly result from the unintended environmental problems associated with energy-intensive, export-oriented development [30]. With few exceptions (i.e. Azerbaijan, Moldova, Tajikistan, Uzbekistan), these nations have all increased their per person electricity consumption in recent years as they pursue various pathways of development to enhance their collective human well-being [31][32][33]. Prior to the post-communist transition, environmental policies in the region were relatively weak and pollution levels remained high [27,34]. However, with EU accession, new Member States worked to harmonize national laws with existing EU environmental directives, and public officials, nongovernmental organizations, and energy policy consultants all played important roles in these efforts [35][36][37]. For example, a primary directive addressing CO2 emissions was the establishment of the EU Emissions Trading System in 2005, followed by the 2009 package of EU climate and energy agreements, which set targets for greenhouse gas emissions. In addition to the slate of environmental directives adopted by the new Member States, accession into the EU further opened up the new Member States' economies to global trade [33,38], and higher levels of exports have been found to be associated with growth in national emissions in the transition nations due to the increased manufacture and production of goods destined for wealthier nations [28]. Nonetheless, given the scope of the constellation of EU environmental directives and policies related to climate change mitigation, energy efficiency, and other environmental sustainability concerns, we hypothesize that EU membership for transition nations should have an overall beneficial effect on lowering CO2 emissions from fossil-fuel power plants within Member States. Thus, we expect plant-level emissions to be, ceteris paribus, lower in the transition nations that joined the EU than in those that did not.
However, environmental policies are sometimes decoupled from environmental improvements on the ground [39], and in the case of EU accession, compliance would prove to be costly for the new Member States [35]. As a consequence, compliance with EU directives and significant environmental improvements were likely not immediate. Thus, we also expect plant-level emissions to be, on average, lower in the transition nations that joined the EU in 2004 relative to those that joined in 2007. In this study, we use multilevel regression modeling techniques to analyze how facility-level characteristics and national-level factors are associated with CO2 emissions levels for 1360 fossil-fuel power plants in the 25 transition nations in the year 2009, the most recent year in which these plant-level data are currently available. The facility-level characteristics include whether the primary fuel source for the plant is coal relative to other fossil fuels, plant size, plant age, capacity utilization rate, and heat rate. Consistent with past research on power plants [6,8], we expect that plant-level CO2 emissions will be positively associated with each of these characteristics. Many estimated models also include plant-level CO2 emissions in the year 2004 to account for the extent to which prior emissions levels influence more recent CO2 emissions levels [40]. In the analysis we include country-level measures that allow for assessing whether EU membership, and duration of membership, might be associated with plant-level CO2 emissions. As noted above, we anticipate that, on average, emissions will be lower for fossil-fuel power plants in EU member transition nations than for plants in non-member transition nations, and that emissions in 2009 were likely lower, on average, for plants within transition nations that joined the EU in 2004 relative to plants in transition nations that joined the EU in 2007. We also assess how national-level socioeconomic factors, which were found in prior research to increase country-level emissions for the transition nations, might be associated with plant-level CO2 emissions [26,27]. These national-level factors include level of economic development (measured as GDP per capita), total population size, and world-economic integration (measured as exports as a percent of total GDP, also referred to as export-oriented development). Prior research on fossil-fuel plants in nations throughout the world has also found some evidence indicating that population size and level of economic development influence plant-level emissions, with population size exhibiting a positive effect and level of economic development exhibiting a negative effect [6,7].

The sample

The analyzed sample consists of 1360 fossil-fuel power plants located within the 25 transition nations in Central and Eastern Europe and Eurasia (Table 1).

Dependent variable

Our dependent variable is total pounds of CO2 emitted by a plant in the year 2009, the most recent year for which these data are currently available. Because the CO2 data are highly positively skewed, and consistent with past research on plant-level emissions [6,8], we convert them into logarithmic form (base 10) for our statistical analysis. We obtained these data from the Center for Global Development's 'Carbon Monitoring for Action' (CARMA) database [41,42], consistent with prior studies of plant-level emissions [5][6][7][43][44][45].
CARMA assigns to each plant a unique Platts identification code, which enables researchers to obtain additional information gathered by Platts and other sources on the characteristics of each plant. Plant-level CO2 emissions data are disclosed and publicly available for all fossil-fuel power plants in the United States as well as for the majority of plants in Canada, India, South Africa, and the European Union [42]. These publicly available data are in the CARMA database. For power plants where no public data are available, CARMA provides estimates of their emissions that are derived from statistical models fitted to the publicly available U.S. plant-level data. The estimated plant-level CO2 emissions values are computed as: CO2 emissions = electricity generation × heat rate × CO2 emission factor. For a more in-depth discussion of CARMA's estimation methodology, we refer readers to Ummel [42]. A comparison of the estimated values and the publicly available data for a sample of almost 3500 power plants, including approximately 800 plants with publicly available data from outside the United States [42], indicates that the CARMA estimates quite accurately capture broad differences among plants of various types and sizes (R² statistics over .90). And for any given plant in the CARMA database, it is estimated that the reported value is within 20 percent of the actual value in 75% of cases for annual CO2 emissions [42]. As a validity check [7], we summed CARMA's plant-level emissions data for the fossil-fuel power plants in each of the 25 transition nations in Central and Eastern Europe and Eurasia, and then compared those values for each of the 25 nations with the International Energy Agency's (IEA) 2009 annual national measures of carbon dioxide emissions from fossil-fuel combustion for the electricity production sector. These data are readily available in the 2011 online edition of the IEA Statistics' Report on CO2 Emissions from Fuel Combustion. The Pearson's correlation between our summed measure of national CO2 emissions from fossil-fuel power plants and the IEA's measure is 0.996 (N = 25, 0.001 level of statistical significance, two-tailed test). Nonetheless, it is important to acknowledge that much of CARMA's plant-level emissions data, particularly for facilities outside of the US, are estimates with some level of uncertainty and measurement error.

Level-one (facility-level) independent variables

In all models we include a dummy-coded measure for coal as the plant's primary fuel (coal = 1), obtained from Platts. The carbon content of coal varies with its moisture, but systematic information on the latter is not available for most plants, nor can it be readily estimated [6]. In an unreported sensitivity analysis, which we describe in the Results section, we also include a dummy-coded measure for natural gas as the plant's primary fuel. In models that include both the coal and natural gas dummy variables, liquid fossil fuels as the plant's primary fuel is the reference category. We include a measure of plant size, which refers to full nameplate capacity in megawatt hours, and plant age, measured in years for the plant's oldest generator. We also include plant-level measures of capacity utilization rate (i.e. percent of potential output being produced) and heat rate of generation in terajoules per gigawatt hour (i.e. the ratio of input fuel energy to output electrical energy). Heat rate is the inverse of a plant's thermal efficiency. Like the plant-level measures of primary fuel source, these measures were obtained from Platts.
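A short sketch can make CARMA's accounting identity concrete; the generation and heat-rate values below are hypothetical, and the emission factor shown is an IPCC-style default for bituminous coal rather than CARMA's own parameter:

```python
# Minimal sketch (illustrative numbers): the accounting identity behind
# CARMA's estimates, CO2 = generation * heat rate * emission factor,
# with the base-10 log transform applied as in the analysis.

import math

generation_gwh = 2500.0          # annual net generation, hypothetical
heat_rate_tj_per_gwh = 10.5      # fuel input per unit of output, hypothetical
emission_factor = 94.6           # t CO2 per TJ; IPCC-style default for
                                 # bituminous coal, used here as a placeholder

co2_tonnes = generation_gwh * heat_rate_tj_per_gwh * emission_factor
co2_pounds = co2_tonnes * 2204.62   # metric tonnes to pounds

print(f"estimated CO2: {co2_tonnes:,.0f} t  ({co2_pounds:,.2e} lb)")
print(f"log10(CO2 lb) = {math.log10(co2_pounds):.3f}")
```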
We note that electricity generation = plant size × capacity utilization rate, and electricity generation as well as heat rate are two of the factors used by CARMA when estimating CO2 emissions values for plants that do not have publicly available emissions data [42]. As suggested by an anonymous reviewer, for heat rate, capacity utilization rate, and plant size, our multilevel model estimates are to some extent returning the regression model coefficients that went into CARMA's estimates of plant-level CO2 emissions. Simply, one should fully expect that the coefficients derived from our multilevel models for each of these three measures will be positive and statistically significant. Nonetheless, with these caveats in mind, and consistent with other studies of plant-level emissions [6], we consider the inclusion of these particular level-one predictors to be important for purposes of reducing omitted variable bias, allowing for more valid coefficient estimates for the other independent variables included in the study. In half of the ten estimated models we include total pounds of CO2 emitted by a plant in the year 2004 (i.e. the lagged dependent variable, labeled 'lagged CO2 emissions' in table 2) to account for the extent to which prior emissions influence current CO2 emissions. These data are obtained from CARMA. Such an approach also allows us to partially capture other conditions from the past that might influence a plant's present emissions levels [40]. Like the 2009 measure, these 2004 emissions data are converted into logarithmic form (base 10) to minimize their skewness. For the models that include the lagged dependent variable, the overall sample size is restricted to 952 power plants, since 408 of the 1360 plants existing in 2009 did not exist in 2004.

Level-two (country-level) independent variables

In the first half of the estimated models, as a level-two predictor we employ a dummy-coded variable that indicates whether a nation was a member of the European Union in 2009 (EU member = 1). In the second half of the estimated models we instead employ two dummy-coded variables that indicate whether a nation joined the EU in 2004 or joined the EU in 2007. In four of the estimated models we also include country-level measures of gross domestic product per capita (GDP per capita), total population size, and exports as a percent of total GDP as level-two predictors (all estimated for the year 2009). These data are obtained from the World Bank's online World Development Indicators database (http://databank.worldbank.org/, accessed April 14, 2016). The GDP per capita data, which are converted to logarithmic form (base 10) to minimize skewness, are measured in constant 2010 US dollars. The total population data, which are also converted to logarithmic form (base 10) to minimize skewness, are mid-year estimates and based on the de facto definition of population, which counts all residents regardless of legal status or citizenship. The exports data represent the value of all goods and other market services provided to the rest of the world as a percent of total GDP. Univariate descriptive statistics and bivariate correlations for all variables included in the study are available from the lead author upon request.

Model estimation technique

We use the suite of 'mixed' commands in Stata software to estimate two-level random intercept models of plant-level CO2 emissions [46][47][48].
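For readers working outside Stata, a comparable two-level random-intercept specification can be sketched in Python with statsmodels; the file name and column names below are assumptions for illustration only, and this sketch does not reproduce the paper's cluster-robust standard errors, only the random-intercept structure:

```python
# Minimal sketch: a two-level random intercept model analogous to
# Stata's "mixed", with plants nested in countries. The CSV file and
# all column names are hypothetical stand-ins for the authors' data.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("plants.csv")   # hypothetical plant-level dataset

# log10 CO2 regressed on level-one and level-two predictors; the
# random intercept is induced by groups= (one intercept per country).
model = smf.mixedlm(
    "log_co2 ~ coal + size + age + cap_rate + heat_rate"
    " + eu_member + log_gdp_pc + log_pop + exports_gdp",
    data=df,
    groups=df["country"],
)
result = model.fit(reml=False)   # maximum likelihood, as in the paper
print(result.summary())
```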
Maximum likelihood estimation is employed to estimate the multilevel models, and for all models we estimate robust standard errors clustered by nation, which leads to relatively conservative tests of statistical significance [48]. A basic two-level random intercept model can be designated by the following [49,50]:

y_ij = β0 + μ0j + βX_ij + βX_j + ε_ij,

where the observation y_ij is for individual i (e.g. power plant) within cluster j (e.g. country), and the individuals (power plants) comprise the first level while the clusters (countries) comprise the second level of the model. β0 represents the overall mean intercept, μ0j represents the cluster-specific random intercept, βX_ij represents a vector of coefficients for level-one predictors, βX_j represents a vector of coefficients for level-two predictors, and ε_ij is the disturbance term. The following equation is for the most fully saturated multilevel random intercept model (Model 10) reported in the Results section:

CO2_ij = β0 + μ0j + βCOAL_ij + βSIZE_ij + βAGE_ij + βCAPRATE_ij + βHEATRATE_ij + βLAGCO2_ij + βEUMEMBER2004_j + βEUMEMBER2007_j + βGDP_j + βPOPULATION_j + βEXPORTS_j + ε_ij,

where the dependent variable, CO2_ij, is plant-level CO2 emissions, and the level-one predictors include coal as the primary fuel (βCOAL_ij), plant size (βSIZE_ij), plant age (βAGE_ij), capacity utilization rate (βCAPRATE_ij), heat rate (βHEATRATE_ij), and lagged CO2 emissions (βLAGCO2_ij). The level-two predictors include joined EU in 2004 (βEUMEMBER2004_j), joined EU in 2007 (βEUMEMBER2007_j), GDP per capita (βGDP_j), population size (βPOPULATION_j), and exports as percent of GDP (βEXPORTS_j). Given the small level-two sample size (i.e. 25 nations), in the reported model estimates we treat p-values of .10 or less as statistically significant. Following the suggestion of an anonymous reviewer and consistent with other multilevel studies of plant-level emissions [6], we report R² statistics for each model, which we estimated through the use of Stata's 'xtreg' group of commands for random effects models [48].

Results

The multilevel regression model estimates are reported in table 2. (Notes to table 2: * p < .10, ** p < .05, *** p < .01, one-tailed tests; robust standard errors clustered by nation are in parentheses; LG denotes base-10 logarithmic form; R² statistics obtained using random effects model estimation techniques.) Model 1 consists of coal, plant size, plant age, capacity utilization rate, and heat rate. All five independent variables have positive and statistically significant effects on plant-level CO2 emissions in 2009. These level-one predictors are included in every estimated model. Model 2 introduces the other level-one predictor, lagged CO2 emissions. As expected, lagged CO2 emissions has a positive effect on emissions in 2009, and the other five level-one predictors continue to exhibit positive effects on emissions. The effects of coal, plant age, and capacity utilization rate are reduced in magnitude with the inclusion of the lagged dependent variable, while the effect of heat rate slightly increases in magnitude. The remaining eight models, which include different combinations of level-two predictors, have the same patterned structure with the level-one predictors, where the odd-numbered models (Models 3, 5, 7, 9) include coal, plant size, plant age, capacity utilization rate, and heat rate, and the even-numbered models (Models 4, 6, 8, 10) also include lagged CO2 emissions. Models 3 and 4 introduce the measure for EU membership, while controlling for the level-one predictors.
As a reminder, Bulgaria, the Czech Republic, Estonia, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia, and Slovenia were EU member nations in the year 2009. The estimated effect of EU membership on plant-level emissions is negative and statistically significant in both models, but smaller in magnitude in Model 4, which includes the lagged dependent variable. Overall, it appears that, on average, CO2 emissions in 2009 were lower for fossil-fuel plants in EU member transition nations than for power plants in non-member transition nations. GDP per capita, population size, and exports as percent of GDP are added to Models 5 and 6. The estimated effect of GDP per capita on plant-level CO2 emissions is negative and statistically significant, while the estimated effects of population size and exports as percent of GDP are both positive and statistically significant in Model 5. However, with the inclusion of the lagged dependent variable in Model 6, the effects of GDP per capita and population size on plant-level emissions become nonsignificant. For these two models the effect of EU membership remains negative and statistically significant, and smaller in magnitude when lagged CO2 emissions is included as well. Models 7 and 8 introduce the two more nuanced EU membership predictors: joined EU in 2004 (Czech Republic, Estonia, Hungary, Latvia, Lithuania, Poland, Slovakia, and Slovenia) and joined EU in 2007 (Bulgaria and Romania). GDP per capita, population size, and exports as percent of GDP are included as well in Models 9 and 10. Across all four models, the estimated effect of joined EU in 2004 is negative and statistically significant and stronger in magnitude than the effect of joined EU in 2007, which, with the exception of Model 9, is also negative and statistically significant. Exports as a percent of GDP continues to have a positive effect on plant-level emissions, and the effect of population size is positive and statistically significant in Model 9 but becomes nonsignificant with the inclusion of lagged CO2 emissions in Model 10. However, with the inclusion of the more nuanced EU membership measures, the estimated effect of GDP per capita is nonsignificant in both of these final models. In an unreported sensitivity analysis we estimated multilevel models that also include country-level measures of urban population as a percent of total population, manufacturing as a percent of GDP, overall industrialization as a percent of GDP, and services as a percent of GDP. The effects of these additional level-two predictors on plant-level CO2 emissions are all nonsignificant, and the estimated effects for the other predictors remain consistent with the findings reported in table 2. We also estimated sensitivity models that include a plant-level dummy-coded measure for natural gas as the primary fuel. In such models, which include both the dummy-coded measures for coal and for natural gas, liquid fossil fuels as the plant-level primary fuel is the reference category. The estimated effect of coal remains positive and statistically significant, while the effect of natural gas is nonsignificant. Following the suggestion of an anonymous reviewer, we estimated two additional series of multilevel models, each of which includes a country-level measure to account for climate-related conditions that could potentially influence electricity consumption and thus plant-level carbon emissions.
The first series includes country-level heating degree days (HDD), a measure designed to reflect the demand for energy needed to heat a home or business to a human comfort level of 18°C. The second series includes the midpoint latitude of each nation [7,51], with the general assumption that, since each nation in the analysis is north of the equator, higher values of (northern) latitude will be associated, in general, with colder climate conditions. We note that these two country-level measures are correlated at .886 for the 25 nations in the study. The estimated effects of both country-level climate measures on plant-level emissions are nonsignificant in all models, and the estimated effects of the other predictors remain entirely consistent with the results presented in table 2. Also at the advice of an anonymous reviewer, in additional sensitivity models we included a country-level measure of average electricity price [6], which yielded a nonsignificant effect while not substantively altering the estimated effects of the facility-level and country-level factors on plant-level emissions. We have also estimated multilevel models for samples that systematically exclude the power plants within each of the twenty-five nations in the study. The results for all of the reduced samples are generally consistent with the reported findings for the full sample, suggesting that the analysis presented in table 2 is not driven by the power plants of any single nation. Finally, we have also estimated models that include interactions for pairings of the level-one predictors and for pairings of the level-two predictors, as well as cross-level interactions between country-level and plant-level predictors. The estimated effects of all the interactions on plant-level CO2 emissions are nonsignificant.

Discussion and conclusion

The 25 transition nations of Central and Eastern Europe and Eurasia provide a unique vista from which to examine the relationship between plant-level CO2 emissions and both plant-level and national-level characteristics, as the development strategies adopted by most countries in the region rapidly increased electricity consumption as they transitioned away from centrally planned economies with limited connections to the world economy. Consistent with prior research, we found that facility-level measures such as plant size, plant age, capacity utilization, heat rate, and coal as the primary fuel source are all positively associated with plant-level emissions. Our findings further suggest that one development strategy in particular, export-oriented development, might have been particularly consequential for CO2 emissions: exports as a percent of GDP exhibits a positive association with plant-level emissions across all estimated models. However, not all of these nations adopted the same political and economic strategies as they transitioned away from Soviet-era, centrally planned economies. For some, accession into the European Union introduced an extensive set of market and environmental reforms that shaped energy production and subsequent emissions. In terms of market reforms, the transition to a market economy upon accession into the EU led countries like Poland, Hungary, and the Czech Republic to privatize energy production and lift energy price subsidies, which incentivized energy providers and investors to pursue more efficient energy production [52]. As a consequence, the energy intensities of the EU-member transition economies began to converge with those of other EU countries [52,53].
In addition, accession into the EU introduced a host of environmental sustainability directives pertaining to energy efficiency, climate change mitigation, and other environmental concerns, such as the Accession Treaties of 2003, which included greenhouse gas monitoring mechanisms associated with the Kyoto Protocol, and other directives that addressed greenhouse gas emissions from large facilities, including the Integrated Pollution Prevention and Control Directive (adopted in 1996 and codified in 2008), the Large Combustion Plant Directive (issued in 2001), and the National Emissions Ceiling Directive (originally agreed to in 2001). And although EU environmental directives have been unevenly implemented to some extent across Member States [54,55], we found that transition nations that joined the EU had lower plant-level CO2 emissions in 2009, ceteris paribus, than their non-EU counterparts in the region. We also found that the eight Member States that joined the EU in 2004 had, on average, lower plant-level emissions in 2009 than the two transition nations that joined the EU in the year 2007. Our study is not without its limitations, such as those with CARMA's estimated data that we describe in the Methods section. We conclude by briefly highlighting two additional shortcomings. First, we are unable to ascertain which specific policies adopted in compliance with EU directives are directly associated with reduced plant-level emissions in any of the new Member States. Rather, EU accession is a proxy for a constellation of new policies, programs, and procedures that together are associated with reduced CO2 emissions, despite the likely counter-pressures of increased export-oriented development. Second, 2009 is the most recent year in which the plant-level emissions data are currently available. Therefore, we are unable to evaluate long-term effects of EU accession and other country-level and plant-level factors on plants' emissions, nor are we able to evaluate if and how the recent global economic recession might have influenced these particular socioenvironmental relationships.
Fungal Microbiota Dysbiosis and Ecological Alterations in Gastric Cancer

Changes in bacteriome composition have a strong association with gastric cancer (GC). However, the relationship between stomach fungal microbiota composition and human host immune factors remains largely unknown. With high-throughput internal transcribed spacer region 2 (ITS2) sequencing, we characterized the gastric fungal microbiome among GC (n = 22), matched para-GC (n = 22), and healthy individuals (n = 11). A total of 4.5 million valid tags were generated and stratified into 1,631 operational taxonomic units (OTUs), and 10 phyla and 301 genera were identified. The presence of GC was associated with a distinct gastric fungal mycobiome signature, characterized by decreased biodiversity and richness and significant differences in fungal composition. In addition, fungal dysbiosis was reflected by an increased ratio of Basidiomycota to Ascomycota and a higher proportion of opportunistic fungi, such as Cutaneotrichosporon and Malassezia, as well as by the loss of Rhizopus and Rhodotorula during the progression of cancers. A panel of GC-associated fungi (e.g., Cutaneotrichosporon and Rhodotorula) was found to exhibit adequate diagnostic value. Furthermore, the mRNA levels of cytokines and chemokines were measured and correlated with the specific fungal dysbiosis, pointing to a possible mechanism in GC. This study reveals GC-associated mycobiome dysbiosis characterized by altered fungal composition and ecology and suggests that the fungal mycobiome might play a role in the pathogenesis of GC.

INTRODUCTION

Gastric cancer (GC) is a dangerous disease: in 2020 it ranked third among causes of cancer death around the world, and it is the fifth most widely diagnosed cancer (Ferlay et al., 2021). GC typically develops through multiple steps, from atrophic gastritis (AG) progressing to intestinal metaplasia (IM) and eventually manifesting as GC. Host-microbiota interactions, such as gastric microbial infections and host immune factors, have been shown to contribute to tumor development in the gastric system (Dicken et al., 2005). However, the etiology and pathogenesis of GC, as well as their translational implications, require further research. Recent evidence indicates the involvement of the gastric microbiome in disease onset and progression. The stomach was long considered "a hostile place" for microbes because its conditions were thought unsuitable for microbial growth (Rajilic-Stojanovic et al., 2020). Since the 1980s, Helicobacter pylori has been recognized as the most common microbial agent in gastrointestinal (GI) tract disorders (Parkin et al., 2005; Polk and Peek, 2010; de Sablet et al., 2011). However, among all patients infected with H. pylori, only 1-3% will eventually develop GC (Wang et al., 2014), and progression to cancer can still occur after eradication of H. pylori by pharmacological treatment (Fukase et al., 2008), indicating that microbes other than H. pylori can also colonize the stomach and play a role in GC onset and progression. Advances in high-throughput sequencing technologies have made it possible to characterize the alterations of gastric microbial composition and characteristics in health and disease (Dong et al., 2019; Deng et al., 2021; Yang et al., 2021).
A study provided evidence that oral bacteria are more likely to aggregate in GC samples, and the genera Streptococcus, Prevotella, Neisseria, Haemophilus, and Porphyromonas were reported to be the most dominant bacteria in GC (Bik et al., 2006). Emerging or re-emerging fungi are becoming a worldwide public health threat closely associated with the immune modulation of the host (Lockhart and Guarner, 2019). Recent studies have confirmed fungal compositional alterations in colorectal adenoma tissue (Coker et al., 2019), Crohn's disease (Liguori et al., 2016), and patients with ulcerative colitis (Sokol et al., 2017), providing opportunities for the discovery of new relationships between host and fungal microbiome interactions. By comparison, there is a paucity of research aimed at understanding the functional role of the gastric fungal microbiota in GC, especially from the perspective of its potential diagnostic value in GC screening. In this study, through a comprehensive analysis of the fungal microbiomes in GC, we aimed to characterize the components of the gastric fungal microbiome and to identify the clinical relevance of specific fungi that may be crucial for GC pathogenesis and diagnosis and that can be harnessed to develop better prevention and treatment strategies for GC. To the best of our knowledge, this study is the first of its kind to provide detailed evidence for the interactions between host immune factors and the fungal microbiome, and especially for the potential diagnostic significance of fungi in GC.

Abbreviations: GC, gastric cancer; ITS2, internal transcribed spacer region 2; HC, healthy controls; OTUs, operational taxonomic units; AG, atrophic gastritis; IM, intestinal metaplasia; H. pylori, Helicobacter pylori; GI, gastrointestinal; LDA, linear discriminant analysis; LEfSe, LDA effect size; PCoA, principal coordinates analysis; NMDS, nonparametric multidimensional scaling; ROC, receiver operating characteristic; AUC, area under the curve; TAMs, tumor-associated macrophages.

FIGURE 1 | This flowchart illustrates the conceptual framework of the study. We enrolled 11 healthy controls (HC) and 22 patients with gastric cancer (GC) and collected tissues for fungal internal transcribed spacer region 2 gene sequencing. We then compared the biodiversity among the three groups, obtained taxonomic information at the phylum and genus levels, used linear discriminant analysis (LDA) effect size (LEfSe) and receiver operating characteristic (ROC) curve analysis to identify differential fungi, measured the relative mRNA levels of inflammatory factors, and performed a specific fungal-immune association analysis to explore the possible mechanism of fungi in GC. Finally, we used FUNGuild to further predict the functional classification of the specific fungal community.

Research Design and Sample Collection

We enrolled in this study 22 patients with GC, confirmed by surgery combined with pathologic biopsy, and 11 healthy controls (HC) from the Affiliated Drum Tower Hospital of Nanjing University Medical School. A total of 22 GC tissues and 22 para-GC tissues collected by surgery and 11 HC tissues collected by endoscopic biopsy were selected for microbiota analysis based on the American Society for Gastrointestinal Endoscopy guideline (ASGE Standards of Practice Committee et al., 2013) (Figure 1). The GC tissue was obtained by total or partial gastrectomy.
The locations of the tumor resection specimens included the antrum, body, and cardia. The adjacent (para-GC) tissue was taken from around the cancer tissue, 2 cm away from the tumor. The inclusion criteria for the healthy group were patients who came to the hospital for physical examination, without tumors, diabetes, or other digestive diseases. Approval for the study was obtained from the Ethics Committee of the Affiliated Drum Tower Hospital of Nanjing University Medical School (ID: 2021-514-02), and written informed consent was obtained from every participant.

PCR Amplification

All specimens were kept at −80°C until examination. DNA was extracted using the E.Z.N.A.® Soil DNA Kit (Omega Bio-Tek, Norcross, GA, USA) according to the manufacturer's guidelines. Fungal internal transcribed spacer region 2 (ITS2) amplification was performed using the primers ITS3F (GCATCGATGAAGAACGCAGC) and ITS4R (TCCTCCGCTTATTGATATGC). The DNA concentration and purity were checked using a NanoDrop 2000 UV-vis spectrophotometer (Thermo Fisher Scientific, Wilmington, USA), and the DNA samples were validated on a 1% agarose gel. The ITS2 rRNA PCR amplification was performed with the following protocol: initial denaturation at 95°C for 3 min; 35 cycles of denaturation at 95°C for 30 s, annealing at 55°C for 30 s, and extension at 72°C for 45 s; a single final extension at 72°C for 10 min; and a hold at 10°C. The PCR mixes contained 10 µl of 2× Pro Taq buffer, 0.8 µl of forward primer (5 µM), 0.8 µl of reverse primer (5 µM), 20 ng/µl of template DNA, and ddH2O up to 20 µl. PCR reactions were carried out in triplicate. The PCR product was purified and quantified using the AxyPrep DNA Gel Extraction Kit (Axygen Biosciences, Union City, CA, USA) according to the manufacturer's instructions (Promega, USA). The amplicons were pooled in equimolar ratios and subjected to paired-end sequencing on an Illumina MiSeq platform (Illumina, San Diego, USA). The raw reads were deposited in the Sequence Read Archive (SRA) database at the National Center for Biotechnology Information (NCBI) (Accession Code: PRJNA797736).

Processing of Sequencing Results and Taxonomical Annotation

The raw ITS2 rRNA gene sequencing reads were demultiplexed, quality-filtered with FASTP version 0.20.0 (Chen et al., 2018), and merged with FLASH version 1.2.7 (Magoc and Salzberg, 2011) using the following criteria: (i) the 300 bp reads were truncated at any site receiving an average quality score of <20 over a 50 bp sliding window, truncated reads shorter than 50 bp were discarded, and reads containing ambiguous characters were also discarded; (ii) only overlapping sequences longer than 10 bp were assembled according to their overlapping region, with a maximum mismatch ratio of 0.2 in the overlap region; reads that could not be assembled were discarded; and (iii) samples were identified based on the barcodes and primers at both ends of the sequence, and the sequence direction was adjusted (exact barcode matching, with up to two nucleotide mismatches allowed in primer matching). Operational taxonomic units (OTUs) were clustered at a 97% similarity cutoff (Edgar, 2013) using UPARSE version 7.1, and chimeric sequences were identified and removed. Alpha diversity was calculated with QIIME, including the observed_species, chao1, and PD_whole_tree indices (Schloss et al., 2009).
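The sliding-window truncation rule in criterion (i) above can be sketched as follows; this is a toy illustration of the rule, not FASTP's actual implementation:

```python
# Minimal sketch: scan a read with a 50 bp sliding window, truncate at
# the first window whose mean Phred quality drops below 20, and discard
# reads shorter than 50 bp after truncation. Toy data, toy code.

def truncate_read(seq: str, quals: list[int], window: int = 50, min_q: float = 20.0):
    for start in range(0, max(len(seq) - window + 1, 1)):
        win = quals[start:start + window]
        if win and sum(win) / len(win) < min_q:
            seq, quals = seq[:start], quals[:start]   # cut at window start
            break
    return (seq, quals) if len(seq) >= 50 else None   # None => discard

# toy read: high quality for 100 bases, then a low-quality tail
seq = "ACGT" * 40
quals = [35] * 100 + [5] * 60
kept = truncate_read(seq, quals)
print("kept length:", None if kept is None else len(kept[0]))
```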
PCoA of the Bray-Curtis distance, with each sample colored by disease phenotype, was constructed and used to assess the variation between experimental groups. The taxonomic classification of fungal OTUs was performed against the UNITE (Release 8.2) database (Koljalg et al., 2013).

Total Tissue RNA Extraction and Quantitative RT-PCR for Cytokine mRNA

Total RNA was extracted from tissues (stored at −80°C) using an RNA extraction kit (Invitrogen) and reverse-transcribed according to the instructions of the reverse transcription kit (Vazyme). The relative mRNA level of each cytokine was calculated by the 2^(−ΔΔCt) method, with GAPDH as the internal reference. The primers are listed in Supplementary Table 1.

Statistical Analysis

Biomarker discovery analysis was performed by linear discriminant analysis (LDA) effect size (LEfSe) (Segata et al., 2011) combined with the Kruskal-Wallis rank-sum test to detect features with significantly different abundances between assigned taxa and to estimate the effect size of each feature. Two-tailed Mann-Whitney U tests were run in SPSS 24.0, and statistically significant differences between case and control groups were determined with GraphPad Prism 8.0 (GraphPad, San Diego, CA, USA). The Wilcoxon rank test, Tukey group test, permutational multivariate analysis of variance (PERMANOVA), and Random Forest analysis were performed with the R project, and the fungal functional classification prediction was inferred using FUNGuild (Nguyen et al., 2016) (version 1.0). The Gini index was calculated with Python and plotted with ggplot.

Basic Information of Study Participants

In this study, we collected 55 tissues from 11 HCs and 22 patients with GC at the Affiliated Drum Tower Hospital of Nanjing University Medical School, dividing them into three groups (11 HC, 22 GC, and 22 para-GC) for the sake of comparability. The detailed clinical characteristics are shown in Table 1 and Supplementary Table 2. The participant information includes age, gender, and several clinical biochemistry indices. The GC and HC groups show no significant difference in age. Furthermore, four serum tumor biomarkers of the digestive system (AFP, CEA, CA-125, and CA-199) were unable to discriminate between controls and GCs.
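Before turning to the diversity results, the 2^(−ΔΔCt) calculation used in the cytokine qPCR analysis above can be sketched as follows; the Ct values are hypothetical:

```python
# Minimal sketch: relative mRNA level by the 2^-ddCt method with GAPDH
# as the internal reference gene. All Ct values are illustrative.

def relative_expression(ct_target_sample: float, ct_ref_sample: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    d_ct_sample = ct_target_sample - ct_ref_sample      # normalize to GAPDH
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control                  # calibrate to control
    return 2.0 ** (-dd_ct)

# one cytokine, GC tissue vs HC tissue (hypothetical Ct values)
fold = relative_expression(ct_target_sample=24.1, ct_ref_sample=18.0,
                           ct_target_control=26.5, ct_ref_control=18.2)
print(f"fold change (GC vs HC) = {fold:.2f}")
```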
Comparison of the Microbial Diversity Between Patients With GC and HC

We performed ITS2 gene sequencing on the tissues to assess the composition of the fungal microbiota among the three groups. The sequencing yielded more than 4.5 million tags, with the dominant tag length between 260 and 320 bp after trimming and filtering (Supplementary Table 3). According to their similarity, all sequences were initially divided into 1,761 OTUs, and 1,631 OTUs were finally obtained after specific processing. The Good's coverage index was used to evaluate the sequencing depth, and it exceeded 0.990 for all groups. The rarefaction curves of both the GC and control samples tended to plateau, indicating that our sequencing depth and coverage were satisfactory and reflected the species richness (Supplementary Figure 1). We then examined the alpha and beta diversities of the fungal fractions. Alpha diversity, a measure of genera richness (number of genera), was evaluated by chao1, observed_species, and PD_whole_tree. Results in Supplementary Figures 2A-C indicate no significant difference in alpha diversity between the GC and matched para-GC groups (chao1: p > 0.05; observed_species: p > 0.05; PD_whole_tree: p > 0.05). The site-by-site alpha diversity analysis, however, shows markedly lower diversity in the GC tissues than in the normal HC tissues (chao1: p < 0.0001; observed_species: p = 0.0001501471; PD_whole_tree: p = 0.0002492035) (Figures 2A-C), demonstrating that microbial diversity and richness decrease during the development and progression of GC. Beta diversity was next used to assess the microbial community compositions among the three groups by performing PERMANOVA on Bray-Curtis distances, which account both for patterns of presence-absence of taxa and for changes in their relative abundance. The PCoA showed that individuals belonging to the GC group were fairly well separated from the HC group (Figure 2D). In addition, nonparametric multidimensional scaling (NMDS) was applied to visualize the distances between categories; as shown in Figure 2E, there is a distinct separation between the GC tissues and HC samples. However, the GC and para-GC groups did not separate and essentially overlapped (Supplementary Figures 2D,E). Altogether, these results reveal that the GC group exhibited a more distinctive fungal profile than the HC group. Considering the number of samples, stages I and II were defined as early GC and stages III and IV as advanced GC; the results showed no significant difference in fungi between early and advanced GC (Supplementary Figure 3).

FIGURE 2 | Altered fungal microbiota biodiversity in GC. (A-C) Alpha diversity. Chao1, observed_species, and PD_whole_tree describe the alpha diversity of the fungi in the GC and HC groups (Tukey test, ***p < 0.001; ****p < 0.0001). (D,E) Principal coordinate analysis of Bray-Curtis distance with each sample colored according to group. PC1 and PC2 represent the top two principal coordinates that captured most of the diversity; the fraction of diversity captured by each coordinate is given as a percentage. Groups were compared using the PERMANOVA method.

The Gastric Fungal Profile Differs in Patients With GC

We next used a Venn diagram to show the distribution of shared and group-specific OTUs according to OTU abundance. A total of 64 OTUs were shared among the three groups, with 869, 213, and 339 OTUs unique to the HC, GC, and para-GC groups, respectively (Figure 3A). The count of unique OTUs in the HC group was about 4-fold that of the GC group and about 3-fold that of the para-GC group, revealing that the fungi of GC differ from those of normal controls. Based on these findings, we matched the OTU representative sequences against the UNITE database to acquire taxonomic information for the OTUs from phylum to species level. In particular, there were considerable variations in abundance between the HC and GC groups at the phylum level, but no statistical differences between the GC and para-GC groups (Supplementary Figure 4). Compared with the HC group, GC had a dominant abundance of Ascomycota, followed by Basidiomycota (Figure 3B). A dramatically decreased abundance of Rozellomycota and an obvious increasing tendency of Basidiomycota and Ascomycota were also found in GC compared with the HC group. Moreover, the ratio of Basidiomycota to Ascomycota, which has been reported to reflect fungal dysbiosis (Coker et al., 2019), was higher in the GC group than in the HC group (Figure 3C, p < 0.0001).
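The Bray-Curtis/PCoA workflow used above can be sketched in a few lines; the count table below is randomly generated for illustration, scipy is assumed to be available, and the PCoA is the classical eigendecomposition of the double-centered squared-distance matrix:

```python
# Minimal sketch: Bray-Curtis distances between samples, then classical
# PCoA (Gower's transform + eigendecomposition). Toy data only.

import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
counts = rng.poisson(5, size=(8, 20)).astype(float)   # 8 samples x 20 OTUs

d = squareform(pdist(counts, metric="braycurtis"))    # distance matrix
n = d.shape[0]
j = np.eye(n) - np.ones((n, n)) / n                   # centering matrix
b = -0.5 * j @ (d ** 2) @ j                           # Gower's transform
evals, evecs = np.linalg.eigh(b)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

coords = evecs[:, :2] * np.sqrt(np.maximum(evals[:2], 0))  # PC1, PC2
explained = evals[:2] / evals[evals > 0].sum()
print("PC1/PC2 variance explained:", np.round(explained, 3))
print("ordination coordinates:\n", np.round(coords, 3))
```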
Additional differences were observed at lower taxonomic levels. The healthy gastric mycobiota was composed of Saccharomyces (11.0%) and Cladosporium (1.3%), with unidentified fungal genera contributing the vast majority (76.9%). This result is consistent with a previous study, which confirmed that the most prevalent fungal genera in healthy individuals include Saccharomyces and Cladosporium (26). Additionally, Apiotrichum was found to be the dominant genus of the GC group (23.7%), followed by Cutaneotrichosporon (8.8%), Starmerella (7.5%), Sarocladium (5.8%), Saccharomyces (4.3%), and Cladosporium (3.6%) (Figure 3D). Furthermore, several fungal genera were significantly enriched in the GC group, including Apiotrichum (p < 0.0001), Cutaneotrichosporon (p < 0.0001), and Sarocladium (p < 0.0001) as well as Malassezia (p = 0.004) (Figure 3E), whereas Rhizopus (p < 0.0001), Rhodotorula (p < 0.0001), Apodus (p < 0.0001) and Cystobasidium (p < 0.0001) were enriched in the HC group (Figure 3F). Overall, these data point to dysbiosis of the gastric fungal microbiota, which might be linked to gastric carcinogenesis.

FIGURE 3 | Comparisons of the relative abundance of dominant fungal taxa at the phylum level ("p" represents phylum). (C) The ratio of Basidiomycota to Ascomycota relative abundance between the two groups (Mann-Whitney U test, ****p < 0.0001). (D) Comparisons of the relative abundance of dominant fungal taxa at the genus level ("g" represents genus). (E,F) The differentially abundant fungal genera enriched in GC (E) and in HC (F) between the HC and GC groups (Mann-Whitney U test, **p < 0.01; ****p < 0.0001).

Specific Taxonomic Changes
To further characterize the microbiota alterations in GC, multilevel LEfSe analysis was applied to all taxa in the groups (Figures 4A,B). The cladogram showed a total of 39 differential taxa, including nine classes, 11 orders, and 19 families, which were identified as responsible for discriminating the GC and HC groups (LEfSe: p < 0.05, q < 0.05, LDA > 3.0). The differentially abundant fungal lineages were consistent across taxonomic levels within each group. For example, Malasseziaceae at the family level, Malasseziales at the order level, and Malasseziomycetes at the class level were enriched in GC, while Sporidiobolaceae at the family level, Sporidiobolales at the order level, and Microbotryomycetes at the class level aggregated in controls. LEfSe showed that 40 taxa were enriched in the GC group, including Apiotrichum, Cutaneotrichosporon, Sarocladium, Pichia, Chaetomium, and Malassezia, and 58 taxa were enriched in the HC group, including Rhodotorula, Exophiala, Rhizopus, Lecanactis, Coprinopsis, Russula, and so forth. The details of other taxa are shown in Figures 4A,B.

Fungal Microbiota-Based Prediction of GC
We further assessed the diagnostic ability of the top 10 genera that displayed the most significant differences between the GC and HC groups. The receiver operating characteristic (ROC) curves showed considerable diagnostic potential for these genera, including Cystobasidium (Figure 4C). Combined with the Random Forest analysis (Supplementary Figure 5), these genera clearly distinguished the GC and HC groups and thus could be used for diagnosis with a certain degree of accuracy.
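As an illustration of the diagnostic evaluation just described, the sketch below computes an ROC curve and AUC for a single hypothetical GC-enriched genus using scikit-learn; the abundance distributions are simulated and do not reproduce our measurements.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical relative abundances of one GC-enriched genus (e.g., Apiotrichum)
rng = np.random.default_rng(2)
abund_gc = rng.normal(0.20, 0.08, 22).clip(0)   # 22 GC tissues
abund_hc = rng.normal(0.05, 0.04, 11).clip(0)   # 11 HC tissues

y = np.r_[np.ones(22), np.zeros(11)]            # 1 = GC, 0 = HC
x = np.r_[abund_gc, abund_hc]

auc = roc_auc_score(y, x)
fpr, tpr, thresholds = roc_curve(y, x)          # points of the ROC curve
print(f"AUC = {auc:.2f}")                       # single-genus discrimination
```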
GC Microbiota Shows Specific Fungal-Immune Associations
Several polymorphisms of cytokines and chemokines have been implicated in cancer development, from tumor initiation, promotion, and progression to metastasis, and may increase the risk of carcinoma (Grivennikov et al., 2010; Tsujimoto et al., 2010; Lee et al., 2014). To further investigate the relationship between these functional factors and fungi in the GC and HC groups, we measured the relative mRNA levels of cytokines and chemokines in the tumor and normal tissue samples. The mRNA levels of pro-inflammatory cytokines and chemokines, such as CXCL9, CXCL10, CXCL11, and TNF-α, were markedly elevated in the GC group, whereas anti-inflammatory cytokines and chemokines, such as CCL17, IL-4, IL-6, IL-8, and YM-1, were dramatically lowered (Figure 5A). Interestingly, the GC group had a significantly higher level of IL-10 mRNA (Figure 5A), which is probably released by tumor-associated macrophages (TAMs), creating an immune-evasive microenvironment that dictates poor prognosis (Zhang et al., 2022). It has also been reported that the mucosal levels of some cytokines are associated with H. pylori infection (Yamaoka et al., 1997), and that a set of chemokines is involved in H. pylori-related immunopathologic responses (Jafarzadeh et al., 2019). We next performed a correlation analysis of the top 20 differential fungi and immune-related factors to assess whether the fungal microbiota composition was associated with the disordered inflammatory response. Results in Figure 5B show a positive correlation between the GC-enriched fungi, such as Apiotrichum, Cutaneotrichosporon, Simplicillium, and Sarocladium, and several pro-inflammatory immune factors, such as TNF-α, CXCL9, CXCL10, and CXCL11, implying that these four fungi may participate in the tumor-promoting immune reaction. Furthermore, increased IL-10 mRNA levels were positively linked with the presence of the four fungi stated earlier (Figures 5A,B). In contrast, the fungi reduced in GC, including Cystobasidium, Apodus, Lecanactis, Rhizopus, Rhodotorula, and Exophiala, were positively correlated with immune factors characterized by anti-inflammatory properties (e.g., IL-4, IL-6, YM-1, and CCL17), suggesting that these genera have the potential to enhance the anti-inflammatory response in GC. These findings indicate that a dynamic interplay exists between fungi and immune-related components in the gastric microbiota and that GC is accompanied by unique fungal alterations.

Functional Classification Prediction of the Specific Taxa
Due to the lack of a powerful instrument for annotating fungal function, we focused on the trophic modes and functional guilds of the fungal communities instead, using FUNGuild to predict the functional and nutritional classification of the specific fungal community. In the HC group, the most widely distributed guild was Undefined Saprotroph (64.3%), whereas in the GC group it was Soil Saprotroph (54.9%). The most divergent guild between the two groups was Animal Pathogen. In addition, we used heatmaps to describe the functional classification predictions by analyzing the guild and trophic-mode assignments; there were distinctly differential functions between the GC and HC groups (Figures 6A,B). Thus, these results reveal a unique symbiotic ecological relationship during the occurrence and development of GC, which is essential to maintaining gastric fungal homeostasis and may provide a new strategy for further studies.
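The fungal-immune association analysis reported above amounts to pairwise Spearman correlations between genus abundances and cytokine mRNA levels. A minimal sketch is given below, with simulated data standing in for the real abundance and expression matrices; the dimensions and values are illustrative only.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n = 33                                   # hypothetical number of tissue samples
genus = rng.random((n, 4))               # e.g., Apiotrichum, Cutaneotrichosporon, ...
cytokine = genus * 0.5 + rng.random((n, 4)) * 0.5   # induce some correlation

rho = np.empty((4, 4))
pval = np.empty((4, 4))
for i in range(4):                       # genus i vs. cytokine j
    for j in range(4):
        rho[i, j], pval[i, j] = spearmanr(genus[:, i], cytokine[:, j])

print(np.round(rho, 2))                  # matrix analogous to the Figure 5B heatmap
```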
FIGURE 5 | (A) The relative tissue mRNA levels of 10 immune cytokines in the HC and GC groups; differences were calculated by the Mann-Whitney U test (***p < 0.001; ****p < 0.0001). (B) Correlation analysis of the top 20 differential genera and 10 immune-related factors between the two groups according to Spearman's correlation analysis. The correlation effect is indicated by a color gradient from blue (negative correlation) to red (positive correlation). Correlation coefficients and p-values (*p < 0.05; **p < 0.01) are shown.

DISCUSSION
The fungal microbiome, which includes the Fusarium and Aspergillus genera as well as some other genera (e.g., Alternaria and Mucor) that make up the emerging pathogen group in humans, is increasingly acknowledged to play a significant role in cancer etiology (Perlin et al., 2017). The interaction of the fungal microbiota with the host has emerged as a critical factor modifying host physiology and gastric microbiota-related diseases. In this article, we show that disease-specific alterations in the fungal microbiome exist in GC. For the first time, we examine the immunological process of GC pathogenesis and the predictive potential of fungi in patients with GC from the host-microbiota interaction perspective, by investigating the fungal and local immune reactions concurrently. With the advancement of next-generation sequencing technologies, fungal taxa alterations have been shown to influence the development of the host immune response and the initiation and progression of several cancers (Sokol et al., 2017; Aykut et al., 2019; Coker et al., 2019). Therefore, it is equally important to explore the relationship between the disorder of fungi and the exact mechanism underlying the development of GC. It is well known that the microbiota associated with GI tissue or mucosa is not well represented in paired stool specimens. In this study, we directly profiled the disease-specific fungal microbiome in GC tissue, which is often the only strategy to uncover specific dysregulated states associated with diseased tissue microenvironments. Biodiversity was found to decrease in both the GC samples and the paired para-GC samples, and the composition was modified, with a reduction of Rozellomycota and an increase of Ascomycota and Basidiomycota compared with the HC group. Besides, the ratio of Basidiomycota to Ascomycota and the proportion of opportunistic fungi (i.e., Malassezia and Trichosporon), which reflect fungal dysbiosis (Kazmierczak-Siedlecka et al., 2020), were higher in the GC than in the HC group. We also found that the gastric fungi discriminated GC and normal controls into two significantly distinct groups, indicating unique fungal profiles. Zhong et al. (2021) found that the abundance of Fusicolla acetilerea, Arcopilus aureus, and Fusicolla aquaeductuum was higher in GC tissue than in adjacent non-cancerous tissues, but no difference was observed between GC tissue and HCs, possibly owing to the tight anatomical proximity of the sampling sites and/or different population attributes in that investigation. We also identified GC-specific shifts in fungal composition, including nine classes, 11 orders, and 19 families, such as the enrichment of Cutaneotrichosporon, Apiotrichum, and Malassezia of the Basidiomycota phylum and the loss of Lecanactis, Apodus, and Exophiala of the Ascomycota phylum, implying that they may play a role in GC pathogenesis and necessitating further research.
FIGURE 7 | Summary of the interactions between specific fungi and immune factors in the tumor microenvironment during the progression of GC. Apiotrichum, Simplicillium, Sarocladium, and Cutaneotrichosporon were positively correlated with M1 macrophage-related pro-inflammatory factors, such as CXCL9, CXCL10, CXCL11 and TNF-α, indicating that these four fungi may be involved in the initiation phase of tumor development. Cystobasidium, Apodus, Lecanactis, Rhizopus, Rhodotorula, and Exophiala were positively correlated with the M2 macrophage-related anti-inflammatory factors IL-6, IL-8, CCL17, and YM-1, suggesting that these genera may participate in the progression after tumor formation. Created with BioRender.com.

Previous studies have demonstrated that Malassezia usually exists on human skin and has the ability to colonize the GI tract (Richard et al., 2015; Sokol et al., 2017); it can also interact with cells involved in immune functions and induce the production of a variety of cytokines and secreted enzymes, eventually leading to carcinogenesis (Gaitanis et al., 2012). Interestingly, our results showed an obvious downregulation of Rhodotorula in GC, a genus that has emerged as an opportunistic pathogen infecting susceptible patients, especially those with cancer or AIDS (Miceli et al., 2011). Notably, we identified some previously unreported GC-associated fungi, which may reflect variables such as geographic area, age, gender, diet, and sequencing methods that can affect microbiome composition. We also evaluated the diagnostic value of the top 10 genera, and all of them exhibited promising results. These observations provide an opportunity to apply distinctive fungi in detecting and monitoring the progression of GC. Undeniably, further study is required with a larger sample size, more clinical centers, and stricter screening criteria, and to uncover the mechanism linking dysregulated fungi and GC, further epidemiological research and bio-functional testing are recommended. Aberrant production of cytokines that regulate angiogenesis, tumor growth, progression, and metastasis has been studied extensively (Lee et al., 2014; Park et al., 2019). The pro-inflammatory chemokines CXCL9, CXCL10, and CXCL11 were 2-fold overexpressed in GC compared with normal tissues (Lee et al., 2014), which is consistent with our data. IL-10 has been confirmed to be highly expressed in various types of cancer, including ovarian cancer (Ahmad et al., 2018), lymphoma (Gupta et al., 2012), prostate cancer (Lin and Zhang, 2017), and GC (Fortis et al., 1996), and it can downregulate the inflammatory cytokines IL-6 and IL-8. In our study, the mRNA levels of IL-6 and IL-8 were decreased in the GC group, probably due to the elevated IL-10 level in the local area of the tumor. Interestingly, the correlation analysis of differential fungi in the HC and GC groups indicated that immune responses were highly related to the variations in genera. The levels of CXCL9, CXCL10, CXCL11, TNF-α, and IL-10 were positively correlated with Apiotrichum, Cutaneotrichosporon, Simplicillium, and Sarocladium, while the downregulated IL-4, IL-6, IL-8, CCL17, and YM-1 showed a positive association with Cystobasidium, Apodus, Lecanactis, Rhizopus, Rhodotorula, and Exophiala. All these results provide a basis for further investigation of the mechanisms linking different fungal infections and GC (Figure 7). Limitations of the current evidence include the modest number of tissue samples and the high proportion of unclassified taxa.
A larger, multicenter cohort and optimized sequencing methods are needed to validate our results in the future. Furthermore, by describing the alteration of gastric mycobiome homeostasis during GC carcinogenesis, our research indicates the prospective use of gastric fungal markers in the prediction of GC. We also identified changes in GC-specific fungal and immunological markers, indicating that synergistic host-fungi interactions may contribute to GC pathogenesis. To assist the development of therapeutic and diagnostic targets against GC, further research on the interactions between fungi, bacteria, and the host is recommended.

DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://www.ncbi.nlm.nih.gov/, PRJNA797736.

ETHICS STATEMENT
The studies involving human participants were reviewed and approved by The Affiliated Drum Tower Hospital of Nanjing University Medical School (ID: 2021-514-02). The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS
PY, XZ, RX, XL, MC, and ZX: methodology and data validation. PY: formal analysis. KA: methodology and language editing. PY and XZ: writing the original draft. HS, ZL, and ZX: conceptualization and supervision. All authors contributed to the article and approved the submitted version.
6-Methoxyflavone and Donepezil Behavioral Plus Neurochemical Correlates in Reversing Chronic Ethanol and Withdrawal Induced Cognitive Impairment

Purpose: Chronic ethanol exposure causes neurotoxicity and long-term learning and memory impairment along with hippocampal and frontal cortical dysfunction. Flavonoids possess antioxidant and anti-inflammatory properties believed to be contributory factors in reversing cognitive decline. 6-Methoxyflavone (6-MOF), a flavonoid occurring naturally in medicinal plants, has been reported to confer neuroprotection by reversing cisplatin-induced hyperalgesia and allodynia. Consequently, this study was designed to investigate 6-MOF activity in models of chronic ethanol-induced cognitive impairment along with neurochemical correlates.

Methods: Mice were given ethanol orally (2.0 g/kg daily) for 24 days plus either saline, 6-MOF (25-75 mg/kg) or donepezil (4 mg/kg), and then ethanol was withdrawn for the next 6 days. Animals were subsequently assessed for their cognitive performance in several models on days 1, 12, and 24, during abstinence (Day-26) and on the 7th day of the washout period. Following behavioral assessment, post-mortem dopamine, noradrenaline and vitamin C concentrations were quantified in the frontal cortex, hippocampus and striatum using HPLC with UV detection.

Results: Chronic ethanol treatment suppressed locomotor activity and impaired cognitive tasks, which included novel object recognition, performance in the Morris water maze as well as the Y-maze, socialization and nest-building behavior, throughout the protocol and during withdrawal. These behavioral deficits were at least partially restored by the co-administration of 6-MOF or donepezil with ethanol, as were ethanol-induced deficits in frontal cortical and hippocampal dopamine plus noradrenaline, together with striatal dopamine. 6-MOF co-administration with ethanol also modestly restored striatal vitamin C levels.

Conclusion: It is postulated that, in addition to donepezil, 6-MOF may be useful not only in the treatment of ethanol withdrawal severity but also in the management of chronic ethanol withdrawal induced cognitive impairment.

Introduction
Chronic alcoholism is invariably associated with cognitive abnormalities that give rise to long-term impairment of learning and memory. Such outcomes are associated not only with shrinkage in brain volume but also with impaired hippocampal and prefrontal cortical function. 1 In addition, chronic alcohol exposure followed by abstinence is injurious to grey matter by way of microstructural disruption of myelin and dysfunction in the prefrontal cortex, instigating impaired retrieval and recall of fear memories. 2 Studies have shown that chronic alcohol consumption modifies cholinergic and monoamine neurotransmission, inducing a negative affective state, although in this respect adaptation of hippocampal neuronal excitability is subject to a gender difference. 3,4 Moreover, chronic ethanol induces further cognitive dysfunction as the result of cortical and hippocampal oxido-nitrosative stress, elevated cytokine levels, neuroinflammation and raised acetylcholinesterase (AChE) activity. 5 In light of this, donepezil is an anticholinesterase drug reported to have a beneficial effect on cognitive functioning 6 in alcoholic patients, although further studies have been advocated to confirm any possible role in managing alcohol-related dementia.
Donepezil has also been reported to possess neuroprotective properties 7 and anti-apoptotic activity, as well as an inhibitory action against alcohol-induced toxicity. 8 Flavonoids are secondary plant metabolites with a broad spectrum of medicinal properties, and they have been employed as fundamental constituents of pharmaceutical, cosmetic and nutraceutical preparations. 9,10 Their diverse pharmacological properties include antioxidant, 11 anti-inflammatory, 12 anxiolytic 13 and neuroprotective 14 activities, so they have been used in the treatment of several ailments. These polyphenolic phytochemical compounds have been reported to reverse neuronal damage, stroke and ischemia. [15][16][17][18] Their antioxidant and anti-neuroinflammatory properties are considered to be major contributory factors in slowing rates of cognitive decline. Thus, it has been shown that the consumption of flavonoid-rich foods could be a valuable approach towards reducing cognitive impairment in older adults. 19 Flavonoids modulate neuronal activity by interacting with γ-aminobutyric acid (GABA), dopamine, glycine and serotonin neurotransmission. 20 Even though there is evidence that flavonoids enhance cognitive function in both humans and animals, the underlying mechanism(s) have yet to be fully elucidated. 10 6-Methoxyflavone (6-MOF, Figure 1), which can be found naturally in Anvillea garcini leaves, is a flavonoid 21 capable of alleviating cisplatin-induced neuropathic-like pain. 22 In addition, it has been shown to act as a flumazenil-insensitive positive allosteric modulator of GABA responses at human recombinant α1β2γ2L and α2β2γ2L GABA A receptors and of GABA at benzodiazepine-sensitive mutant ρ1 I307S/W328M GABA receptors in Xenopus oocytes. 23 It has also been reported that 6-MOF has immunomodulatory activity capable of suppressing NFAT-mediated T-cell activation. 24 The present experiments were conducted bearing in mind the effect of chronic ethanol on cognitive impairment and the pharmacological potential of flavonoids for improving cognition. Thus, the pharmacological effects of 6-MOF on chronic ethanol-induced cognitive deficits were investigated in a range of behavioral paradigms involving cognition, in parallel with neurochemical studies. The tests included locomotor activity, spatial working memory in the Morris water maze as well as the Y-maze, novel object recognition, socialization and nest-building behavior, while post-mortem monoamine levels were determined in the prefrontal cortex, hippocampus and striatum.

Methods
Animals
Adult male BALB/c mice (n = 6/group; 22-28 g) were acquired from the Veterinary Research Institute, Peshawar, Pakistan. Animals were kept under regular environmental conditions at a temperature of 22.0 ± 2.0°C on a 12/12 h light/dark cycle with ad libitum access to food and water. The ethical committee of COMSATS University Islamabad, Abbottabad campus, approved all experimental procedures under certificate number PHM.Eth/CS-M03-015-1106, in conformity with the guidelines of the Animals (Scientific Procedures) Act (1986) UK.

Experimental Protocol
In experiments involving chronic ethanol-induced cognitive impairment, 2.0 g/kg (15% w/v) aqueous ethanol was given daily by the oral route (p.o.) for 24 consecutive days. 25 6-MOF was given at previously established doses of 25, 50 and 75 mg/kg, 22 and donepezil was administered at a dose of 4 mg/kg. 26
Animals were divided into six groups: Group 1 received normal saline vehicle (10 mL/kg); Group 2 received ethanol (2.0 g/kg p.o.); Group 3 received ethanol + donepezil (4.0 mg/kg p.o.); Group 4 received ethanol + 6-MOF (25 mg/kg p.o.); Group 5 received ethanol + 6-MOF (50 mg/kg p.o.); Group 6 received ethanol + 6-MOF (75 mg/kg p.o.). 6-MOF or donepezil was administered 15 min before ethanol administration for 24 successive days. After 24 days of co-administration (ethanol + test compounds), treatment was withdrawn for the next 6 days. Animals were assessed for their cognitive performance in several behavioral models on days 1, 12, and 24, during abstinence (Day-26) and on the 7th day of the washout period. 25

Behavioral Activity Tests
Locomotor Activity Test (Open Field)
Locomotor activity was assessed in activity boxes (46 × 46 cm) 27,28 internally divided into four quadrants measuring 23 × 23 cm with floor line markings. 29 All animals were placed individually into the locomotor boxes, and activity was recorded by a video camera mounted 230 cm above the box. The number of lines crossed in 30 minutes was noted, and the data were logged. 30 During all experimental procedures, 70% ethanol was used to clean the apparatus thoroughly between recordings. 31

Measurement of Spatial Learning and Memory in the Morris Water Maze
The Morris water maze was employed to evaluate spatial learning and memory. The test was performed using a circular pool (120 cm in diameter and 60 cm in height) filled with water rendered opaque with milk powder. The pool was divided into four notional quadrants, and spatial cues with geometrical shapes were attached to the walls of the pool. Phase 1 (training): mice were trained for 5 days to find the location of a platform (13 cm diameter and 34 cm high) hidden 1 cm below the water level; the platform was placed in the same target quadrant during all training sessions. Phase 2 (trials): five trials per day were given with an intervening interval of 10 minutes. During each trial, mice were placed facing the wall in one of the four quadrants from a randomly selected starting point. Each animal was permitted 90 seconds to locate the hidden platform and was then allowed to sit on the platform for 5 seconds; animals failing to locate the hidden platform were gently placed on it for 20 seconds. Phase 3 (probe trial): on the test day, the platform was removed from the quadrant and mice were allowed to explore the maze for 90 seconds while being recorded by the video camera. Cognitive function was evaluated by recording the time spent in the target quadrant, the number of entries into the target quadrant and the number of platform location crossings by each animal. 32,33

Spontaneous Alternation Y-Maze
A Y-maze apparatus with three equal-length arms positioned at an angle of 120° (21 cm long × 8.5 cm wide × 40 cm high) was used. Each animal was positioned at the midpoint of the apparatus and permitted to explore all arms freely for 5 minutes. The time spent in each arm was recorded using the video camera, and between animals the apparatus was swabbed with 70% ethanol. In the analysis of spontaneous Y-maze activity, the number of alternations, the number of entries into each arm and the percentage alternation were recorded. Alternations were scored from the sequence of arm entries, and percentage alternation was calculated using the following formula. 34
% alternations = (total alternations / number of arm entries) × 100

Novel Object Recognition
A three-day protocol was followed using an open arena (60 cm × 50 cm × 40 cm). On day 1 (acclimatization phase), animals explored the experimental boxes for 10 minutes. On day 2 (training phase), animals were exposed to two novel objects for 10 minutes. On the test day, one familiar object was replaced with a novel object, and animals explored the objects for 10 minutes. During the experimental protocol, 70% ethanol was used to clean the boxes and objects. The time spent with each object was recorded by the video camera. 35

Socialization Test
Mice were habituated to the test environment for 30 minutes. A juvenile male was used as the presenter animal. A test involved a sampling phase in which a juvenile presenter was introduced to the test animal, allowed to interact for 5 minutes and then removed. After a 1 h interval, in the recognition phase, the same juvenile male or a novel juvenile male was introduced into the test cage for a 5-minute interaction. 36

Nest Building Behavior
A single animal was placed in each cage, provided with cotton nesting material (a 5 × 5 cm square) and left overnight. Animals were not disturbed during the nest-building period. The height and width of each nest were measured and recorded, and each nest was then scored according to the criteria of Kraeuter et al. 37

Neurotransmitter and Vitamin C Quantification Using HPLC
Sample Preparation
After behavioral experimentation, all animals were euthanized and different brain areas, i.e., the frontal cortex, striatum and hippocampus, were dissected on ice-chilled plates, weighed and stored at −80°C. During sample preparation, a Teflon-glass homogenizer (Ultra-Turrax® T-50) was used for tissue homogenization in 0.2% ice-cold perchloric acid at 5,000 rpm. The sample was then cold-centrifuged at 12,000 rpm (4°C) (DLAB Scientific), and the supernatant was filtered through a 0.45 µm filter (CNW Technologies) before introduction into the HPLC autosampler. 29

Chromatographic Conditions
Chromatographic analysis was performed using a Waters Alliance 2690 separation module with PDA/UV detector and autosampler (USA). A C18 stainless steel column (250 × 4.6 mm) (Waters XSelect® HSS, Ireland) with a 5 µm particle size was employed. The mobile phase comprised methanol and 20 mM monobasic sodium phosphate (5:95, v/v); detection was performed at 280 nm with isocratic elution. The flow rate was set at 0.5 mL/min, and the column was maintained at a temperature of 35°C. 29,38

Standard Preparation
For the preparation of standard stock solutions, 1.0 mg each of dopamine, noradrenaline and vitamin C was dissolved in 10 mL HPLC-grade water. The stock solution of each analyte was then diluted to give 5 concentrations (100-500 ng/mL) used for the calibration curve. These samples were placed in the HPLC autosampler, and a 20 µL volume was withdrawn for injection into the system by the software (Empower™). The calibration curve was then constructed by plotting the peak area of dopamine, noradrenaline and vitamin C (y) against the corresponding concentration (x) using linear regression analysis. 29
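As a sketch of the calibration step just described, the following Python snippet fits a linear calibration curve to five hypothetical peak areas and back-calculates an unknown concentration; all numbers are illustrative and not taken from our chromatograms.

```python
import numpy as np

# Hypothetical five-point calibration (100-500 ng/mL) for one analyte at 280 nm
conc = np.array([100, 200, 300, 400, 500])            # ng/mL (x)
peak_area = np.array([1520, 3060, 4490, 6080, 7510])  # arbitrary units (y)

slope, intercept = np.polyfit(conc, peak_area, 1)     # linear fit y = m*x + b
r = np.corrcoef(conc, peak_area)[0, 1]                # linearity check

# Back-calculate an unknown tissue sample from its measured peak area
unknown_area = 3900.0
print((unknown_area - intercept) / slope, "ng/mL; r =", round(r, 4))
```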
Statistical Analysis
Data were presented as mean ± standard error and processed with GraphPad Prism version 8 statistical software. One-way ANOVA followed by the post hoc Dunnett's test was applied. A value of p < 0.05 was taken as the threshold level of significance, and data were flagged as ***p < 0.001, **p < 0.01 and *p < 0.05.

Results
The Activity of Chronic Ethanol Treatment on Locomotor Activity (Open Field Test)
Ethanol (2.0 g/kg p.o.) was given alone for 24 days followed by 6 days of ethanol abstinence, and locomotor activity was assessed on days 1, 12 and 24, during abstinence (Day-26) and on day 7 (post-withdrawal). Significant hyperlocomotion was observed in the ethanol-treated animals on day 12, whereas locomotor activity was depressed during the 6 days of ethanol abstinence and on day 7 (post-withdrawal) (Figure 2).

The Activity of 6-Methoxyflavone or Donepezil on Chronic Ethanol-Induced Changes in Locomotor Activity
Ethanol and 6-MOF (25, 50 or 75 mg/kg) or donepezil (4 mg/kg) were coadministered daily for 24 days followed by 6 days of ethanol abstinence. Locomotor activity was tested on days 1, 12 and 24, then during abstinence (Day-26) and on day 7 (post-withdrawal). There was a significant elevation of ethanol-stimulated locomotion induced by 75 mg/kg of 6-MOF on day 12. During ethanol abstinence, the groups cotreated with all doses of 6-MOF up to day 24 expressed an increase in locomotor activity, as did these groups on the 7th protocol day (post-ethanol withdrawal). In contrast, the group co-administered donepezil with ethanol displayed a decrease in locomotion on protocol day 24, whereas during the abstinence period its locomotion was markedly increased (Figure 3).

The Activity of Chronic Ethanol Treatment on Novel Object Recognition
Ethanol (2.0 g/kg p.o.) was administered for 24 days followed by 6 days of ethanol abstinence, and novel object recognition was assessed on days 1, 12 and 24, during abstinence (Day-26) and on day 7 (post-withdrawal). On each test day, a familiar object was replaced with a novel object and animals were tested for object recognition memory for 10 minutes. A significant decrease in novel object exploration time was observed throughout treatment and upon withdrawal in ethanol-treated animals compared with those administered saline vehicle (Figure 4).

The Activity of 6-Methoxyflavone or Donepezil on the Chronic Ethanol-Induced Cognitive Deficit in the Novel Object Recognition Test
A significant increase in novel object exploration time was observed on protocol days 12 and 24, during abstinence (Day-26) and on the 7th day (post-withdrawal) in the groups that received ethanol combined with any of the three 6-MOF doses (25, 50 and 75 mg/kg). In the chronic donepezil plus ethanol treatment group, there was also an increased novel object exploration time during the whole protocol in comparison with the ethanol-alone treatment group (Figure 5).

The Activity of Chronic Ethanol Treatment on Morris Water Maze Performance
Ethanol (2.0 g/kg p.o.) was administered for 24 days followed by 6 days of ethanol abstinence, and Morris water maze performance was assessed on days 1, 12 and 24, during abstinence (Day-26) and on day 7 (post-withdrawal). Ethanol produced a statistically significant decrease in the time spent in the target quadrant (Figure 6A), the number of entries into the target quadrant (Figure 6B) and the number of platform location crossings (Figure 6C) throughout the entire protocol. Co-administration of 6-MOF with ethanol also augmented the % alternations in the Y-maze on protocol day 12 (75 mg/kg), at all doses on day 24 and post-withdrawal, but only at the two higher doses during abstinence. The chronic ethanol plus donepezil combination significantly increased % alternations throughout, except on day 24 (Figure 9C).
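The group comparisons reported here rest on one-way ANOVA followed by Dunnett's post hoc test against a single comparator group. A minimal sketch with simulated line-crossing counts is shown below; scipy.stats.dunnett requires SciPy 1.11 or later, and all values are invented for illustration.

```python
import numpy as np
from scipy.stats import f_oneway, dunnett  # dunnett: SciPy >= 1.11

rng = np.random.default_rng(4)
# Hypothetical open-field line crossings (n = 6 per group)
saline    = rng.normal(120, 15, 6)
ethanol   = rng.normal(150, 15, 6)
eth_mof25 = rng.normal(135, 15, 6)
eth_mof50 = rng.normal(130, 15, 6)
eth_mof75 = rng.normal(125, 15, 6)

# Omnibus one-way ANOVA across all five groups
print(f_oneway(saline, ethanol, eth_mof25, eth_mof50, eth_mof75))

# Dunnett's test: each cotreated group against ethanol alone
res = dunnett(eth_mof25, eth_mof50, eth_mof75, control=ethanol)
print(res.pvalue)   # one adjusted p-value per comparison
```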
The Activity of Chronic Ethanol Treatment on Socialization Behavior
Mice chronically treated with ethanol were tested for their socialization behavior on days 1, 12 and 24, during abstinence (Day-26) and on the 7th day post-withdrawal. There was a significant decrease in the time spent sniffing a novel juvenile animal throughout the whole course of the protocol (Figure 10).

The Activity of 6-Methoxyflavone or Donepezil on the Chronic Ethanol-Induced Cognitive Deficit Expressed in Socialization Behavior
Chronic co-administration of 6-MOF with ethanol reversed the ethanol-induced deficit in socialization behavior. Hence, treatment with all three doses of 6-MOF significantly increased the time spent sniffing novel juvenile mice at all stages of the protocol. Similarly, chronic donepezil in combination with ethanol had an equally pronounced effect on socialization behavior during the whole protocol (Figure 11).

The Activity of Chronic Ethanol Treatment on Nest-Building Behavior
Ethanol (2.0 g/kg p.o.) was given for 24 days followed by 6 days of ethanol abstinence. A significant decrease in the height of nests built by ethanol-treated animals was observed in comparison with the saline-vehicle group throughout the protocol (Figure 12A). Statistical analysis revealed that the width of the nests, indicating the amount of untouched material, was significantly increased in chronic ethanol-treated animals versus the saline-vehicle controls (Figure 12B). Moreover, as judged by the nest-building score, the quality of nest building was impaired compared with the control animals (Figure 12C). There was a considerable increase in the height of nests built by mice cotreated with ethanol plus all doses of 6-MOF or donepezil at each stage of the protocol (Figure 13A). Statistical analysis disclosed that the nest width, as an indication of the amount of untouched material, was decreased during the whole protocol by concomitant dosing of chronic ethanol with 6-MOF or donepezil versus ethanol alone (Figure 13B). Also, on evaluation by scoring, the quality of chronic ethanol-impaired nest building was improved by combination with 6-MOF or donepezil for the whole protocol duration (Figure 13C).

Quantification of Neurotransmitters and Vitamin C Using HPLC-UV
Action of Chronic Ethanol Treatment with 6-Methoxyflavone or Donepezil on Frontal Cortical Tissue Concentrations of Dopamine, Noradrenaline and Vitamin C
Chronic ethanol treatment induced a marked decrease in frontal cortical levels of dopamine, noradrenaline and vitamin C compared with the saline (vehicle) treated groups. However, combined treatment with 6-MOF (75 mg/kg) plus ethanol elevated the levels of dopamine and vitamin C, which were formerly suppressed by chronic ethanol. Additionally, all three doses of 6-MOF, as well as donepezil, manifestly reversed the repressive action of chronic ethanol on noradrenaline levels in the frontal cortex, and the highest dose of 6-MOF also increased the vitamin C level (Table 1). Comparable patterns were observed in the hippocampus and striatum, where cotreatment at least partially reversed the ethanol-induced suppression of dopamine and, in the striatum, modestly restored the vitamin C level (Table 3).

Discussion
Chronic ethanol has been reported to alter the behavioral and cognitive aptitude of both animals and humans, primarily by disrupting hippocampal-dependent learning and memory. 39 This cognitive decline is largely attributable to the ability of ethanol to cross the blood-brain barrier, inducing oxidative stress. 40,41 Studies have shown that ethanol withdrawal produces cognitive dysfunction by perturbing frontal cortical, striatal and hippocampal functions. 5,42,43
In the current study, an increase in spontaneous locomotor activity was observed on protocol days 12 and 24 of chronic ethanol administration. A similar treatment schedule has been shown to induce robust ethanol sensitization, though in the present case a further ethanol challenge was not presented to expose the expression of sensitization. 44 However, a decrease in locomotor activity was noted during ethanol abstinence and on the 7th day post-withdrawal compared with saline controls. During long-term ethanol withdrawal, it has been reported that the inhibitory action of GABA is diminished due to impaired GABA A receptor function, leading to anxiety-like behavior and a downturn in locomotor activity. 45 In contrast, our findings showed that 6-MOF reversed chronic ethanol-induced hyperlocomotion after 12 days of combined treatment. Concerning this, 6-MOF possesses an inherent GABA A agonist effect, 23 and it has also been reported that an increased release of GABA in the nucleus accumbens results in hyperlocomotion that is mediated by inhibition of dopaminergic pathways in the ventral tegmental area and nucleus accumbens. 46 During abstinence and post-withdrawal from chronic ethanol plus 6-MOF or donepezil, there was augmented locomotor activity compared with chronic ethanol treatment alone. Thus, enhanced GABA-ergic function or anticholinesterase activity may well have been a factor in counteracting ethanol withdrawal hypolocomotion. Additionally, long-term ethanol ingestion results in altered activity of dopaminergic and noradrenergic pathways, which affects the phosphorylation and trafficking of tyrosine kinase receptor B (TrkB) via cyclic AMP stimulation. Studies in humans and animals have suggested that there is a convincing link between memory formation in the hippocampus and dopaminergic neuromodulation. 47 It is also relevant to mention that in our study both 6-MOF and donepezil increased ethanol-suppressed noradrenaline levels in the hippocampus. Moreover, postsynaptic activation of 5-HT 1A receptors has been shown to elicit an increase in extracellular noradrenaline levels in the hippocampus, causing psychomotor stimulation and hyperlocomotion. 48 Additionally, donepezil is an AChE inhibitor with a neuroprotective effect, and it has been shown to improve cognition in mild, moderate and severe Alzheimer's disease by both cholinergic and non-cholinergic mechanisms. 49,50 Thus, increased hippocampal noradrenaline levels provide a plausible mechanism underlying the enhancement of ethanol withdrawal locomotor activity by 6-MOF and donepezil. Chronic ethanol consumption results in anterograde amnesia, mitigating the acquisition of new information through an impaired processing capability accompanied by a tendency to forget rapidly. 51 This feature is commonly found when there are lesions in the hippocampus, and ethanol tends to decrease hippocampal synaptic plasticity, promoting an inability to store information before it is consolidated in long-term memory. 51,52 Previous findings employing the novel object recognition test have been heavily influenced by lesions in both the hippocampus and cortex. 53 Hence, in rats and primates the hippocampus is associated with the perirhinal and prefrontal cortical areas, which play a role in recognition memory tasks by activating the executive center. 53,54 In our study, chronic ethanol administration in mice compromised object recognition memory, and this accords with a similar outcome in rats. 51
Furthermore, we demonstrated that sustained ethanol consumption decreased the time spent with a novel object not only during the period of treatment but also during withdrawal. Taking this into account, the medial prefrontal cortex is an important area responsible for a range of functions, not least of which are episodic and contextual memory formation. 55 It has been demonstrated that ethanol-induced impairment of spatial memory may be ascribed to a reduction in the release of hippocampal glutamate 56 in addition to attenuated BDNF levels. 57 In this context, hippocampal glutamate release is modulated by BDNF-TrkB signaling, 58 and this has been shown to critically impact long-term potentiation and long-term memory formation. 59 It is also noteworthy that BDNF is essential for maintaining noradrenergic tone in catecholaminergic neurons. 59 Furthermore, ethanol consumption not only decreases levels of nerve growth factor but also inhibits NGF-mediated signaling. 60 Flavones can act as TrkB receptor agonists, an example being 7,8-dihydroxyflavone, which improves object recognition memory in mice. 61 Additionally, flavonoids affect the regulation of NGF, leading to cellular phosphorylation of hippocampal proteins aiding memory formation. 62 It is a distinct possibility, therefore, that 6-MOF similarly increases novel object exploration time via modulation of BDNF-TrkB receptors and stimulation of NGF release. Monoamines in the frontal cortex and hippocampus are implicated in mnemonic processes such as learning, memory consolidation, formation and retrieval, 63 while ethanol consumption disrupts dopaminergic and noradrenergic pathways. 47 Our experiments disclosed that donepezil or 6-MOF treatment with chronic ethanol raised frontal cortical levels of noradrenaline, while only 6-MOF increased hippocampal dopamine. Both of these consequences may be instrumental in the increased time spent with a novel object and the overall enhancement of object recognition memory. On top of this, chronic ethanol causes oxidative stress 40 through lipid peroxidation, 41,64 whereas vitamin C has antioxidant properties. Consequently, vitamin C preserves antioxidant mechanisms, improving cognition and memory, 65,66 in addition to reversing ethanol-generated apoptotic neuronal loss, neuroinflammation and oxidative stress. 67 In our study, 6-MOF at the highest coadministered dose did evoke a moderate but significant reversal of the chronic ethanol-attenuated frontal cortical vitamin C level, which may have contributed to offsetting some oxidative stress.
The Morris water maze is a frequently used behavioral test to assess hippocampus-centered spatial reference and working memory. 68 It has been documented that lesions in the dorsal hippocampus and striatum, 69 along with the cerebral cortex, basal forebrain and cerebellum, impair Morris water maze performance. 70 Concerning this, it is notable that chronic ethanol consumption causes lesions in the hippocampus and cerebral cortex, possibly by increasing both AChE activity and oxidative stress. 71 The Y-maze paradigm is used for assessing spatial and short-term working memory and learning. The model evaluates spontaneous alternation behavior, which involves a functional association between the prefrontal cortex and hippocampus. Lesions in these key areas result in a deterioration of Y-maze performance, 72 and prolonged consumption of ethanol also produces prefrontal cortical dysfunction. 73 Our findings revealed substantially decreased performance by chronic ethanol-treated animals in the Morris water maze and Y-maze. A broad range of neurotransmitters, including dopamine, acetylcholine, noradrenaline, serotonin and the cotransmitter glutamate, is involved in memory formation. 74,75 However, the spatial choices of animals in the Morris water maze are altered by changes in GABAergic, cholinergic and monoaminergic (particularly serotonergic and noradrenergic) neurotransmission, 76 while Y-maze performance is impaired by dopaminergic D1 blockade in the medial frontal cortex. 77 Ethanol consumption has been reported to disrupt dopaminergic as well as noradrenergic transmission in addition to causing oxidative stress. 47 Dopamine D1-like receptors, when activated in the hippocampus, enhance memory formation, and monoamine oxidase-B (MAO-B) inhibitors have been reported to raise extracellular dopamine levels in this brain region. 78 Intriguingly, O-methylated flavonoids inhibit MAO-B, 79 which may contribute to increased dopamine levels in the hippocampus and the positive impact of 6-MOF on spatial memory in the Y-maze. Correspondingly, augmented brain noradrenaline levels improve cognition, 80 and in our study chronic ethanol plus 6-MOF produced a specific frontal cortical noradrenaline upsurge likely to improve Y-maze performance. Spatial learning, memory and neurogenesis in the hippocampus are compromised by depletion of hippocampal noradrenaline levels. 81 Regarding this assertion, in our study 6-MOF increased ethanol-suppressed hippocampal noradrenaline levels, which would be expected to improve Morris water maze performance. Vitamin C has an antioxidant effect that tends to oppose memory-related impairment in several ailments, and it improves memory chiefly by conserving the hippocampal antioxidant mechanism. 65 We found that chronic ethanol administration diminished vitamin C levels in all three brain areas examined, and in the chronic ethanol/6-MOF combination group only a marginal, though significant, reversal was observed in the frontal cortex. This may have had some bearing on the antioxidant status of the frontal cortex in ameliorating Y-maze performance. Oxidative stress and the generation of reactive oxygen species constitute one of the underlying mechanisms by which chronic ethanol impairs spatial working memory. 82 It has been shown that the prefrontal cortex and hippocampus are primarily involved in the modulation of spatial learning and memory, and oxidative stress in these key brain areas results in spatial memory impairment. 83
Vitamin C, a natural antioxidant, has been documented to facilitate hippocampal antioxidant enzyme activity, thereby improving cognitive function. 84 We found that 6-MOF treatment counteracted the ethanol-induced repression of frontal cortical vitamin C levels, and this may have contributed to improved Morris water maze performance. The socialization test is commonly used for the quantification of short-term memory and is primarily centered on the innate preference of rodents to explore unknown conspecifics more strongly than familiar ones. 85,86 Socialization is critical for survival, social collaboration, reproduction and adaptation of social behavior. 86 The dorsal hippocampal CA1 region and frontal cortical area are involved in the sensory system and social interaction behaviors, whereas the amygdala and somatosensory areas appear to be more concerned with behavioral regulation. 87 Chronic ethanol exposure modifies neuronal function in the hippocampus, medial prefrontal cortex and dentate gyrus. 1 Such altered neuronal function would tend to impair socialization, as confirmed by our finding of a decrease in social interaction (time spent sniffing novel juvenile mice) in chronic ethanol-treated animals. Flavonoids have been reported to ameliorate cognitive function in humans, 88 and chronic treatment with ethanol plus 6-MOF partially reversed the ethanol-impaired socialization behavior in our study. In this context, noradrenaline acting on β-adrenoceptors and dopamine operating through D1/D5 receptors both play essential roles in social recognition memory. 86 This was corroborated by our result that chronic ethanol plus 6-MOF treatment raised the levels of noradrenaline and dopamine in the frontal cortex, while the dopamine concentration was boosted in the hippocampus, thereby improving socialization. Nest-building behavior in mice has been utilized as an assay for affective states as well as sensorimotor function, and these entities are modified even during acute ethanol withdrawal. 89 The hippocampus plays an important role in nest building, and hippocampal lesions lead to impairment of this behavior. [90][91][92] In addition to acute ethanol withdrawal, abstinence from chronic ethanol predictably modulates nest-building behavior, 93 and this is substantiated by our results. Nest building entails orofacial and forelimb measures, which are dopamine-dependent, 94 and it has previously been demonstrated that flavonoids improve deficits in nest building, social interactive behaviors and cognitive function. 95 In our study, a treatment combination of ethanol with either donepezil or 6-MOF ameliorated the ethanol-impaired nest height, width and overall score. Both dopaminergic and noradrenergic pathways play an important role in nest building, and deficiencies in these systems tend to exacerbate impairment of the behavior. 94 Thus, dopaminergic dysfunction and decreased dopamine levels in the striatum result in impaired nest-building activity; however, when dopamine levels are restored, nest building is reinstated. 94 In our study, chronic ethanol treatment with 6-MOF increased levels of noradrenaline and dopamine in the frontal cortex and of dopamine in the hippocampus and striatum, which may well underlie the restored nesting behavior.
Conclusion
We have shown that chronic ethanol treatment suppressed locomotor activity in addition to impairing cognitive tasks, which included novel object recognition, performance in the Morris water maze and Y-maze, socialization and nest-building behavior, throughout a 24-day protocol and during subsequent withdrawal. These behavioral deficits were at least partially restored by co-administration of ethanol plus 6-MOF or donepezil, as were the ethanol-induced deficits in frontal cortical dopamine and noradrenaline, hippocampal and striatal dopamine, and frontal cortical vitamin C by 6-MOF cotreatment. Flavonoids are not only AChE inhibitors but are also thought to be neuroprotective through protein kinase and lipid kinase signaling cascades, preserving neuronal Ca2+ homeostasis, binding ATP sites as well as BDNF-TrkB receptors, and regulating NGF. 96 6-MOF is a flavone and, after oral administration, is well absorbed from the intestine and can cross the blood-brain barrier to impart neuroprotective activity. 97 Accordingly, it may be postulated that it conferred neuroprotection during ethanol-induced cognitive decline by one or more of its recognized mechanisms. It might consequently be proposed that 6-MOF would be useful not only in the treatment of ethanol withdrawal severity but also in the management of ethanol-induced memory impairment. Taking this into account, it is worth remembering that flavonoids are generally considered to be safe. 98

Limitations of the Study
This study involved behavioral and biochemical techniques, and more specific investigative work at the molecular level is needed to explore the underlying mechanism(s) of 6-MOF action on cognitive decline.
Anthropometry, bone mineral density and risk of breast cancer in premenopausal and postmenopausal Saudi women

Introduction: Anthropometry and bone mineral density are linked to hormonal imbalance, which plays a possible role in breast carcinogenesis. The current study was designed to explore the relationship between selected anthropometric and bone mineral density parameters and the risk of breast cancer in premenopausal and postmenopausal Saudi women.

Material and methods: A cross-sectional study was carried out among premenopausal (n = 308) and postmenopausal (n = 148) women at two medical cities in Riyadh, Saudi Arabia from May 2015 to June 2016. Selected anthropometric measurements were obtained from 456 women, 213 of whom had breast cancer. Bone mineral density was also measured using dual-energy X-ray absorptiometry.

Results: Greater waist circumference was significantly associated with a higher breast cancer risk in premenopausal women (OR = 1.02, p = 0.03) but not in postmenopausal women. Greater triceps skinfold thickness was significantly associated with a higher risk of breast cancer in both premenopausal (OR = 1.06, p = 0.001) and postmenopausal (OR = 1.06, p = 0.001) women. However, bone mineral density was not significantly associated with breast cancer risk in either group of participants.

Conclusions: Breast cancer risk was significantly associated with waist circumference and triceps skinfold thickness in premenopausal women and with only triceps skinfold thickness in postmenopausal women.

Introduction
Breast cancer is the most common malignancy reported in females worldwide and is considered the leading cause of cancer mortality among women globally [1]. In 2018, about two million new cases of breast cancer in females were expected to be diagnosed worldwide, accounting for almost one-fourth of all cancer cases among women [2]. Fortunately, several developed countries have experienced a fall in breast cancer morbidity and mortality during the past few decades, partly attributable to increased breast cancer screening and optimal management [3][4][5]. However, the incidence rates of this serious disease continue to rise rapidly, especially in countries that historically had low incidence rates [1]. According to the latest Saudi cancer incidence report, breast cancer was the most frequent malignancy among Saudi females newly diagnosed with cancer in 2015, accounting for 16.7% of all cancers reported among both genders combined and 30.1% of those among females of all ages [6]. Although Saudi Arabia has relatively low breast cancer incidence rates at the global level, the available data show that breast cancer incidence rates among Saudi women are rising with time [7]. The high burden of breast cancer at the global level has inspired an extensive literature exploring the risk factors for this disease. Several factors have been shown to be commonly linked with an elevated breast cancer risk in women, such as factors related to genetics, menstruation, reproduction, exogenous hormone intake, and nutrition [8]. Nevertheless, there are emerging factors thought to contribute to breast cancer risk, such as anthropometry and bone mineral density (BMD) [9,10]. Prior research has suggested a possible link between breast cancer risk and anthropometric measurements such as body mass index (BMI).
Although the outcomes of different studies regarding the nature and magnitude of such risk-factor relationships are still controversial, many studies have reported some association and indicated that this association appears to be interrelated with menopausal status and connected with the levels of steroid hormones in the body [9]. Another evolving risk factor is BMD. Numerous studies have shown that high BMD might be associated with higher breast cancer risk, mainly in postmenopausal females due to long-term estrogen exposure. However, this relationship is still inconclusive and needs further investigation [10]. Natural menopause is defined as the time when female menstrual cycles cease permanently (recognized after one year of amenorrhea) due to age-related hormonal changes in the reproductive system; it usually occurs at age 49 to 52 years [11,12]. Attention to women's health should begin at a young age to minimize health problems at older ages [13]. Women experience several somatic and mental changes at menopause that negatively affect their health status and quality of life [14,15]. Obesity, and central obesity in particular, are common disturbances associated with menopause [16,17]. The key cause of the weight gain and body composition changes associated with menopause appears to be the rapid reduction in body levels of estrogen. In the female body, estrogens stimulate fat accumulation in the subcutaneous tissue, mostly in the femoral and gluteal areas. In contrast, androgens stimulate fat accumulation in the abdominal region. Consequently, the relative hyperandrogenemia concurrent with the lack of estrogens during menopause contributes to a metabolically unfavorable fat redistribution from a gynoid to an android pattern and thus to the development of central obesity [18]. Another health problem associated with menopause in women is bone loss. Reduction in BMD occurs significantly during late perimenopause and proceeds at a similar rate during the early postmenopausal years [19,20]. Besides the effect of declining estrogen levels at menopause, current evidence suggests that bone loss during the menopausal transition could be linked with a rise in serum follicle-stimulating hormone (FSH) through stimulation of osteoclastogenesis, leading to bone resorption by osteoclasts [21]. Generally, there are differences between premenopausal and postmenopausal women in terms of breast cancer risk factors [22]. Moreover, there are variations in the relationship between the common risk factors and breast cancer occurrence among different ethnicities [23]. Furthermore, most of the existing literature on different aspects of breast cancer risk factors has been reported for populations from developed countries, whereas such data from Saudi Arabia seem either scattered or not publicly available [7]. Consequently, discovering risk factors associated with breast carcinogenesis among Saudi women will provide guidance for the strategies needed to reduce the burden of this serious disease. Therefore, the objective of the current study was to explore relationships between selected anthropometric and BMD parameters and breast cancer risk among Saudi women after stratification of study subjects based on menopausal status. The study question is whether there are any relationships between anthropometric and BMD parameters and breast cancer risk among premenopausal and postmenopausal Saudi women.
We hypothesized that anthropometric and BMD parameters could be associated with breast cancer risk among Saudi women, and that this association could differ depending on menopausal status.

Material and methods

Study design and participants

The design of the current study is cross-sectional. In total, 456 women joined the present study between May 2015 and June 2016 from King Saud Medical City (n = 120) and King Fahad Medical City (n = 336) in Riyadh, the capital city of Saudi Arabia. The study subjects were chosen using systematic random sampling from women who visited the surgical clinics to undergo breast cancer screening using mammography. The diagnosis of breast cancer was made by oncologists in the above-mentioned hospitals. The inclusion criteria were: Saudi women aged 20-65 years, not pregnant or lactating at the time of recruitment, and not diagnosed with any other malignancy. The recruited subjects had not received any type of therapy before or at the time of recruitment and data collection. The study subjects provided written informed consent in their native language prior to enrollment, in line with the Helsinki Declaration. Ethical approval for the study protocol was obtained from the Institutional Review Boards of the King Saud Medical City and the King Fahad Medical City, Riyadh, Saudi Arabia.

Descriptive data collection

Descriptive data were collected through personal interviews using a specific questionnaire administered by trained dietitians. These data include selected sociodemographic characteristics (age, education level, employment status, and marital status) and selected lifestyle and maternal characteristics (sunlight exposure and tobacco smoking), as well as menopausal status. Frequent sunlight exposure was defined as direct exposure of at least 20% of the body surface area to sunlight at least three times weekly. Menopausal status was determined by the presence or absence of menses throughout the previous year or by hysterectomy; postmenopausal women were those with cessation of menstrual periods for at least twelve consecutive months. Health characteristics of participants were collected from their medical records.

Anthropometric measurements

Anthropometric measurements were collected by trained dietitians using standardized methods. They include body height, body weight, circumferences of waist, hip, and mid-upper arm, skinfold thickness at the triceps, and body composition (body protein, fat, water, and mineral percentages). Body height was measured with a stadiometer to the nearest 0.1 cm and body weight with a calibrated weight scale to the nearest 0.1 kg. Body mass index was obtained by dividing weight (kg) by the square of height (m²). Participants were considered obese when BMI was 30 or higher. Waist, hip, and mid-upper arm circumferences were measured with a non-stretchable measuring tape to the nearest 1 mm, and waist circumference was divided by hip circumference to obtain the waist-hip ratio. Triceps skinfold thickness was measured in duplicate on the left arm with a calibrated skinfold caliper to the nearest 1 mm. Finally, body composition, including body protein, fat, water, and mineral percentages, was measured using a body fat analyzer (IOI 353, Danilsmc Co., Ltd, South Korea).
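The derived indices in this section are simple arithmetic, and a minimal sketch may help make the cut-offs concrete. This is not the study's own analysis code (which is not published), and the example values are hypothetical.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

def is_obese(bmi_value: float) -> bool:
    """Obesity cut-off used in the study: BMI of 30 or higher."""
    return bmi_value >= 30.0

def waist_hip_ratio(waist_cm: float, hip_cm: float) -> float:
    """Waist circumference divided by hip circumference."""
    return waist_cm / hip_cm

# Hypothetical participant: 80 kg, 1.60 m tall, waist 98 cm, hip 110 cm.
b = bmi(80.0, 1.60)
print(round(b, 1), is_obese(b))                # 31.2 True
print(round(waist_hip_ratio(98.0, 110.0), 2))  # 0.89
```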
Measurement of bone mineral density

BMD was measured for all study subjects using dual-energy X-ray absorptiometry (DXA) (Lunar Prodigy, GE Healthcare, United States). The DXA scans were performed in the Department of Radiology at the King Saud Medical City and the King Fahad Medical City, Riyadh, Saudi Arabia. Two sites were selected to measure BMD: the right hip and the lumbar spine (L1 to L4). BMD values of each participant were automatically compared to ideal BMD, and T-score values (standard deviations from the mean for young adults) were given. The bone health status was then diagnosed by radiologists and classified according to WHO criteria as follows: normal (BMD T-score of -1.0 or higher), osteopenia (BMD T-score between -1.0 and -2.5), or osteoporosis (BMD T-score of -2.5 or lower).

Statistical analysis

SPSS version 23 was used to complete the data analysis, which was carried out after study subjects were stratified based on menopausal status. Categorical variables were given as frequencies (%) and analyzed with the χ2 test. Continuous variables were given as means (SD); their normality was investigated using the Shapiro-Wilk test, and they were analyzed with one-way ANOVA, with the Tukey post hoc test used to locate significant differences. Univariate logistic regression analysis was performed to detect the factors which might be related to breast cancer risk. Differences were considered statistically significant when p < 0.05.
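The univariate step can be illustrated with a short sketch. The study itself used SPSS version 23; the Python/statsmodels version below is only an analogue, and the synthetic data and variable names are hypothetical. The odds ratio reported in the tables is the exponential of the fitted coefficient.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 308                               # premenopausal sample size
waist = rng.normal(100, 15, n)        # waist circumference in cm (synthetic)
case = rng.integers(0, 2, n)          # 1 = breast cancer, 0 = cancer-free

# Univariate model: a single predictor plus an intercept.
X = sm.add_constant(waist)
fit = sm.Logit(case, X).fit(disp=0)

odds_ratio = np.exp(fit.params[1])    # OR per 1-cm increase in waist
p_value = fit.pvalues[1]
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")  # significant if p < 0.05
```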
Results

Four hundred and fifty-six Saudi women (308 premenopausal and 148 postmenopausal) participated in the current study. Selected sociodemographic, lifestyle, maternal and health characteristics of the subjects are given in Table I. Part of the participants had at least a college degree, whereas the education level of the rest did not exceed high school. Similarly, 42.5% and 15.5% of premenopausal and postmenopausal participants, respectively, were employed. In addition, most of the premenopausal (78.2%) and postmenopausal (72.3%) women were married, while the remaining women were unmarried, including single, divorced and widowed women. Frequent sunlight exposure was reported among 73.4% and 70.9% of premenopausal and postmenopausal subjects, respectively. Moreover, tobacco smoking was reported by only 3.9% and 1.4% of premenopausal and postmenopausal subjects, respectively. The vast majority of participants had a history of pregnancy and lactation. Finally, some participants had been diagnosed with certain endocrine diseases, including diabetes, hypothyroidism and hyperthyroidism.

An analysis of anthropometric and BMD parameters after stratification according to menopausal status revealed a few differences between patients with breast cancer and cancer-free participants in both premenopausal and postmenopausal women (Tables II and III). For premenopausal women, significant differences were found in waist circumference and triceps skinfold thickness: the mean waist circumference in breast cancer patients (102.2 ±14.0) was significantly higher than that in cancer-free women (98.4 ±16.0, p = 0.029), while the mean triceps skinfold thickness in women with breast cancer (25.8 ±9.7) was significantly lower than that in cancer-free women (30.6 ±8.3, p = 0.001). For postmenopausal women, a significant difference was found only in triceps skinfold thickness; the mean in women with breast cancer (23.4 ±10.3) was significantly lower than that in cancer-free women (28.9 ±8.6, p = 0.001).

Two anthropometric measurements were found to be significantly correlated with breast cancer risk in premenopausal subjects (Table IV). Greater waist circumference and greater triceps skinfold thickness were significantly linked with a higher breast cancer risk (odds ratio [OR] = 1.02, p = 0.03 and OR = 1.06, p = 0.001, respectively). A higher waist-hip ratio was correlated with a higher breast cancer risk, but not significantly (OR = 1.53, p = 0.07). In postmenopausal women, triceps skinfold thickness was the only anthropometric measurement significantly linked with breast cancer risk: greater triceps skinfold thickness was significantly correlated with a higher breast cancer risk (OR = 1.06, p = 0.001; Table V). On the other hand, the BMD parameters used were not significantly linked with breast cancer risk in either premenopausal or postmenopausal participants. Interestingly, the mean waist circumference of osteoporotic patients was significantly different (p < 0.05) from that of participants with normal bone health status in both premenopausal and postmenopausal women (Figure 1). Similarly, the mean triceps skinfold thickness of osteoporotic patients was significantly different (p < 0.05) from that of participants with normal bone health status in premenopausal women (Figure 2).

Discussion

There is substantial inconsistency among different studies in results regarding the connection between anthropometry and BMD and breast cancer risk. Furthermore, this association tends to vary depending on certain population characteristics, such as ethnicity and menopausal status [9,10]. Thus, to capture the most sensitive estimation of risk factor association, research needs to stratify analyses based on these characteristics. The current study is the first to investigate the possible relationship between anthropometry and BMD and breast cancer risk in Saudi women.

Anthropometry is among the few modifiable breast cancer risk factors and is considered crucial in breast cancer etiology [9]. However, the connection between anthropometry and breast cancer risk is still debated in the literature and is influenced by numerous aspects, including ethnicity, reproduction, lifestyle, and menopausal status [24]. Obesity generally influences breast cancer incidence by generating metabolic and hormonal abnormalities, especially in persons with abdominal obesity. Abdominal obesity is linked to insulin resistance. Elevation of the insulin level inhibits hepatic production of sex hormone-binding globulin, raises levels of leptin, and reduces adiponectin levels. Additionally, insulin modulates vascular endothelial growth factor expression. The combined effect of these substances accelerates cell division and induces synthesis of transcription factors, which promotes mammary carcinogenesis [25]. Our data did not reveal any correlation between BMI or obesity and the risk of breast cancer in either group of subjects, contrary to several previous studies [9,24]. However, the current study revealed a significant correlation between waist circumference and the risk of breast cancer in premenopausal women; a similar finding was reported previously [9]. In contrast, the impact of waist circumference on breast cancer risk is not observed in our results for postmenopausal subjects, unlike several previous studies.
This finding might be related to a general tendency in elderly women toward central obesity, caused by a disturbance in estrogen levels, unhealthy dietary habits and a sedentary lifestyle [26]. Interestingly, our data revealed that triceps skinfold thickness was significantly linked with breast cancer risk in both premenopausal and postmenopausal subjects; another study reported the same result only in premenopausal women [26]. Skinfold thickness is a measure of subcutaneous fat that is used to predict the total amount of body fat. Since adipose tissues have a crucial effect on the production of female sex steroid hormones, general obesity causes an elevation in estrogen and androgen levels. These hormones act on breast cells as mitogenic agents, contributing to a higher breast cancer risk [27].

Bone mineral density is considered a crucial parameter for evaluating bone quality and identifying people with osteoporosis, a disease that commonly strikes women, particularly after menopause, owing to greater bone loss caused by lower estrogen levels [10,28]. Notably, high BMD is a lifetime marker of continued exposure to estrogen, which controls bone turnover by suppressing bone resorption and activating specific hormones that stimulate bone formation [29]. Furthermore, elevated BMD values have been reported to be related to a higher breast cancer risk, although this relationship is still uncertain [30]. However, no significant correlation was detected in this study between the BMD parameters used and the risk of breast cancer in premenopausal and postmenopausal subjects. Similar null findings were also observed in a recent long-term follow-up study on postmenopausal women and in a recent dose-response meta-analysis [31,32].

Interestingly, BMD may be linked with certain factors such as obesity [33]. During the menopausal transition, estrogen levels drop gradually, causing a loss of bone mass. The situation might be different among obese women due to higher estrogen exposure, which contributes to down-regulating bone resorption through restraint of osteoclasts [29]. Furthermore, in obese women who have had breast cancer, the outcome is improved by a high androgen level, which has positive effects on bone tissue; androgens act either after enzymatic conversion to estrogen or through direct binding to androgen receptors [34]. Nevertheless, abdominal obesity causes metabolic complications and low-grade inflammation that could harm bone health [33]. Overall, all of these connections are still debatable and remain open areas for further scientific research.

Finally, the present study is limited by sampling bias, given the study design. However, it still provides valuable information about the relationships between selected anthropometric and BMD parameters and breast cancer risk among premenopausal and postmenopausal Saudi women.

In conclusion, our study revealed that the risk of breast cancer was significantly associated with waist circumference and triceps skinfold thickness in premenopausal women and with only triceps skinfold thickness in postmenopausal women. Moreover, our results do not support any link between BMD and the risk of breast cancer. In light of the contradictions in the available data about the connection between anthropometric and BMD parameters and the risk of breast cancer, a large trial is required, which may allow further understanding of these associations and their mechanistic pathways.
Disentangling the Galaxy at low Galactic latitudes

We have used the field stars from the open cluster survey BOCCE to study three low-latitude fields imaged with the Canada-France-Hawaii telescope (CFHT), with the aim of better understanding the Galactic structure in those directions. Thanks to the deep and accurate photometry in these fields, they provide a powerful discriminant among Galactic structure models. In the present paper we discuss whether a canonical star count model, expressed in terms of thin and thick disc radial scales, thick disc normalization and reddening distribution, can explain the observed CMDs. Disc and thick disc are described with double exponentials, and the spheroid is represented with a De Vaucouleurs density law. In order to assess the fit quality of a particular set of parameters, the colour distribution and luminosity function of the synthetic photometry are compared to those of target stars selected from the blue sequence of the observed colour-magnitude diagrams. Through a Kolmogorov-Smirnov test we find that the classical decomposition into halo and thin/thick disc is sufficient to reproduce the observations; no additional population is strictly necessary. In terms of solutions common to all three fields, we have found a thick disc scale length that is equal to (or slightly longer than) the thin disc scale.

INTRODUCTION

In order to reconstruct a coherent picture of our Galaxy, star count models typically exploit the colour-magnitude diagrams (CMDs) from several lines of sight. The primary goal is to constrain the structure and relative strength of the various Galactic components. In addition, other quantities such as the star formation rate (SFR), the initial mass function (IMF), the chemical composition, and the reddening laws are also tested. Despite the pioneering successes by Bahcall in the '80s (see e.g. Bahcall & Soneira 1984) and the extraordinary amount of precise data available today, many aspects of the Galactic structure remain ambiguous. The number of Galactic components (halo, bulge, disc, thick disc, etc.), their chemical composition, and their origin are widely debated. Recent large-scale surveys (e.g. SDSS, 2MASS, QUEST) have detected the presence of substructures in the outer halo, which are taken to be the remnants of disrupted galaxies. For the disc structures, while there is consensus that most of the thin disc population has a dissipative history, the thick disc origin remains contentious. There are (at least) three main processes now proposed to be responsible for thick disc formation: 1) external origin: the stars are accreted from outside, during merging with satellite galaxies; 2) induced event: the thin disc has been puffed up during close encounters with satellite galaxies; 3) evolutionary event: the thick disc settled during the collapse of the proto-galactic cloud, before the thin disc formation.

Given this uncertain scenario, it is intriguing to learn about more or less pronounced sequences (see e.g. Conn et al. 2007) crossing the CMDs at low Galactic latitudes. However, fitting these features in a self-consistent scenario is rather challenging. For instance, an incorrect metallicity and a complex reddening distribution can both conspire to bias results. Moreover, in order to detect a possible stellar over-density, it is essential to have at least a rough idea of the underlying Galactic structure. In other words, one must know how the "average" Galactic CMD should look, especially close to the Galactic plane.
A way out could be to observe symmetrical directions relative to the plane (see e.g. Conn et al. 2007): invoking a north-south symmetry, the observed CMDs should indicate the presence of a bona fide over-density. However, this option is fraught with uncertainties as well: is the Galaxy symmetrical? Is the Galactic stellar disc warped (see e.g. López-Corredoira et al. 2007)? Can asymmetrical reddening distributions or stellar chemical gradients mimic asymmetrical star counts?

This paper discusses the capability of a Galactic synthesis model to interpret the star counts at low Galactic latitudes. Typically, star count models create a main sequence template and attempt to recover the underlying distribution. We make use of an alternate scheme in which we try to translate the current knowledge of the Galactic populations (thin disc, thick disc and halo) into synthetic CMDs, and see if (and to what degree) they are compatible with the observed CMDs. We try to answer the following question: do the many uncertainties on the Milky Way structure allow the observed CMDs to be explained without invoking anomalies? Our method does not produce unique scenarios, and furthermore, we argue that one cannot generally infer unique results. Taking advantage of the deep and wide-field photometry acquired with the CFH telescope, whose original targets were open clusters close to the Galactic plane (Kalirai et al. 2001a,b,c, 2007), we are sensitive to disc structures for several kpc before being dominated by the halo.

These low-latitude regions are often avoided by star count analyses because of their high obscuration. Hence, the published results suffer from a bias: most investigations are devoted to the study of the disc scale heights and the halo structure, information available at intermediate to high Galactic latitudes, whilst the disc scale lengths are often neglected. The results we find in the literature are extremely variable, ranging from 2 kpc to 5 kpc for the thick disc scale length. Some of these studies provide evidence for a thick disc/thin disc decomposition with similar scale lengths, while others do not. For instance, Robin et al. (1996) and others find 2.5 kpc for the thin disc and 2.8 kpc for the thick disc; Ojha (2001) finds a thin disc scale length of 2.8 kpc and a thick disc of 3.7 kpc; still others find a thick disc scale length larger than 4 kpc. From edge-on disc galaxies, Yoachim & Dalcanton (2006) find support for thick discs larger than the embedded thin discs, and Parker et al. (2003) argue that the thick disc is not axisymmetrical.

The main issue is whether the thick disc is an independent structure. Although chemical investigations indicate a different α-elements history for the thick disc, suggesting it is a separate component, most of these studies must assume a well defined kinematical signature, neglecting stars with intermediate kinematics. A marked scale height difference between the two discs (250 pc versus 1 kpc), a well-established result, does not exclude a heating origin. The radial scales could in principle distinguish among different formation scenarios: N-body simulations suggest that a heating mechanism can increase the scale height of a population, but it hardly produces a longer scale length.

The paper is organized as follows. First we introduce the data in section 2. Section 3 gives an overview of the method.
In sections 4 and 5, the star counts in each direction are described in terms of thin and thick disc, and the reddening distribution along the line of sight is determined. In section 6, we assess the implications of our findings.

DATA

The data used in this study are in three low-latitude fields, observed with the CFH telescope; for a complete description of the observations and reductions see Kalirai et al. (2001a,b,c). These data, which were originally obtained to study cluster white dwarfs, represent very deep windows on the thin and thick disc. We have selected these three fields out of the twenty currently available from the BOCCE (Bologna Open Clusters Chemical Evolution) project (Bragaglia & Tosi 2006) because they are the deepest, widest and cleanest ones. To select bona fide field stars, we specifically focus our analysis on the (V, B − V) region below the clusters' main sequences: this region differs for each field (as shown in the upper panels of Figures 1, 2, 3), due to the various locations of the clusters, reddening distribution, etc. To increase the sensitivity to the structural parameters, each region has been further divided into subregions. The chosen "grid" is set up keeping several factors in mind: including red stars gives better counting statistics; a narrow colour range shortens the mass range of the stellar populations, weakening the constraints on the SFR and the IMF; and focusing only on blue stars preserves the B-magnitude completeness. The bright and faint magnitude limits are chosen to avoid cluster stars, while guaranteeing sample completeness.

In the directions (l, b) = (115.5°, −5.38°) (NGC7789) and (l, b) = (73.98°, +8.48°) (NGC6819) the bulk of the thin disc stars are close to the cluster, and therefore share similar CMD positions. Although this is a strong limitation for the thin disc analysis, these data still represent a unique chance to study the thick disc structure. In fact, few, if any, of the sources with magnitude fainter than V = 18 (and suitable colours) are likely to be physically associated with the clusters. Again, thanks to the excellent photometry, we can exploit the CMDs as faint as V = 22; according to simulations, at this magnitude it is possible to trace the radial scale length of the thick disc. The situation is different in the anticentre field (l, b) = (177.63°, +3.09°) (NGC2099). The proximity to the Galactic plane offers a deep snapshot of the outer thin disc, but consequently provides little information about the thick disc. Combining the three lines of sight is an effective test for our Galactic model. Finally, the CMD density of these stars reflects the matter distribution along the line of sight: the luminosity function is more sensitive to the Galactic structure, while the colour distribution is a major discriminant for age/metallicity/reddening combinations.

THE MODEL

Both the thin and the thick disc components are shaped as double exponentials, characterized by vertical and radial scales. Because of the low latitude, our lines of sight are less informative about the vertical structure. In our simulations, we make the simplifying assumption that the thick disc scale height (H_thick) is 1 kpc, which is comfortably within the literature range, while the thin disc scale height (H_thin) is tested for three characteristic values, namely 200, 250, 300 pc. Both the thick and thin disc radial scale lengths are allowed to vary freely. Halo and thick disc local densities are expressed as fractions of the thin disc density: the local halo fraction is fixed to be 0.0015 (see e.g. Siegel et al. 2002), while the thick disc value is a free parameter. The stellar halo is characterised by a De Vaucouleurs density law with a half-light radius of 2.6 kpc. Although our data are only marginally sensitive to the halo structure, simulations indicate that this component is required to improve the quality of the fit. In conclusion, our Galactic model relies on four free parameters, namely the two scale lengths (L_thin and L_thick), H_thin, and the local thick disc normalization. The complete list of model ingredients is given in Table 1 (for further details see e.g. Cignoni et al. 2007 and Castellani et al. 2002).

For each component, the SFR is assumed constant; the recent SFR of the thin disc is chosen to reproduce the blue edge of the CMD. The IMF is a power law with a Salpeter exponent. Masses and ages are randomly extracted from the IMF and the SFR, and colours are interpolated using the Pisa evolutionary library (Cariulo et al. 2004). Once the absolute photometry is created, the line of sight is populated according to the density profiles and a reddening correction is introduced. To reduce the Poisson noise, the model CMDs are built from samples ten times larger than the observed ones. For stars brighter than V = 22, photometric errors do not exceed 0.01 mag and the completeness in V is around 80% (Kalirai et al. 2001b,c), thus the simulated CMDs can be directly compared with the data without blurring or corrections. Finally, to decide the match quality of a given model, a Kolmogorov-Smirnov test was carried out on both the colour distribution in each subregion and the luminosity function of the whole box. Only models giving a KS-probability larger than 0.001 for each constraint are selected.
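To make the density parametrisation concrete, the following Python sketch evaluates the double-exponential discs along a line of sight. It is only an illustration of the model described above: the solar Galactocentric radius, the omission of the De Vaucouleurs halo term, and the particular scale lengths and 7% local normalisation are assumptions or illustrative values, not the paper's fitted results.

```python
import numpy as np

R_SUN = 8.0  # kpc, assumed Galactocentric radius of the Sun

def disc_density(R, z, L, H):
    """Double exponential, normalised to 1 at (R_SUN, z = 0):
    rho ~ exp(-(R - R_SUN)/L) * exp(-|z|/H)."""
    return np.exp(-(R - R_SUN) / L) * np.exp(-np.abs(z) / H)

def los_coords(d, l_deg, b_deg):
    """Galactocentric cylindrical (R, z) at heliocentric distance d (kpc)
    along the Galactic direction (l, b)."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    x = R_SUN - d * np.cos(b) * np.cos(l)
    y = d * np.cos(b) * np.sin(l)
    return np.hypot(x, y), d * np.sin(b)

# Relative star counts per distance bin toward NGC7789, (l, b) = (115.5, -5.38):
d = np.linspace(0.1, 15.0, 150)                  # kpc
R, z = los_coords(d, 115.5, -5.38)
thin = disc_density(R, z, L=2.5, H=0.25)         # illustrative scales, kpc
thick = 0.07 * disc_density(R, z, L=3.2, H=1.0)  # illustrative 7% normalisation
counts = (thin + thick) * d**2                   # volume element grows as d^2
```

A synthetic CMD built from such a density run can then be confronted with the data through the two-sample KS test mentioned above, rejecting any parameter set whose KS probability falls below 0.001.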
RESULTS ABOUT THIN DISC AND THICK DISC STRUCTURE

The lower panels of Figures 1, 2, and 3 show the synthetic diagrams which best reproduce the observed CMDs of the top panels. Figure 4 shows the allowed region in the parameter space L_thick versus L_thin. It seems clear that our fields at (l, b) = (115.5°, −5.38°) (NGC7789) and (l, b) = (73.98°, +8.48°) (NGC6819) are poorly suited to constrain the thin disc structure: any thin disc scale length between 1000 and 6000 pc looks acceptable. Even allowing for different thin disc scale heights does not actually reduce the parameter space. In these CMDs, cluster and thin disc partially overlap, causing the loss of field stars during the already mentioned selection process. In conclusion, there are seemingly insufficient thin disc stars in our selected regions to determine its properties. In contrast, these directions clearly indicate a preferred range for the thick disc scale length. The solutions for (l, b) = (177.63°, +3.09°) (NGC2099) present the opposite situation: in this direction, the L_thin values are well constrained, while no preferred solution emerges for L_thick (which varies between 2000 pc and 6000 pc). Evidently, the low latitude of NGC2099 combined with the intrinsically short vertical scale of the thin disc implies that a significant portion of the thin disc is included in our field of view. On the other hand, this is not true for the thick disc, whose population density close to the Galactic plane is much lower. Remarkably, a region exists in the parameter space which is consistent with the combined directions. The three panels of Figure 4 explore the dependence of this region on the thin disc scale height (H_thin).
For H_thin = 200 pc the acceptable solutions for L_thick do not show any correlation with L_thin (L_thick versus L_thin is flat); for this scale height, the thin disc has a negligible influence on the explored CMD region (the thin disc is brighter than V ∼ 17-18, i.e., the low-magnitude limit). If H_thin is increased, the probability of finding thin disc stars fainter than V ∼ 17-18 increases as well. This implies a kind of degeneracy between H_thin and L_thick. This effect is particularly evident for NGC6819, the highest-latitude field: L_thick is strongly correlated with H_thin. Regardless of the particular thin disc scale height, it is noteworthy that the recovered ratio L_thick/L_thin never falls below one (the straight line in Figure 4 stands for L_thick/L_thin = 1). Most of the common solutions clump around H_thin = 200 pc, supporting similar values for L_thick and L_thin. If H_thin is increased, the accepted L_thick becomes slightly larger than L_thin.

[Figure caption: The thin disc scale height is 250 pc; the radial scales for the thin disc and thick disc are 2500 pc and 3700 pc, respectively. The local thick disc normalization is 7%.]

Figure 5 shows the acceptable pairs of thick disc normalization and L_thick. The two parameters influence the luminosity function in different ways. The first parameter is the fraction of thick disc stars with respect to thin disc stars in the solar neighbourhood: it controls the ratio between bright (essentially thin disc) and faint (essentially thick disc) stars. On the other hand, the thick disc scale length is associated both with the bright/faint ratio and with the decline of the luminosity function at the faint end (which is always thick disc dominated). For each field, the solution space of Figure 5 demonstrates the strong anti-correlation between L_thick and the thick disc local normalization. This effect is related to the fact that the total number of stars subtended by an exponential distribution is proportional to the scale; hence, when the model scale decreases the local normalization must increase in order to reproduce the observed star counts. Common solutions exist only for H_thin shorter than 250 pc; beyond this value, the solution spaces for the directions to NGC6819 and NGC7789 split into two distinct regions and common solutions no longer exist.

Summarising:
(i) the thin disc scale height is shorter than 250 pc;
(ii) the thick and the thin disc scale lengths are similar;
(iii) the scale lengths are quite short (2250-3000 pc for the thin disc, 2500-3250 pc for the thick disc);
(iv) a set of parameters (ρ_thick, L_thick, L_thin, H_thin) can simultaneously satisfy the requirements of the three directions;
(v) the thick disc normalization is smaller than 10%.

REDDENING AND STAR FORMATION RATE

Together with the spatial structure, the reddening distribution and the thin disc star formation are also constrained. For this task, particularly informative is the blue edge of the CMDs, namely the envelope of the main sequence turn-offs (vertical, or shifted to the red by reddening). Once the model metallicity is assumed, the blue edge of the colour-magnitude diagram is a function of the star formation rate and the reddening distribution along the line of sight. In particular, the brightest stars of the blue edge (i.e., the blue plume to the left of and/or immediately below the cluster turn-off) are ideal candidates to infer information on the thin disc SFR: given the proximity of these stars, it is reasonable to suppose a low reddening.
Figures 1 and 2 show clearly that for NGC6819 and NGC7789 the blue edge is quite constant in colour, with B − V ranging between 0.6 and 0.7. This feature is a strong clue that in these directions the reddening is fairly independent of distance. For NGC2099 the situation is different, with the blue envelope moving from B − V ∼ 0.4 to B − V ∼ 1.15, so the reddening is expected to vary along the line of sight. In order to infer the thin disc SFR and the reddening distribution, the synthetic diagrams are computed from the following principles: (i) given that our data only weakly constrain the precise star formation law, the SFR is assumed constant and only the recent SFR cut-off is allowed to vary; (ii) the models have been reddened using the standard R_V = 3.1 reddening curve of Cardelli et al. (1989), with the distribution of the reddening material assumed to be a free function of the heliocentric distance. The reddening is increased (in distance steps of 1 kpc) until the synthetic and the observed blue edges match (according to the KS test for the colour distributions) at every distance. In general, for a young population like the thin disc, the degeneracy between SFR and reddening is high. In contrast, for an old population like the thick disc (age > 7-8 Gyr) the SFR has a minor impact and the blue edge is a strong indicator of the reddening. In the following we report our results for each field.

5.1 (73.98°, +8.48°) (NGC6819)

In order to reconcile the theoretical thick disc turn-off with the observed blue edge, our analysis yields a constant reddening E(B − V) ∼ 0.16, which is roughly the same as that of the cluster (according to Kalirai & Tosi 2004, E(B − V) is 0.1-0.15 and the distance from the Sun ∼ 2.5 kpc), whereas it is lower than Schlegel's (1998) estimate, E(B − V) = 0.2. This result is a consequence of the relatively "high" Galactic latitude (b = 8.5°), which confines the dimming material very close to the Sun: at 3.0 kpc the line of sight is more than 450 pc above the Galactic plane, so, exceeding the vertical structure of the disc, it is plausible that most of the extinction occurs before the cluster. For a similar reason, we also find that thin disc star formation which is still ongoing is apparently inconsistent with the observed CMD: indeed, to reproduce the blue edge at magnitudes fainter than V = 18, the thin disc activity must be switched off about 1-1.5 Gyr ago. Evidently, most of the sampled thin disc stars belong to the old thin disc, while younger objects (age < 1 Gyr), with a typical scale height of 100 pc, are severely under-sampled in our field. To further examine and verify this hypothesis, we have included in our Galactic model a synthetic young population with an exponential scale height of 100 pc: although the presence of the cluster in the CMD does not allow a quantitative comparison, we have verified that this intrusion has a minimal impact in the range 18 < V < 22. In other words, ongoing star formation very close to the Galactic plane is not actually ruled out.

5.2 (115.5°, −5.38°) (NGC7789)

As for NGC6819, a constant reddening correction, namely E(B − V) = 0.2 for all the synthetic stars (i.e., independently of distance), gives the best result. This value, while in good agreement with the cluster estimate, is significantly lower than that quoted on Schlegel's maps [E(B − V) = 0.4] in this direction. As with NGC6819, thin disc star formation seems extinguished about 1-1.5 Gyr ago, but again this is a consequence of the selection effect against young, low-latitude stars.
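The distance-stepped reddening recovery described at the start of this section (principles (i) and (ii), with the KS acceptance threshold of Section 3) can be sketched as follows. The function `simulate_colours` stands in for the full population-synthesis machinery and is hypothetical, and the step size and cap are likewise illustrative.

```python
import numpy as np
from scipy.stats import ks_2samp

def fit_reddening(observed_bv, simulate_colours, n_bins=15,
                  step=0.02, p_min=0.001, ebv_max=2.0):
    """Recover E(B-V) as a step function of heliocentric distance
    (one value per 1-kpc bin), increasing each bin's reddening until
    the synthetic B-V distribution matches the observed one."""
    ebv = np.zeros(n_bins)
    for i in range(n_bins):
        while ebv[i] < ebv_max:
            synthetic_bv = simulate_colours(ebv)  # reddened synthetic CMD
            if ks_2samp(observed_bv, synthetic_bv).pvalue >= p_min:
                break                             # acceptable match
            ebv[i] += step                        # add more extinction
    return ebv
```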
5.3 (177.63°, +3.09°) (NGC2099)

This field lies in the Galactic plane, and therefore we expect the reddening to be a function of the heliocentric distance. This is indeed clear from the CMD of Figure 3 (upper panel): while the CMDs of the other fields show a quasi-vertical blue edge, as a consequence of the reddening material along the line of sight the blue edge in this field is redder at fainter magnitudes. According to simulations, this field is thin disc dominated. A constant SFR which is still ongoing is necessary to explain the closest stars. If the same SFR is assumed for the entire thin disc, the reddening law we recover is indicated in Figure 6 (the error bars indicate the range of acceptable models). For comparison, the same figure also shows the theoretical reddening distribution expected in the same direction using an exponentially decaying law perpendicular to the Galactic plane (with the asymptotic value fixed to the recovered one). The recovered distribution is not fitted by any exponential reddening (calculations for H_RED = 100 and 200 pc are shown).

REMAINING UNCERTAINTIES

Uncertainties in Galaxy modelling due, for example, to the assumed metallicity, binary population and incompleteness can make our estimates imprecise:

Metallicity: The mean metallicity for both the thin and the thick disc is debated. Recent and old observational studies of the solar neighbourhood provide evidence for a metallicity spread (see e.g. Nordström et al. 2004) at any given age which is not explained in the framework of standard models of chemical evolution. Moreover, the presence of an age-metallicity relation and/or a spatial metallicity gradient further complicates the issue. Ignoring the right metallicity can lead to an incorrect interpretation of the spatial structure: following the paradigm that a metal-rich system appears under-luminous with respect to a metal-poor one, if the data are metal poor compared to the model, the inferred scale length will be too short.

Binaries: The inclusion of binaries increases the luminosity of the population. Our model adopts only single stars; thus, if the binary fraction is conspicuous, the retrieved scale will turn out to be too short.

Completeness: Losing faint stars leads to spuriously short scale lengths (especially for the thick disc).

Halo component: Standard Galactic halo parameters have been used. However, the literature also documents very different prescriptions, and a different choice could affect the thick disc distribution.

Structure: Our findings make sense within the context of our parametrisation: exponential profiles, whose radial and vertical scales are constant within the Galaxy. Introducing new features, such as an increase of the scale height (flare) or a warp of the Galactic plane, may produce a very different picture. Nevertheless, our parametrisation is sufficient to reproduce the observations. In addition, our Galactic model is axisymmetrical; therefore, any variation with Galactic longitude is missed. Finally, the thick disc scale height is assumed to play a minor role (because of the low latitude), hence it is fixed at a canonical 1 kpc. However, sporadic strong variations from this value are reported in the literature. In order to explore this parameter as well, additional and preferably higher-latitude fields would be needed.
CONCLUSIONS

We have carried out preliminary modelling of the observations in three deep and low-latitude Galactic fields imaged at the CFH telescope. Using a population synthesis method we gain information about the spatial structure, the thin disc star formation and the reddening distribution along the lines of sight. The directions (l, b) = (73.98°, +8.48°) (NGC6819) and (l, b) = (115.5°, −5.38°) (NGC7789) are very sensitive to the thick disc scale length, whereas the line of sight (l, b) = (177.63°, +3.09°) (NGC2099) is sensitive to the thin disc scale length.

We decompose the Galaxy into halo, thick disc, and thin disc. Solutions common to all lines of sight do exist and require that the thin disc has a vertical scale shorter than about 250 pc. The inferred radial scales are consistent with the thick disc being equally extended as, or slightly larger than, the thin disc. Our results support a typical scale of 2250-3000 pc for the thin disc and 2500-3250 pc for the thick disc. Similar scales for the thin disc were found by Ruphy et al. (1996) (L_thin ∼ 2.3 ± 0.1 kpc) and Robin et al. (1992) (L_thin ∼ 2.5 kpc). It is noteworthy that the Robin et al. (1992) field is in the same direction as NGC2099, giving independent support to our result (they use a different statistical procedure). However, these authors argue for a thin disc cut-off at 15 kpc, whereas our determination invokes a reddening effect. In fact, the only acceptable solutions for the direction to NGC2099 require a structured reddening distribution, which seems to differ from simple exponential laws, revealing the inadequacy of classical distributions close to the Galactic plane.

Concerning the thick disc, our recovered scale length is perfectly compatible with Robin et al. (1996) (L_thick ∼ 2.8 ± 0.8 kpc), Buser et al. (1999) (L_thick ∼ 3 ± 1.5 kpc), and Cabrera-Lavers et al. (2005) (L_thick ∼ 3.04 ± 0.11 kpc). Ojha (2001) finds a thick disc scale length (L_thick ∼ 3.7 +0.8/−0.5 kpc) longer than ours, but the mean metallicity they adopt is higher ([Fe/H] ∼ −0.7). Likewise, the very extended thick disc (L_thick > 4 kpc) quoted by some authors may be due to adopting the luminosity function of 47 Tucanae ([Fe/H] ∼ −0.7/−0.8), and could be reconcilable with our findings as well. Moreover, our conclusion about the thick disc normalization in the solar vicinity is consistent with the results obtained by the above-mentioned papers.

Searching for common solutions, the results presented here depict a thick disc scale length that may be only slightly longer (∼ 20%) than the thin disc one. If the two discs are really decoupled, the task for the future will be to understand the underlying mechanisms which have promoted very different scale heights while preserving similar scale lengths. Clearly this is only a pilot study showing what could be achieved by combining deep, high-quality photometric fields with appropriate models for the Galaxy's components. To reach a better understanding of the Galactic structure, more fields of this kind should be studied, possibly including symmetrical locations.
Cerebral Infarction Caused by Trousseau's Syndrome Associated With Lung Cancer

Background: Lung cancer is one of the common cancers that can cause Trousseau's syndrome. However, there are few reports of cerebral infarction due to Trousseau's syndrome associated with lung cancer. The aim of this study is to investigate the clinical features of lung cancer-related cerebral infarction and effective management practice.

Methods: Japanese patients diagnosed with Trousseau's syndrome-related cerebral infarction associated with lung cancer between August 2012 and November 2021 in our hospital were retrospectively enrolled. Clinical data, treatment, and outcomes of the patients were collected.

Results: Ten patients were enrolled. The median age was 65 years (range: 43-84 years). All patients had advanced lung cancer. The histological types were adenocarcinoma (n = 8), pleomorphic carcinoma (n = 1), and small cell lung cancer (n = 1). Recurrent cerebral infarction occurred in six patients. Among four patients who had continued heparin since the initial infarction, recurrence occurred in one. D-dimer was high in all 10 patients at the initial cerebral infarction, and the D-dimer level at the time of recurrent cerebral infarction was higher than that at the first cerebral infarction. Since performance status declined in nine patients, only one patient continued anticancer drugs after cerebral infarction. Four patients died within 100 days of the onset of cerebral infarction.

Conclusions: Cerebral infarction in lung cancer-related Trousseau's syndrome has a poor prognosis. Heparin may be effective in controlling the condition. In addition, D-dimer may serve as a marker of cancer-related thrombosis.

Introduction

Trousseau's syndrome refers broadly to cancer-associated hypercoagulable disorder and, in its narrow definition, to cancer-associated cerebral artery thromboembolism. Thrombotic events are the second leading cause of death in patients with cancer, after death from cancer itself [1]. In particular, cerebral infarction has a significant impact on prognosis because it can cause a decrease in performance status. Although lung cancer is one of the most common cancer types associated with cerebral infarction [2], the clinical characteristics and optimal management of Trousseau's syndrome-related cerebral infarction in lung cancer are unclear. Here, we summarize and analyze the characteristics of 10 lung cancer patients with Trousseau's syndrome-related cerebral infarction and discuss effective management practice.

Materials and Methods

Japanese patients diagnosed with lung cancer between August 2012 and November 2021 in our hospital were retrospectively enrolled. The diagnosis of lung cancer was confirmed pathologically. Among those lung cancer patients, we searched for the patients who had experienced cerebral infarction associated with lung cancer-related Trousseau's syndrome. The diagnosis of cerebral infarction was based on diffusion-weighted brain magnetic resonance imaging (MRI). Patients treated with anti-vascular endothelial growth factor (VEGF) antibodies, which are a known risk factor for thrombosis, were excluded. Whether the cerebral infarction was caused by lung cancer or by other causes, such as atherosclerosis, was determined after consultation with neurologists or neurosurgeons. We collected and analyzed clinical data on baseline clinical characteristics, anticoagulants, introduction of anticancer drugs, and outcomes.
The study was approved by the Institutional Review Board (IRB) and was conducted in compliance with the ethical standards of the responsible institution on human subjects as well as with the Helsinki Declaration.

Results

During the 9 years and 3 months, a total of 2,113 patients were diagnosed with lung cancer in our hospital. There were 970 patients with stage 0-II, 361 with stage III, 685 with stage IV, and 97 with unidentified stage. Ten patients were diagnosed with lung cancer-associated cerebral infarction. The incidence of lung cancer-related cerebral infarction was 0.28% in stage III and 1.31% in stage IV; none of the stage 0-II patients had a lung cancer-related cerebral infarction. The backgrounds of the 10 patients are summarized in Tables 1 and 2. The characteristics, treatment and outcomes of cerebral infarction are shown in Table 3.

The patient backgrounds

The median age was 65 years (range: 43-84 years). The patients included seven males and three females. All 10 patients had an advanced stage of cancer. The most common histological type was adenocarcinoma (n = 8); one patient had pleomorphic carcinoma which partially contained adenocarcinoma-like mucinous cells, and the other had small cell lung cancer (SCLC). Five patients harbored driver mutations: a positive epidermal growth factor receptor (EGFR) mutation in four and a positive anaplastic lymphoma kinase (ALK) rearrangement in one. Untreated brain metastasis was observed in two. While one underwent whole brain radiation therapy approximately 1 year before the onset of cerebral infarction, brain magnetic resonance angiography (MRA) showed vessel wall irregularities due to atherosclerosis and no arterial stenosis. Regarding the risks of atherosclerosis, a smoking history, dyslipidemia, hypertension, and diabetes mellitus were present in eight, one, three, and one patient, respectively. Brain MRA showed atherosclerotic changes in the vessels in two cases.

As for the timing of onset, in nine cases the cerebral infarction occurred when the cancer was not under control. Four patients had cerebral infarction while their lung cancer was resistant to anticancer treatment and progressing. Four patients had cerebral infarctions when they were scheduled to start their first anticancer drug. One patient was diagnosed with lung cancer after the cerebral infarction occurred. One patient had cerebral infarction while the cancer was stable on anticancer drugs. Four patients had other thrombotic complications, such as deep venous thrombosis (DVT) (n = 2) and renal infarcts (n = 2). While transthoracic echocardiography was performed in eight patients, nonbacterial thrombotic endocarditis (NBTE) and right-left shunt were not identified in any of the patients. Transesophageal echocardiography was not performed on any patient due to their poor general condition.
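The stage-stratified incidence figures quoted above follow from simple arithmetic on the cohort counts; per-stage case numbers of 1 (stage III) and 9 (stage IV) are implied by the reported percentages and the total of 10 cases. A minimal check:

```python
# Totals taken from the text; case counts per stage are implied by the
# reported percentages (0 + 1 + 9 = 10 cases overall).
stage_totals = {"0-II": 970, "III": 361, "IV": 685}
stage_cases = {"0-II": 0, "III": 1, "IV": 9}

for stage, total in stage_totals.items():
    rate = 100.0 * stage_cases[stage] / total
    print(f"stage {stage}: {stage_cases[stage]}/{total} = {rate:.2f}%")
# stage III -> 0.28%, stage IV -> 1.31%, matching the reported incidences
```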
The characteristics and treatment of cerebral infarction

Multiple and coincidental cerebral infarctions in bilateral multiple vascular territories were the most frequent radiological feature shown by brain MRI (n = 9). As representative cases, the images of cases 1 and 2 are shown in Figures 1 and 2, respectively. Nine patients had neurological symptoms at the time of the initial cerebral infarction, while one was asymptomatic. Recurrent cerebral infarctions were observed in six patients, all within 50 days; five had recurrence despite anticoagulation therapy. Among the four patients who started and continued heparin at the initial onset, one had a recurrence. On the other hand, among the other four patients whose initial treatment was an anticoagulant other than heparin or who did not continue heparin, all four had recurrences. Two patients did not receive anticoagulants at the initial onset because of their poor general condition. While heparin was used for initial treatment in four patients, it was used in seven patients over the course of treatment. Moreover, three patients were discharged using self-injection of subcutaneous heparin calcium at home. Since low-molecular-weight heparin has not been approved by Japanese medical insurance for the treatment of cerebral infarction, we used intravenous unfractionated heparin or subcutaneous heparin calcium.

All patients showed high D-dimer levels at the time of both initial and recurrent cerebral infarction. The D-dimer level at the initial onset was 16.1 ± 12.5 µg/mL (mean ± standard deviation (SD)). Among the six patients with recurrent cerebral infarction, five had higher D-dimer at the recurrence than at the initial onset. In cases 1 and 3, heparin decreased D-dimer to within the normal range after the recurrences; however, in case 3, heparin was discontinued because of heparin-induced bleeding.

Outcomes of the patients

Nine patients showed a decrease in performance status (PS) due to cerebral infarction or cancer progression. While the most frequent Eastern Cooperative Oncology Group (ECOG) PS before cerebral infarction was 1, after cerebral infarction it became 3. As a result, only three patients (cases 1, 2, and 4) received chemotherapy after the cerebral infarction, and two of them discontinued the chemotherapy during the first cycle due to a decline in PS caused by recurrent cerebral infarction (cases 2 and 4). Thus, only one (case 1) could continue both chemotherapy and long-term heparin calcium self-injection and keep an ECOG-PS of 1 without recurrent cerebral infarction after the first recurrence. This patient was the only one who survived more than a year. At least four patients died within 100 days of the onset of cerebral infarction, excluding two patients (cases 6 and 10) who were lost to follow-up.

Discussion

To our knowledge, this is the first report of the rare condition of cerebral infarction associated with Trousseau's syndrome due to lung cancer. First, our study showed that cerebral infarction in Trousseau's syndrome has a seriously poor prognosis. A decline in PS due to cerebral infarction made it difficult to start or continue anticancer drugs. The principle for controlling Trousseau's syndrome is the treatment of the cancer. In this study, cerebral infarction due to Trousseau's syndrome occurred only in patients with advanced lung cancer which required chemotherapy. Nevertheless, 90% of the patients were unable to continue chemotherapy after cerebral infarction. Cerebral infarctions in Trousseau's syndrome tended to be multiple and recurrent; therefore, it was difficult for patients to recover their general condition. Second, heparin might be an effective treatment for preventing recurrence of cerebral infarction. Although there are guidelines recommending low-molecular-weight heparin for the treatment of cancer-related venous thrombosis [3,4], there is no guideline for cancer-related arterial thrombosis, such as cerebral infarction.
However, heparin is the preferred drug in Trousseau's syndrome, and indeed there have been many reports of exacerbation of thrombosis following discontinuation of heparin [5,6]. This is because heparin inhibits a variety of cancer-related thrombogenic pathways, including antithrombin activation, selectin-mediated cancer mucin thrombus production [7], and the tissue factor pathway [8]. On the other hand, most of the direct oral anticoagulants (DOACs) specifically inhibit factor Xa. As the Hokusai venous thromboembolism (VTE) cancer study demonstrated the noninferiority of edoxaban to low-molecular-weight heparin [9], DOACs are an available option for the treatment of cancer-related VTE. However, little evidence exists regarding the effects of DOACs on cancer-related arterial thrombosis. In our cases, recurrence of cerebral infarction occurred in four patients on oral antithrombotic drugs such as aspirin and DOACs, but in only one patient on heparin; among the seven patients who received heparin, six had no recurrence.

Third, D-dimer may be a useful marker of Trousseau's syndrome-related cerebral infarction. Elevated D-dimer in patients with cerebral infarction has been reported to suggest a cancer-induced hypercoagulative state [10]. In all of our cases, D-dimer was elevated at the initial cerebral infarction. Moreover, in the six patients with recurrent cerebral infarction, D-dimer levels were still high at the time of recurrence. Ito et al. reported that high D-dimer levels in the subacute phase of cerebral infarction associated with Trousseau's syndrome were related to poor prognosis [11]. Thus, D-dimer monitoring may be useful in assessing whether cancer-associated thrombosis is under control.

This study had some limitations. First, it lacked statistical analysis due to the small number of cases. Second, it was a retrospective analysis and might contain selection bias: we may have missed cases of cerebral infarction in severely ill patients, as well as cases indistinguishable from multiple brain metastases. Thus, the incidence rate of this condition may be higher than reported here.

In conclusion, cerebral infarction in lung cancer-related Trousseau's syndrome has a poor prognosis. Heparin may be effective in controlling the condition. In addition, D-dimer may serve as a marker of cancer-related thrombosis.
Why do languages tolerate heterography? An experimental investigation into the emergence of informative orthography

It is widely acknowledged that opaque orthographies place additional demands on learning, often requiring many years to fully acquire. It is less widely recognized, however, that such opacity may offer certain benefits in the context of reading. For example, heterographic homophones such as 〈knight〉 and 〈night〉 (words that sound the same but which are spelled differently) impose additional costs in learning but reduce ambiguity in reading. Here, we consider the possibility that, left to evolve freely, writing systems will sometimes choose to forego some simplicity for the sake of informativeness when there is functional pressure to do so. We investigate this hypothesis by simulating the evolution of orthography as it is transmitted from one generation to the next, both with and without a communicative pressure for ambiguity avoidance. In addition, we consider two mechanisms by which informative heterography might be selected for: differentiation, in which new spellings are created to differentiate meaning (e.g., 〈lite〉 vs. 〈light〉), and conservation, in which heterography arises as a byproduct of sound change (e.g., 〈meat〉 vs. 〈meet〉). Under pressure from learning alone, orthographic systems become transparent, but when combined with communicative pressure, they tend to favor some additional informativeness. Nevertheless, our findings also suggest that, in the long term, simpler, transparent spellings may be preferred in the absence of top-down explicit teaching.

Introduction

Writing systems, particularly those employing alphabetic scripts, are commonly regarded as providing a visual representation of speech, with letters or chunks of letters corresponding to distinct sounds. However, it is also well understood that writing systems diverge from their spoken counterparts in important ways (Biber, 1988; Bolinger, 1946; Coulmas, 1991). The insertion of spacing between words, for example, is almost ubiquitous across alphabetic writing systems, even though no such spacing exists between words in speech (Parkes, 1992; Saenger, 1997). It seems likely that graphic innovations such as these exist because they confer some benefit that is not required in the spoken modality (Rastle, 2019). In the case of spacing, for example, the separation of words into discrete chunks presumably aids in the targeting and extraction of visuolinguistic information; such constraints do not exist in the auditory modality. In principle, the same may be true of spelling: words may be spelled in ways that diverge from the spoken language because such divergence confers some benefit in reading (Ulicheva, Harvey, Aronoff, & Rastle, 2020).
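Before turning to concrete cases, the transmission-chain idea in the abstract can be made concrete with a toy sketch. Everything here (the four-word lexicon, the regularization rate, the form of the communicative filter) is invented for illustration and is not the experimental design used in the study.

```python
import random

# Toy lexicon of heterographic homophones: (word identity, sound).
LEXICON = [("meat", "mi:t"), ("meet", "mi:t"),
           ("night", "nait"), ("knight", "nait")]

def next_generation(spellings, communicative_pressure, regularize=0.3):
    new = {}
    for word, sound in LEXICON:
        # Learning bias: occasionally adopt another spelling of the same
        # sound, which tends to collapse heterography over generations.
        variants = [spellings[w] for w, s in LEXICON if s == sound]
        choice = (random.choice(variants)
                  if random.random() < regularize else spellings[word])
        if communicative_pressure:
            # Reject a change that would merge two meanings into one form.
            rivals = {spellings[w] for w, s in LEXICON
                      if s == sound and w != word}
            if choice in rivals:
                choice = spellings[word]
        new[word] = choice
    return new

spellings = {w: w for w, _ in LEXICON}   # start fully heterographic
for _ in range(100):
    spellings = next_generation(spellings, communicative_pressure=True)
print(spellings)  # with the filter on, distinct spellings tend to survive
```

With `communicative_pressure=False`, the copying bias runs unchecked and the chain typically fixes on a single spelling per sound, mirroring the transparency outcome the abstract reports under learning pressure alone.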
One potential case of such functional divergence is heterographic homophony: words that sound alike but which are written differently (e.g., 〈meat〉 and 〈meet〉 for /miːt/). Heterographic spellings such as these may serve a valuable function in reading. For example, an English speaker faced with a spoken sentence beginning /ðεr…/ will have high uncertainty about what word (or even what sentence structure) is likely to come next: a noun, as in /ðεr kat/, a form of the verb to be, as in /ðεr ɪz/, or the progressive form of a verb, as in /ðεr gəʊɪŋ/. In writing, by contrast, this uncertainty is greatly reduced; the spellings 〈their〉, 〈there〉, and 〈they're〉 differentiate these cases, giving the reader a head start on processing the upcoming syntactic structure and semantic content. Heterographic homophony is also common below the word level, since many orthographies forego the phonological principle in favor of the morphological principle in the spelling of affixes (Sandra, Ravid, & Plag, 2024). The English suffixes -er (denoting the comparative form of an adjective; e.g., nicer) and -or (denoting the performer of an action; e.g., actor) are homophonous in speech (/ər/), but their spellings differentiate these meanings in writing. Of course, English orthography is suboptimal here in that -er may also indicate agentive status (e.g., builder); nevertheless, statistical patterns such as these hold across a variety of English affixes (Berg & Aronoff, 2017) and it has been shown that readers are sensitive to and make use of such cues in reading (Ulicheva et al., 2020). Heterography might be especially important given the differing constraints of the written modality, including the lack of other cues to meaning, such as stress, context, and body language, and the inability for reader and writer to engage in immediate feedback and repair. Furthermore, written language has richer vocabulary and more complex syntax than spoken language (Biber, 1988; Korochkina, Marelli, Brysbaert, & Rastle, 2024; Nation, Dawson, & Hsiao, 2022), placing different pressures on ambiguity resolution.
Heterography is particularly notable in English, but it is also a feature of many other languages and writing systems. French is similar to English in having a large number of heterographic homophones: cent (hundred), sang (blood), sans (without), and sens (feel), for example, are all pronounced /sɑ/ (although their pronunciations will sometimes be distinguished through liaison). In Danish, the words hver (every), vejr (weather), vaer (be), and vaerd (worth) are all pronounced /vεˀɐ̯/. In Vietnamese, the graphemes 〈d〉, 〈gi〉, and 〈r〉 are homophonous, resulting in sets like dao (knife), giao (delivery), and rao (advertise), all pronounced /zāw/. In some cases, features of an orthography designed for other purposes can inadvertently result in homophone disambiguation: Noun capitalization in German contrasts Wagen (car) and wagen (to dare), both pronounced /vaːɡṇ/; eclipsis marking in Irish contrasts bpáistí (children) and báistí (rain), both pronounced /bɑʃti/; and the morphological principle of Russian orthography contrasts приступить (to start) and преступить (to transgress), both pronounced /pristupʲitʲ/. Even in the most transparent of orthographies it is possible to find some instances of heterography: In Italian, a residual initial 〈h〉 inherited from Latin contrasts hanno (have) and anno (year), both pronounced /anno/, while the grave accent is sometimes used to distinguish common homophonous words, such as la (the) and là (there). Perhaps the most elaborate example of how a writing system can deal with homophony head-on is the Chinese orthography. The Chinese spoken languages are rich in homophones, making heterographic spellings (and therefore a logographic writing system) particularly useful (Frost, 2012). In Mandarin Chinese, the words 糖 (sugar), 塘 (embankment), 溏 (pond), and 搪 (to block) are homophonous in speech but heterographic in writing: the phonetic radical on the right (〈唐〉, /táŋ/) represents the spoken syllable, while the semantic radicals on the left differentiate the meanings (Coulmas, 1991, p. 101). In addition, 唐 itself is a surname/dynasty (Tang), and another unrelated word 堂 (hall) is also pronounced /táŋ/, yielding at least six ways to write the same sound depending on the meaning. This property allows the written form of Chinese to convey more information about meaning (to be more informative) than its spoken counterpart.

Despite the benefits that heterography may provide in reading, it comes with two main costs. Firstly, by definition, heterography implies that a single sound can be spelled multiple ways. In English, the heterographic spellings 〈meat〉 and 〈meet〉 imply that /iː/ can be spelled 〈ea〉 or 〈ee〉. Readers are therefore required to learn alternate spellings for a single sound, resulting in longer learning periods and more difficult decoding (Reis, Araújo, Morais, & Faísca, 2020; Seymour, Aro, & Erskine, 2003; Spencer & Hanley, 2003; Taylor, Plunkett, & Nation, 2011; Zhao, Li, Elliott, & Rueckl, 2018). Secondly, the arbitrary mapping between heterographic forms and meaning must also be learned. From the point of view of a modern English speaker, there is no intrinsic reason why meat is spelled 〈ea〉 and meet is spelled 〈ee〉. Nevertheless, these arbitrary spelling distinctions must be learned if they are to be useful, and they presumably place an additional burden on reading and, perhaps even more so, on writing (Frith, 1979; Shankweiler & Lundquist, 1992).
In this paper, we consider the possibility that, left to evolve freely, writing systems will sometimes choose to forego some simplicity for the sake of informativeness. A simple spelling system would be one that is easy to learn, use, and process; for example, by being transparent with respect to phonology. An informative spelling system, on the other hand, would be one that precisely conveys meaning. This idea of a tradeoff between simplicity and informativeness in the writing system has long been noted (e.g., Coulmas, 1991), and such a tradeoff has also been discussed within the study of language more broadly (e.g., Gabelentz, 1891; Martinet, 1952; Rosch, 1978; Zipf, 1949). Recent typological (e.g., Kemp, Xu, & Regier, 2018) and experimental (e.g., Kirby, Tamariz, Cornish, & Smith, 2015) studies have also subjected these ideas to empirical investigation in various domains. Of particular note here is the finding that complex, systematic structure emerges under concurrent pressures to be both simple and informative, a point we return to shortly.

First, however, it is useful to consider the mechanisms by which selection could occur if it is indeed the case that heterography emerges for functional reasons. Berg and Aronoff (2021, pp. 325-326) outline two models of how a word might enter a state of heterography. The first model, the differentiation model (Fig. 1A), explains heterography through the creation of new orthographic forms. For example, the spelling 〈lite〉 for the word light is frequently used in food products to mean light in calorific weight; in British English, the spelling 〈cheque〉 (perhaps influenced by French chèque) differentiates the bank draft from other meanings of the word check; and the word byte was a deliberate respelling of bite to avoid accidental mutation into the closely related term bit (Buchholz, 1977). Many monosyllabic words that are homophonous with common function words also tend to adopt alternate spellings, often by appending 〈e〉 or by doubling the final consonant: be-bee, but-butt, by-bye-buy, for-fore-four, in-inn, or-oar-ore, so-sew, to-too-two, we-wee. Differentiation is also common in surnames (Clarke, Greene, Wilde; Carr, Hogg, Mann) and trade names (Blu Tack, Froot Loops, Wite-Out) (Carney, 1994, sec. 6). One important way in which differentiation can occur, especially in a language like English that has historically contained a lot of spelling variation (Nevalainen, 2012; Stenroos & Smith, 2016), is by the conditioning of variant spellings on meaning; pairs like discreet-discrete, flour-flower, and plain-plane, which were once variant spellings of the same word, have taken on distinct meanings over time (Carney, 1994, sec. 5.4). Berg and Aronoff (2017, p. 58) have referred to this as the "functionalization of leftovers": The spelling variants that survive are those that "can find distributional or functional niches."

The second model, the conservation model (Fig. 1B), explains heterographic homophones as the historical residue of sound change: Two spoken forms merge and become homophonous, but the original spellings are conserved in the orthography. For example, the meat-meet merger that occurred during the Great Vowel Shift ultimately resulted in Middle English /εː/ (spelled 〈ea〉) and /eː/ (spelled 〈ee〉) being pronounced /iː/ in Early Modern English (Lass, 2000), but the spellings were never changed accordingly, thus giving rise to a set of heterographic homophones that persist in present-day English (Wells, 1982, pp. 140-141): heal-heel, leak-leek, meat-meet, read-reed, sea-see, team-teem, weak-week.
The same is true of the pain-pane merger (Wells, 1982, pp. 141-142): maid-made, main-mane, pain-pane, raise-raze, sail-sale, vain-vane. Sound changes involving consonants have also resulted in (or contributed to) pairs of words entering a state of heterography, such as the reduction of /kn/ into /n/ (e.g., knight-night, know-no, knot-not), the loss of /ç/ (e.g., eight-ate, right-rite, sight-site), and the merger of /ʍ/ into /w/ (e.g., whale-wail, which-witch, whine-wine).1 A more recent (and perhaps in-progress) example can be

1 Although we can never be entirely certain how words were pronounced before the advent of sound recording technology, historical linguists have compiled persuasive evidence by a variety of methods. Comparison to modern German, for example, offers an insight into how these words might have been pronounced in the past (e.g., knot is cognate with Knoten, where the /kn/ cluster continues to be fully rendered, and eight is cognate with acht, where the palatal fricative still exists).

In some cases, it is debatable whether a given case of heterography was delivered by the differentiation or conservation mechanism. For example, while the etymological (and folk-etymological) respellings introduced during the Renaissance might appear to be cases of conservation (the most notorious example being the replacement of 〈dout〉 with 〈doubt〉 to indicate the word's Latin derivation from dubitare; Crystal, 2005, p. 268), it has also been argued that such respellings were motivated in part by a desire to differentiate homophones such as scene-seen, scent-sent, and whole-hole (Scragg, 1974, pp. 58-59). Nevertheless, regardless of the particular mechanism behind specific cases in English or any other language, our primary contention here is that both of these mechanisms provide adaptive, functional explanations for heterography. Differentiated spellings that prove communicatively useful will be more likely to survive; likewise, conserved spellings that prove communicatively useful will be more likely to survive.

Our aims in this paper are twofold. First, we test the idea that heterography emerges in response to a functional pressure to disambiguate meaning in writing. Second, we seek to understand how the emergence of heterography plays out under the two mechanisms of differentiation and conservation. Is one of these a better candidate explanation than the other? Approaching these evolutionary questions using data from natural languages is challenging. In particular, the available diachronic data (for any language) will necessarily be limited and impoverished: languages do not fossilize well, especially in their spoken forms. In addition, any answer derived from such datasets will have to rely on correlational, as opposed to causal, evidence; we cannot rerun history many times under different conditions.
We therefore turn to a different approach. Here we experimentally simulate the processes of differentiation and conservation using the experimental iterated learning paradigm (Kirby, Cornish, & Smith, 2008). In this paradigm, an artificially constructed language (or so-called "alien language") is passed along a transmission chain of human participants, simulating what happens during the cultural transmission and evolution of language. Participant i in a transmission chain learns the system based on the linguistic output of participant i − 1 and, subsequently, produces new linguistic output for participant i + 1 to learn from, although the participants themselves are not aware of this generational structure. It has been demonstrated in a wide variety of studies that, after several generations of cultural transmission, artificial languages can gradually adapt to the biases of the human learners and the environments in which they are used, yielding emergent linguistic phenomena, such as compositionality (Beckner, Pierrehumbert, & Hay, 2017; Kirby et al., 2008, 2015), combinatoriality (Verhoef, Kirby, & Boer, 2015), semantic category structure (Canini, Griffiths, Vanpaemel, & Kalish, 2014; Carr, Smith, Cornish, & Kirby, 2017; Silvey, Kirby, & Smith, 2019), regularization (Smith & Wonnacott, 2010), and argument marking (Motamedi, Smith, Schouwstra, Culbertson, & Kirby, 2021), among many other things. For reviews, see Bailes and Cuskley (2023), Kirby et al. (2014), Kirby (2017), Smith (2022), and Tamariz (2017).

Kirby et al. (2008) described the first experimental application of the iterated learning framework (which had previously been confined to computational modeling), showing that compositional structure, a systematic relationship between recombinant linguistic units and meaning, could spontaneously emerge under a bottleneck on transmission. This "bottleneck" defines a limit on the amount of information that can flow from one generation to the next (Brighton, 2002). Under a tight bottleneck, where little data passes from one generation to the next, the learner must perform more generalization from less input, such that the cognitive biases that the learner brings to the table become more important in shaping the structure of the emergent language. Generalization from limited input is a major driver of systematic structure in the language as a whole, since human learners tend to generalize in ways that increase the simplicity (i.e., systematicity) of the system, albeit unconsciously (Culbertson & Kirby, 2016). However, left unchecked, this bias for simplicity would ultimately result in the emergence of maximally simple, degenerate languages. Kirby et al. (2015) therefore extended the framework by including a communicative task; instead of each generation consisting of a single participant, each generation now consisted of a pair of participants engaged in a shared task requiring communicative precision. Crucially, this communicative component prevented the artificial languages from degenerating; instead, the languages find a tradeoff between simplicity on the one hand and informativeness on the other, just as in natural language (Kemp et al., 2018; Kemp & Regier, 2012; Mollica et al., 2021; Regier, Kemp, & Kay, 2015; Zaslavsky, Kemp, Regier, & Tishby, 2018).
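The transmission-chain logic is easy to state procedurally. The following Python sketch is our own illustration, not the authors' implementation: the learner is a toy memorizer with a majority-form fallback for unseen meanings, and the bottleneck sample is unconstrained; all names are hypothetical.

import random

def train(observed):
    # The learner memorizes the observed meaning-form pairs (a toy stand-in
    # for the human training phase).
    return dict(observed)

def produce(lexicon, meanings):
    # Produce a form for every meaning; unseen meanings are filled in with
    # the learner's most frequent known form (a naive generalization bias).
    forms = list(lexicon.values())
    fallback = max(set(forms), key=forms.count)
    return {m: lexicon.get(m, fallback) for m in meanings}

def iterate(seed, generations=9, bottleneck=6):
    # Pass the system down a transmission chain, exposing each new learner
    # to only `bottleneck` of the meaning-form pairs.
    meanings = list(seed)
    system = dict(seed)
    for _ in range(generations):
        observed = random.sample(list(system.items()), bottleneck)
        system = produce(train(observed), meanings)
    return system

Because the fallback generalizes the majority form, repeated transmission through the bottleneck tends to collapse variation in this toy model, which is exactly the simplicity-driven degeneration pressure that the communicative task in Kirby et al. (2015) is meant to counteract.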
Our paper reports the results of two experiments, focusing on the differentiation and conservation models respectively, with the goal of demonstrating that functional heterography arises preferentially under a communicative need for disambiguation. All data and code are available from https://osf.io/7auw6/. To increase the transparency of our work, we created a preregistration at https://aspredicted.org/p8aw9.pdf. Note, however, that due to the more exploratory nature of this project, our preregistration did not specify strong confirmatory hypotheses or precise statistical models, focusing instead on the research question, experimental conditions, general predictions, primary measurement constructs, sample size, and exclusion criteria.

Fig. 1. Two models of heterography. (A) In the differentiation model, two meanings are, at time T1, expressed by a single phonetic form P1 and a single orthographic form O1; however, by time T2, two orthographic forms have emerged to differentiate the meanings in writing. (B) In the conservation model, the two distinct phonetic forms that existed at time T1 have become homophonous by time T2, but the two corresponding orthographic forms have been conserved, resulting in the same state of heterography as in the differentiation model. Adapted from Berg and Aronoff (2021, pp. 325-326) with permission.

Experiment 1

Our first experiment tests the ability of the differentiation model to explain the emergence of informative orthography. Can variant spellings become conditioned on meaning, such that the written form of the language diverges from the spoken form in a way that is expressive, despite the extra cost in learning? We had two main hypotheses:

1. Under pressure from learning alone, we expect to see the emergence of an increasingly transparent orthography.
2. Under additional pressure for disambiguation, we expect to see greater use of differentiated, non-transparent spellings.

Methods

Our methods follow the experimental iterated learning literature, as described above, with one main difference: The artificial language has both spoken and written forms that may diverge or converge over time. Broadly, participants are first asked to learn a simple alien language (consisting of words for colored shapes) and are then asked to reproduce what they learned in a test phase. The written form of the language may change over time, since the orthographic output of participant i becomes the input to participant i + 1 in a transmission chain design, but the spoken form of the language remains fixed and under experimenter control. To explore the hypotheses outlined above, we conducted the experiment under two different conditions: Transmission-only, in which the test phase emphasizes simple reproduction, and Transmission + Communication, in which the test phase encourages disambiguation.
Participants

We recruited 287 participants via the Prolific platform. Participants were paid £2.00 for participation plus additional bonuses of up to £1.08 as detailed below (median bonus: £0.74). The median completion time was 15 min with a median hourly rate of £8.05 (£10.94 including bonus). We limited recruitment to (self-declared) native English speakers, since it was important that participants would perceive the spoken forms in a relatively consistent way (particularly in the case of Experiment 2). Fourteen participants were excluded because they (or their communication partners) used English color words (8) or failed the auditory attention checks (6). A further three participants were lost to communication-game pairing failures. The final dataset comprises 270 participants: 90 in the Transmission-only condition (10 chains of 9 participants) and 180 in the Transmission + Communication condition (10 chains of 9 pairs of participants).

Stimuli

Participants were taught words for nine alien objects: three shapes (pentagon, star, torus) in three colors (pink, yellow, blue), as depicted in Fig. 2. The alien words had a spoken and written form composed of a stem and suffix. The stems, which always express the shape dimension, were /buvɪ/ 〈buvi〉 (the pentagon), /zεtɪ/ 〈zeti〉 (the star), and /wɒpɪ/ 〈wopi〉 (the torus). These stems were fixed and unchanging throughout both experiments reported in this paper and were designed to be easy to learn by being graphically and phonetically iconic of the shapes they represent (e.g., the round torus shape is represented by "round" sounds/letters). Throughout Experiment 1, the spoken form of the suffix was always pronounced /kəʊ/, but its spelling was free to change over time. Thus, the spoken form of the language consists of just three unique words (/buvɪkəʊ/, /zεtɪkəʊ/, and /wɒpɪkəʊ/) that mark only a shape distinction; however, the spelling of the suffix could potentially take on different forms to mark color.

Each of the 10 transmission chains was seeded with a randomly generated suffix spelling system, which was created by randomly mapping the following nine spellings onto the nine objects: 〈co〉, 〈coe〉, 〈coh〉, 〈ko〉, 〈koe〉, 〈koh〉, 〈qo〉, 〈qoe〉, 〈qoh〉. In other words, the /k/ sound may be spelled 〈c〉, 〈k〉, or 〈q〉 and the /əʊ/ sound may be spelled 〈o〉, 〈oe〉, or 〈oh〉, although the initial seed system contained no particular regularity. This procedure models an initial state of high spelling variation (every object has a unique suffix spelling), but these spellings may, over time, become transparent (the /k/ and/or /əʊ/ sounds take on consistent spellings) or expressive (spellings of /k/ and/or /əʊ/ become systematically associated with meaning). The spoken forms were synthesized using the Apple text-to-speech synthesizer (Tessa voice).
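For concreteness, the seed construction just described amounts to a random bijection between the nine objects and the nine candidate suffix spellings. A minimal Python sketch (variable names and data layout are our own, not taken from the study's code):

import itertools
import random

SHAPES = ["buvi", "zeti", "wopi"]   # pentagon, star, torus
COLORS = ["pink", "yellow", "blue"]
SUFFIXES = [c + v for c in "ckq" for v in ("o", "oe", "oh")]  # co, coe, ..., qoh

def make_seed():
    # Randomly map the nine suffix spellings onto the nine objects,
    # yielding an initial state of maximal spelling variation.
    objects = list(itertools.product(SHAPES, COLORS))
    spellings = random.sample(SUFFIXES, k=len(SUFFIXES))  # random permutation
    return dict(zip(objects, spellings))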
Transmission procedure

Participants were arranged into transmission chains such that the spellings produced by one participant would subsequently be taught to the next participant in the chain (see Fig. 3A). The first participant in a chain was taught the initial, randomly generated seed system, and this system was then free to evolve as it was transmitted to subsequent generations. Importantly, this process was subject to a bottleneck on transmission: Not all nine spellings were transmitted from one generation to the next; rather, the participant at generation i would observe only six of the nine spellings produced at generation i − 1 (at least one of each shape and at least one of each color). Nevertheless, participants were asked to produce a spelling for all nine objects, meaning that generalization was required for three unseen items. Transmission continued for nine generations in each of ten independent chains. In the Transmission + Communication condition (see Fig. 3B), each generation consisted of a pair of participants, but the productions of only one of the two (the primary participant; determined by whichever participant started the experiment first) were iterated to the next generation. The productions of the secondary participant were not iterated any further and were thus a cultural dead-end; the role of the secondary participant was to act as a genuine communicative partner for the primary participant, inducing pressure for the language to become informative.

Fig. 2. The nine object stimuli with their stems and suffixes. The spoken and written forms of the stems were fixed and unchanging throughout the experiment, as were the spoken forms of the suffixes, which were always homophonous; however, the written forms of the suffixes were free to evolve over time, potentially taking on differentiated forms to indicate color (e.g., 〈co〉, 〈ko〉, and 〈qo〉 to represent pink, yellow, and blue).

Training procedure

All participants were trained on the spoken and written forms through a combination of passive exposure trials and "mini-test" trials, lasting around 8 min. Participants were also told explicitly before starting the training session that the stems looked and sounded like the objects' shapes, allowing participants to focus on learning the suffixes during the training phase. In passive exposure trials, the alien objects were presented alongside the written and spoken forms in quick succession for 2 s each. In mini-test trials, which were interleaved among the passive exposure trials, the participant was asked to type the appropriate written form for an object and was given feedback on any errors (deleted characters shown in red strikethrough text and additions shown in bold green text). The participant received a 2p bonus for spelling the word correctly but had to submit their response within 20 s. Each of the six object-word pairs in the training set (i.e., the seen items that passed through the bottleneck on transmission) was passively exposed 18 times and mini-tested six times, resulting in a total of 108 passive exposure trials and 36 mini-test trials (the maximum bonus in training was therefore 72p). To check that participants were listening to the spoken forms, they were asked auditorily to click on the alien object at three random points during training; participants who did not follow this instruction were excluded. The instructions provided to participants are provided in Appendix A in the supplementary material.
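Returning to the transmission bottleneck described above (six of the nine spellings, with every shape and every color represented at least once): the paper does not specify the exact sampling algorithm, so the Python sketch below uses simple rejection sampling over systems keyed by (shape, color) pairs; it is an illustration, not the authors' implementation.

import random

def sample_bottleneck(system, k=6):
    # Draw k of the nine meaning-form pairs such that every shape and every
    # color is represented at least once (rejection sampling).
    items = list(system.items())
    while True:
        sample = random.sample(items, k)
        shapes = {shape for (shape, color), form in sample}
        colors = {color for (shape, color), form in sample}
        if len(shapes) == 3 and len(colors) == 3:
            return sample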
Test procedure in transmission-only

After training, participants assigned to the Transmission-only condition completed a test phase, alternating between production and comprehension trials. In production trials, the participant was shown an object and heard its associated pronunciation. The participant's task was to type the appropriate spelling. The input box was limited to eight lowercase Latin characters, and participants had to spell the stem correctly to continue to the next trial. Since participants heard the word pronounced aloud, typing the stem correctly should have been trivial, but in cases where the stem was initially spelled incorrectly, a popup message explicitly reminded the participant of the correct spelling of the stem. This restriction was imposed to prevent the stems from diverging from their spoken forms over time; however, no such restriction was imposed on the spelling of the suffix. Since the overall word length was restricted to eight characters and since all stems were four letters long, participants could use, at a minimum, a zero suffix and, at a maximum, a four-letter suffix. In comprehension trials, the participant was shown a word and had to click on the matching object from an array of all nine objects arranged in random order (in cases where multiple objects were described by the same wordform, any of the objects was a valid choice). In both types of test trial, the participant was awarded a bonus of 2p for each correct answer, but no explicit feedback was provided on the correctness of the signal or object selection. Each of the nine object-word pairs (i.e., including unseen items) was tested once in production and once in comprehension, resulting in 18 trials (the maximum bonus in test was therefore 36p).

Test procedure in transmission + communication

Participants assigned to the communicative condition completed a live, over-the-internet communication game with another participant. Both participants received training on the same orthographic system inherited from the previous generation. The communication game, which shares similarities with Kirby et al. (2015), closely mirrored the overall structure of the test administered to participants in the non-communicative condition described above, with the production and comprehension trials becoming the two sides of a single communicative interaction. On a given trial, one participant (the director) would complete a production trial under the same input restrictions described above (i.e., produce a form for a target meaning), and the word they used was relayed to the other participant (the matcher), who would then complete a comprehension trial in response to that word (i.e., pick an object from the matcher array).
Fig. 3. (A) Transmission-only procedure. Each generation consists of a single participant, who first receives training on six of the nine suffixes and then produces suffixes for all nine items. These productions are then used as the training material for the next generation in the chain. The initial system of suffixes is randomly generated with high spelling variation, but by the ninth generation, the system is expected to become transparent. (B) Transmission + Communication procedure. Each generation now consists of two participants who engage in a communicative task. Both participants receive training on the same system from the previous generation. Under a communicative pressure, the suffix spellings are expected to become expressive of color, despite the spoken forms being homophonous.

The two participants then switched roles, resulting in the same overall trial structure as the Transmission-only condition (i.e., alternation between production and comprehension trials).

The framing and goal of the communication game was, however, quite different from the non-communicative test. In communication, the shared goal of the director and matcher was to have a successful communicative interaction, not necessarily to reproduce what they had learned in training. Both participants received the 2p bonus each time an interaction was successful; that is, the reward structure is not based on using the "correct" forms taught in training but based on accurately conveying meaning. The director is thus incentivized to produce a wordform that is unambiguous, and the matcher is incentivized to carefully interpret what the director has attempted to convey. The second important difference from the non-communicative test was that participants received rich feedback on the interaction: The director saw which object the matcher clicked on, and the matcher saw which object was the correct target. As such, we view feedback as an intrinsic part of communicative interaction; thus, in the non-communicative test described above, no feedback was provided, as is the case in similar studies (Carr et al., 2017; Motamedi, Schouwstra, Smith, Culbertson, & Kirby, 2019; Saldana, Kirby, Truswell, & Smith, 2019; Silvey, Kirby, & Smith, 2019).

Overall, the communication game is identical to the non-communicative test in terms of the task to be performed (nine productions and nine comprehensions), but the goal is quite different. In the non-communicative test, the goal and reward structure are based on accurately reproducing the orthography learned during training, whereas in the communication game, the goal and reward structure are based on successfully communicating a target object.

Results

The results from all ten chains (labeled A-J) in the Transmission-only condition are shown in Fig. 4. Each 3×3 matrix represents the suffix spelling system in use at a particular generation, with shape represented along the rows and color represented along the columns, following the same 3×3 layout used in Fig. 2. The color-coding of these matrices indicates similarity in suffix form: Similar colors are used to represent similar suffixes, making it easier to see how the suffixes pattern with meaning.
For example, the system at Generation 9 in Chain D uses three spellings (〈koe〉, 〈ko〉, and 〈co〉) to express the shape dimension, yielding a horizontal stripes pattern in the matrix representation. We describe such a system as "redundant" because shape was consistently and reliably expressed by the stem, so the suffix spelling system that emerged in this case conveys no additional information: the suffix simply repeats whichever shape was marked by the stem. Redundant suffix systems are characteristic of the Transmission-only condition, with similar outcomes occurring in Chains C, E, F, and H. We also saw the emergence of fully transparent suffix spelling systems in Chains A, B, I, and J. Chain I, for example, ultimately uses a single spelling, 〈coe〉, to represent the /kəʊ/ sound; that is, the written suffix forms make no distinction between shapes or colors, just like the spoken language. Chain G did not settle on a clear pattern, using 〈co〉, 〈coe〉, and 〈coh〉 somewhat interchangeably in the final generation, although there were some signs of a color-expressive system emerging in, for example, Generation 6, where yellow is consistently spelled 〈coh〉, blue is consistently spelled 〈coe〉, and pink uses both 〈co〉 and 〈coe〉. The only other signs of color-expressive systems are H1, which was rapidly converted into a redundant system in subsequent generations, and J2, which ultimately degenerated toward transparency.

The results for the Transmission + Communication condition (Chains K-T) are shown in Fig. 5. Like Transmission-only, degeneration to a single spelling by the ninth generation is a relatively common outcome (e.g., Chains N, R, and S, and to a lesser extent Chains L, M, and O). In contrast, redundant, shape-expressive systems (as indicated by horizontal stripes) are relatively rare (e.g., K8, M6, and Q1-6). Instead, the presence of the communicative task appears to favor color-expressive systems, although these are far from common and often unstable. In particular, there are two main kinds of color-expressive system that emerge. The first are the generalization-based expressive systems: K5, M1, P5, and S2. In these cases, the participant tended to generalize their input in a way that is consistent with expressing color over shape. For example, in K5, pink was consistently spelled 〈ko〉 and blue was consistently spelled 〈co〉. In the case of M1, color is expressed by the final vowel letters (〈oe〉 for pink, 〈o〉 for yellow, and 〈oh〉 for blue) with the spelling of the /k/ sound conditioned on the stem (〈buvic-〉, 〈zetik-〉, and 〈wopiq-〉). We note, however, that in all these cases the expressive system was not reciprocated by the participant's partner (resulting in low communicative accuracy) and not sustained or elaborated on in subsequent generations.

The second type of color-expressive system is one that simply appropriates the expressive power of English to differentiate color: K9, M7, Q7, and to a lesser extent L7 and O4. In M7, for example, the participant added 〈r〉, 〈g〉, and 〈b〉 (presumably red, gold, blue) to the ends of the words, although the participant's partner only reciprocated the 〈g〉 spelling and seemingly failed to understand what was meant by 〈r〉 and 〈b〉. In the case of Q7, the pair of participants added 〈r〉, 〈o〉, and 〈b〉 (presumably red, orange, blue) to communicate with high accuracy, but, although this system was retained into Generation 8, it started to disintegrate by Generation 9.
Fig. 4. Results from the Transmission-only condition in Experiment 1 (differentiation). Each matrix shows the suffix spelling system in use at a particular generation (shape on the rows, color on the columns, as in Fig. 2). Chains are labeled A-J and generations are labeled 0-9 (0 is the randomly generated seed system). Each chain uses an independent color palette, with each color representing a particular suffix spelling; similar colors indicate similar spellings. Spellings in bold-italic are the generalizations on unseen items. By the ninth generation, most systems are degenerate (e.g., Chains A, B, and I), redundant (e.g., Chains D, E, and H), or a mixture of the two (e.g., Chain F).

Overall, in the five cases where English color letters were used, the systems did not really catch on, perhaps because they retained spelling redundancy from the previous generation, resulting in unnecessarily complex suffix spellings that were too difficult to learn for subsequent generations. Q7, for example, uses 〈-coe-〉, 〈-co-〉, 〈-qo-〉 for shape plus 〈-r〉, 〈-o〉, 〈-b〉 for color. In addition to this handful of cases, there were a further four pairs of participants who used full English words as the suffix (typically 〈-pink〉 or 〈-red〉, 〈-oran〉 or 〈-yell〉, and 〈-blue〉), but these pairs were excluded and replaced before iteration to the next generation. We mention these cases, however, because they are still examples of a conscious effort to differentiate, something that never happened in the Transmission-only condition.

To analyze more formally what types of system tend to emerge under different conditions, we first designated four suffix systems that are of particular interest: holistic, expressive, redundant, and degenerate. These four typological categories may be positioned along the simplicity-informativeness continuum in terms of the number of suffix forms they make use of and how these forms are conditioned on meaning, as illustrated in Fig. 6. The holistic7 system has nine unique forms, each of which expresses a particular color-shape combination. This is complex to learn, but the suffix alone can pick out exactly one meaning. The expressive and redundant systems have three unique forms that express either color (expressive) or shape (redundant). These systems are easier to learn, but only the expressive system is fully informative (when acting in combination with the stem). The degenerate system uses just one suffix form. This makes it trivial to learn, but entirely uninformative. To classify a participant's output into one of these four typological categories, we computed which of the reference systems was most similar in terms of its information content. To do this, we formalized the systems as set partitions (i.e., a partitioning of the universe of meanings into disjoint subsets) with variation of information (Meilă, 2007) defined as the distance metric between any two such partitions.8 A given participant's output system is then classified into one of the four typological categories based on whichever reference system is closest.

The typological distributions are plotted in Fig. 7A, revealing the proportion of the 10 chains that fall into each typological category at each generation. In Transmission-only, the holistic systems used to initialize the chains are rapidly replaced by redundant systems by Generation 2.
The dominance of the redundant category is then gradually eroded as the chains transition to degeneracy. In Transmission + Communication, there is initially a fairly even mix of all four kinds of system, but by Generation 9, degenerate systems tend to be most common. There is also a notable increase in holistic systems emerging in later generations; these are the cases of compositional suffixes that arose through the addition of English color letters on top of redundant suffix spellings (i.e., K9, M1, M7, and Q7). Although there was not much evidence of expressive systems emerging in either condition, it is interesting to note that the communicative condition did seem to disfavor the inexpressive redundant systems.

Fig. 5. Results from the Transmission + Communication condition in Experiment 1 (differentiation). Each matrix shows the suffix spelling system in use at a particular generation (shape on the rows, color on the columns, as in Fig. 2). Chains are labeled K-T and generations are labeled 0-9 (0 is the randomly generated seed system). Each chain uses an independent color palette, with each color representing a particular suffix spelling; similar colors indicate similar spellings. Spellings in bold-italic are the generalizations on unseen items. There are some isolated examples of differentiation through implicit generalization (K5, M1, P5, S2) and explicit innovation (K9, L7, M7, O4, Q7).

Fig. 6. Four primary systems of interest (or typological categories) arranged along the cost/complexity continuum. A holistic system uses a unique suffix spelling for each shape-color combination. An expressive system only expresses color. A redundant system only expresses shape (which is already conveyed by the stem). A degenerate system expresses nothing.

7 These categories are focused on the suffix level and not the word level. By holistic, we only mean that each shape-color combination has a unique suffix form. We do not distinguish between truly holistic suffixes (nine unique suffixes with no structure in how they relate to each other) and compositional suffixes (nine unique suffixes that can be generated from compositional rules, as was the case in, for example, Q8).

8 Variation of information is a proper metric on set partitions, measuring the amount of information (in bits) that is lost and gained in the transformation of one partition into another. Under this metric, the holistic and degenerate systems are considered very dissimilar because they carry very different levels of information content. The expressive and redundant systems are also considered quite dissimilar because, although they carry the same amount of information, the information they carry is orthogonal (shape in the case of redundant, and color in the case of expressive).
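The classification procedure described in footnote 8 is straightforward to operationalize. The Python sketch below is our own illustration: it converts a form-meaning system into a partition of the meanings and computes variation of information in bits; the `references` argument is a hypothetical dictionary mapping each category name (holistic, expressive, redundant, degenerate) to the partition implied by Fig. 6.

from math import log2

def to_partition(system):
    # Group meanings into blocks by the form used to label them.
    blocks = {}
    for meaning, form in system.items():
        blocks.setdefault(form, set()).add(meaning)
    return list(blocks.values())

def variation_of_information(px, py):
    # VI between two partitions of the same universe, in bits:
    # VI = -sum_ij r_ij * (log(r_ij / p_i) + log(r_ij / q_j)).
    n = sum(len(block) for block in px)
    vi = 0.0
    for a in px:
        for b in py:
            r = len(a & b) / n
            if r > 0:
                vi -= r * (log2(r / (len(a) / n)) + log2(r / (len(b) / n)))
    return vi

def classify(system, references):
    # Assign the output system to the nearest reference category.
    p = to_partition(system)
    return min(references, key=lambda name: variation_of_information(p, references[name]))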
To analyze how informativeness changes over time and how this compares between the two conditions, we reduced the typological classifications into two broader categories: informative systems (i.e., holistic or expressive) and uninformative systems (i.e., redundant or degenerate). We then fit a Bayesian mixed-effects logistic regression model that predicts whether or not a system is informative as a function of generation with by-chain random slopes and intercepts, which is the standard model structure used to analyze iterated learning experiments (Winter & Wieling, 2016). Our Bayesian approach produces posterior estimates of two key parameters: α, which represents the intercept of the regression model (i.e., the model estimate of the dependent variable at Generation 0), and β, which represents its slope (i.e., the model estimate of how much the dependent variable changes per generation). To determine if there is a statistical difference between conditions, we compute the difference in slopes, Δ(β) = β_comm − β_trans, and check if this posterior difference satisfactorily rejects zero. In other words, we test to see whether the dependent variable is changing over time more rapidly in one condition compared to the other. Here we follow the convention that the 95% highest density interval (HDI; the narrowest interval that contains 95% of the posterior probability mass) should not include zero. We emphasize, however, that the posterior is a complete description of the evidence (given the data and model assumptions) and does not strictly need to be reduced to a binary yes/no decision. The results are shown in Fig. 7B. In both conditions, there is a decrease in informativeness over time (the slopes are negative), but in the Transmission + Communication condition, the slope is shallower, suggesting that informativeness decreases more slowly in the presence of communicative pressure. The difference in slopes, Δ(β) = 4.53 (95% HDI: 1.48, 7.88), clearly rejects zero, pointing to a meaningful difference between conditions.
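Given posterior draws of the two slopes, the HDI convention just described reduces to a short computation. The following Python sketch is our own illustration (it assumes NumPy arrays of posterior samples and uses randomly generated stand-in draws; it is not the authors' analysis code):

import numpy as np

def hdi(samples, mass=0.95):
    # Narrowest interval containing `mass` of the posterior samples.
    x = np.sort(np.asarray(samples))
    m = int(np.ceil(mass * len(x)))          # samples per candidate interval
    widths = x[m - 1:] - x[:len(x) - m + 1]  # width of each candidate interval
    i = int(np.argmin(widths))
    return float(x[i]), float(x[i + m - 1])

rng = np.random.default_rng(0)
delta = rng.normal(4.5, 1.6, size=4000)  # stand-in for posterior draws of delta(beta)
lo, hi = hdi(delta)
rejects_zero = not (lo <= 0.0 <= hi)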
One issue with the above approach is that collapsing the systems into binary categories (informative vs. uninformative) results in a loss of information about how informative the systems are. In addition, there was evidence to suggest that the model was a suboptimal description of the data, since there was also a difference in the α estimates (the intercepts; Table B1 in the supplementary material), which should theoretically be the same (i.e., there should be no difference between conditions at Generation 0). We address these limitations with a second measure of informativeness, communicative cost (Kemp et al., 2018; Kemp & Regier, 2012; Regier et al., 2015), which has previously been used in similar experimental studies (Carr, Smith, Culbertson, & Kirby, 2020; Carstensen, Xu, Smith, & Regier, 2015; Smith, Frank, Rolando, Kirby, & Loy, 2020) and was proposed in our preregistration as the primary measure of informativeness. Communicative cost is an information-theoretic measure that expresses how much information will be lost, on average, when a speaker/writer attempts to convey a meaning to a listener/reader using some shared signaling system. If the system contains no ambiguities (all meanings are expressed by unique signals), communicative cost will be zero bits; that is, zero information will be lost during each attempt to communicate using that system. Communicative cost will take some larger value if the system contains ambiguity. It is given by

∑_{m ∈ U} Pr(m) (−log Pr(m | s_m)),

where U is the universe of meanings that may be expressed, Pr(m) is the probability that a particular meaning would need to be expressed, and Pr(m | s_m) is the probability that a reader would infer meaning m given that a writer produced signal s for meaning m. In our case, U is the set of nine alien objects, Pr(m) is set to 1/|U| (all objects need to be talked about with equal probability), and Pr(m | s_m) is given by 1/|M_s|, where M_s is the set of meanings labeled s according to the system. The results are shown in Fig. 7C. We used the same mixed-effects linear regression model described above (except that the likelihood is now Gaussian). In line with the previous analysis, cost increases with generation in both conditions but increases more slowly under communicative pressure. However, the support for a difference between conditions was weaker under this more nuanced measure, Δ(β) = −0.17 (95% HDI: −0.36, 0.02); although we were not able to reject zero at the 95% level, we were able to reject it at the 92% level (92% HDI: −0.35, −0.01).
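Under the uniform need probabilities and the definition of Pr(m | s_m) given above, communicative cost reduces to an average of log ambiguities. A Python sketch (our own illustration, with base-2 logarithms so that cost is measured in bits):

from math import log2

def communicative_cost(system):
    # Expected information lost per communicative event, assuming uniform
    # need probabilities Pr(m) = 1/|U| and Pr(m | s_m) = 1/|M_s|, where M_s
    # is the set of meanings that share meaning m's signal.
    meanings = list(system)
    n = len(meanings)
    cost = 0.0
    for m in meanings:
        ambiguity = sum(1 for m2 in meanings if system[m2] == system[m])
        cost += (1.0 / n) * -log2(1.0 / ambiguity)
    return cost

As a sanity check on the function itself: a system mapping all nine meanings onto a single signal costs log2(9) ≈ 3.17 bits per event, whereas any one-to-one system costs zero.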
Aside from the informativeness of the orthographic systems, we also predicted in our preregistration that the systems would become easier to learn over time in both conditions, albeit for slightly different reasons. In Transmission-only, the orthographic system is expected to become increasingly learnable as it degenerates into a single, transparent suffix form. In Transmission + Communication, the system is expected to become more learnable as the unsystematic, holistic systems transform into other, easier-to-learn systems (notably expressive systems, although, as noted already, expressive systems rarely emerged). Following prior work, we operationalized learnability as transmission error: the amount of error that the participant at Generation i made in reproducing the orthographic system that existed at Generation i − 1; transmission error is defined as the mean Levenshtein edit distance between the corresponding orthographic forms in consecutive generations (see e.g., Kirby et al., 2008). These results are plotted in Fig. 7D. In both conditions, the estimates of the β parameters are negative and clearly reject zero, suggesting that the systems do indeed become increasingly learnable over time as hypothesized. Our analysis also suggested that there was little difference between the conditions in terms of how rapidly transmission error decreases over time (i.e., Δ(β) is highly compatible with zero), although we did note that transmission error tended to be a little higher in the communicative condition. This is to be expected because the goal in the Transmission-only condition is to reproduce the forms taught in training, whereas the goal in the communicative condition is to devise a system that permits accurate communication, which necessitates greater change to the system taught in training.

Summary

Our first experiment tested whether informative, heterographic orthography could emerge through spelling differentiation and whether it would emerge preferentially under communicative pressure. Although there was evidence to suggest that the orthographic systems remain informative for longer under communicative pressure, both conditions ultimately converged on degenerate, uninformative systems and there was little evidence of systematic differentiation in spelling. Other than resorting to English, such differentiation could have been achieved in a number of ways, most obviously through the conditioning of spelling variation on meaning (e.g., 〈ko〉 for pink, 〈co〉 for yellow, and 〈qo〉 for blue), but also through less obvious strategies such as the use of length (e.g., 〈ko〉 for pink, 〈kko〉 for yellow, and 〈kkko〉 for blue) or the use of arbitrary silent letters (e.g., 〈kox〉 for pink, 〈kof〉 for yellow, and 〈kom〉 for blue). Such differentiation was not forthcoming, however, even under communicative pressure. Instead, if spelling variation was conditioned on anything, it was conditioned on shape, resulting in redundant suffix spellings. This result is in stark contrast to most prior experimental iterated learning studies, in which informative, compositional systems do typically emerge, especially under communicative pressure (e.g., Kirby et al., 2015).

So, what was different about the present experiment compared to the large body of prior experimental iterated learning studies? The primary difference was the presence of a spoken language that is decidedly not informative about one of the dimensions. Indeed, the point of our experiment is to see whether orthography can resist phonology under sufficient pressure for informativeness. The presence of homophonous suffix forms acts as a cue to participants that the language itself does not mark color and that, therefore, the orthography should also not mark color. In support of this explanation, we conducted an additional experiment during review, which showed that when homophony is removed, the systems tend to resist degeneration, in line with prior work. We discuss this experiment in more detail in the Discussion section and in Appendix C of the supplementary material.
Faced with orthographic variation that could be conditioned on either dimension, learners appear to rule out the possibility that it might be conditioned on color, since such a hypothesis would be in conflict with the spoken language. As a result, any variation in spelling comes to be associated with particular stems, resulting in the emergence of redundant suffix spellings that serve no real purpose. Similar outcomes have been noted before in the context of artificial language learning experiments (Smith & Wonnacott, 2010), and a rough analog can be found in English in the spelling of /-ʃən/ (〈cian〉, 〈cion〉, 〈sion〉, 〈ssion〉, or 〈tion〉), which is conditioned on the stem (e.g., magician, suspicion, expulsion, transmission, and station) following a complex set of rules (Carney, 1994, pp. 420-421). Interestingly, however, these redundant systems were relatively uncommon under communicative pressure, suggesting that communicating participants recognized the futility of using the suffix to mark shape. Overall, although the orthographic systems tended to remain slightly more informative under communicative pressure, the emergent orthographies ultimately preferred to transparently encode sound rather than meaning. This finding seems to align with our general experience of the world: If someone decided to start using the spelling 〈banque〉 to differentiate the financial institution from river banks, would anyone take that spelling seriously or even understand the intention? Without top-down diktat, it is hard to get spelling differentiation off the ground in the written modality.

Experiment 2

We now turn our attention to the conservation model of heterographic homophones: Given that an informative system already exists (both in speech and in writing), does that informative system persist in writing even after the spoken language has degenerated into homophony? And, importantly, does this happen preferentially in the presence of communicative pressure? Our hypotheses were as follows:

1. Under pressure from learning alone, we expect to find that orthography will track the spoken form of the language, becoming increasingly degenerate as homophony increases.
2. Under additional pressure for disambiguation, we expect the orthography to conserve archaic (but informative) spelling distinctions even after these distinctions cease to exist in the spoken form of the language.

Methods

The methods were identical to Experiment 1 with two exceptions: The artificial language used to seed each chain started out fully compositional (in both its spoken and written forms), and two sound mergers were artificially induced during cultural transmission, resulting in the spoken forms of the suffixes becoming increasingly homophonous and uninformative over time. This was designed to model the historical processes of sound change and conservation described in the Introduction.
Participants

The experiment was completed by 297 native English speakers recruited through Prolific. The payment and bonusing scheme was identical to Experiment 1 (median bonus: £0.78). The median completion time was 15 min with a median hourly rate of £7.99 (£10.68 including bonus). Seventeen participants were excluded because they (or their partners) failed the auditory attention checks (15) or used English color words (2). A further 10 participants were lost to communication-game pairing failures. Like Experiment 1, the final dataset comprises 270 participants: 90 in the Transmission-only condition (10 chains of 9 participants) and 180 in the Transmission + Communication condition (10 chains of 9 pairs of participants).

Stimuli

The alien objects and word stems were identical to Experiment 1. Unlike Experiment 1, however, the transmission chains were seeded with a fully compositional language that used three distinct suffixes to systematically express each of the colors. A separate set of suffixes was created for each chain by concatenating a randomly drawn consonant from {/f/, /s/, /ʃ/} and a randomly drawn vowel from {/ə/, /εɪ/, /əʊ/}, both without replacement. For example, one chain might use the suffixes /fəʊ/, /ʃə/, /sεɪ/ to represent the colors pink, yellow, and blue, while another chain might use /sə/, /fεɪ/, /ʃəʊ/ for those colors. The initial orthographic system was transparent and based on the following phoneme-grapheme mapping: {/f/→〈f〉, /s/→〈s〉, /ʃ/→〈x〉, /ə/→〈a〉, /εɪ/→〈ei〉, /əʊ/→〈oe〉}. The suffixes were designed to be distinctive (and therefore easy to memorize and associate with colors), but also similar enough to (mostly) allow for somewhat plausible sound mergers and result in somewhat plausible spellings following sound merger (e.g., it is plausible that /f/ might supplant /s/ or /ʃ/ in speech or that /ʃ/ might be spelled 〈s〉 or 〈x〉 in writing). We attempted to achieve this balance by combining consonants that are very similar with vowels that are very dissimilar, while also avoiding reuse of any sounds present in the stems. The spoken forms were synthesized using the Apple text-to-speech synthesizer (Moira voice).

Sound change

Each transmission chain was run for nine generations, which were divided into three epochs. During Epoch I (Generations 1 to 3), the three spoken suffixes were distinct (as described above), allowing the spoken language to express all three colors without ambiguity. During Epoch II (Generations 4 to 6), the spoken language had two distinct suffix forms, reducing its informativeness. During Epoch III (Generations 7 to 9), all three spoken suffixes were homophonous, just as in Experiment 1, making the spoken language entirely uninformative about color. This was achieved through two sets of sound changes, the first occurring in the transition from Epoch I to II and the second occurring in the transition from Epoch II to III. An example is illustrated in Fig. 8. In the first sound change, two of the spoken suffix forms were chosen at random and the consonant from one (chosen at random) was paired with the vowel from the other, resulting in a new suffix form that replaced the original two. In the second sound change, the remaining two suffix forms were merged in the same way, resulting in full homophony.
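The two merger events can be simulated directly. In this Python sketch (our own illustration, with IPA symbols replaced by ASCII stand-ins), a suffix is a (consonant, vowel) pair and each merger collapses two randomly chosen forms into one:

import random

def merge_two(suffixes):
    # One merger event: pick two suffix forms at random, pair the consonant
    # of one (chosen at random) with the vowel of the other, and replace
    # both originals with the merged form.
    a, b = random.sample(sorted(suffixes), 2)
    c_donor, v_donor = random.sample([a, b], 2)
    merged = (c_donor[0], v_donor[1])
    return (set(suffixes) - {a, b}) | {merged}

# ASCII stand-ins for the consonants /f, s, sh/ and the three vowels.
epoch1 = {("f", "oe"), ("s", "ei"), ("sh", "a")}
epoch2 = merge_two(epoch1)  # two forms collapse into one (Epoch II)
epoch3 = merge_two(epoch2)  # full homophony (Epoch III)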
Crucially, the spellings did not automatically change following a sound change event; rather, the orthographic system was free to adapt (or not) in response to the sound changes. Note also that individual participants did not directly experience the sound changes; a Generation 4 participant, for example, would always hear Epoch II sounds, while observing spellings produced by a Generation 3 participant (presumably representing the Epoch I sounds). In reality, of course, sound change is more gradual, with individual speakers experiencing both outgoing and incoming spoken forms within their lifetimes.

Fig. 8. Examples of the spoken suffixes in Experiment 2. During Epoch I, color is represented by three distinct spoken suffixes. During Epoch II, two of the suffixes are homophonous, reducing the informativeness of the spoken language. During Epoch III, the spoken form of the language makes no color distinction.

Results

The results for all ten chains (labeled A-J) in the Transmission-only condition are shown in Fig. 9. It is immediately clear that color-expressive suffixes (as indicated by vertical stripes) are maintained fairly reliably through the first epoch; perfectly in the case of Chains A, B, D, F, and I, and with some errors in the other five chains. Some of these errors are very minor, such as the use of 〈sha〉 instead of 〈xa〉 for one item in H3, while others are more catastrophic, such as the early loss of the 〈sei〉/〈xa〉 distinction in Chain J. Generation 4 represents the first real test of the orthographic systems in the face of sound change, and, in most cases, the Generation 3 systems are preserved quite faithfully (Chains A, B, D, E, H, and I), but by the end of the epoch (Generation 6), many have degenerated into redundant systems that encode shape (Chains B, G, and J) or transparent systems that mirror the Epoch II pattern of homophony (Chains C, E, F, and I). These processes continue into Epoch III and by the ninth generation, all systems have become degenerate, redundant, or some mixture of the two. The one exception is Chain D, whose original spellings were conserved perfectly through to the final generation with only one generalization error in Generation 8, which was quickly reverted in Generation 9.
Results

The results for all ten chains (labeled A-J) in the Transmission-only condition are shown in Fig. 9. It is immediately clear that color-expressive suffixes (as indicated by vertical stripes) are maintained fairly reliably through the first epoch; perfectly in the case of Chains A, B, D, F, and I, and with some errors in the other five chains. Some of these errors are very minor, such as the use of 〈sha〉 instead of 〈xa〉 for one item in H3, while others are more catastrophic, such as the early loss of the 〈sei〉/〈xa〉 distinction in Chain J. Generation 4 represents the first real test of the orthographic systems in the face of sound change, and, in most cases, the Generation 3 systems are preserved quite faithfully (Chains A, B, D, E, H, and I), but by the end of the epoch (Generation 6), many have degenerated into redundant systems that encode shape (Chains B, G, and J) or transparent systems that mirror the Epoch II pattern of homophony (Chains C, E, F, and I). These processes continue into Epoch III, and by the ninth generation, all systems have become degenerate, redundant, or some mixture of the two. The one exception is Chain D, whose original spellings were conserved perfectly through to the final generation with only one generalization error in Generation 8, which was quickly reverted in Generation 9.

Like Experiment 1, redundant systems are characteristic of the Transmission-only condition, especially in Epoch III. As the spoken suffixes become more homophonous, the variant spellings are increasingly conditioned on shape rather than color, perhaps because the spoken language signals to learners that the language does not mark color, so they rationalize the system as three words with idiosyncratic spellings. Interestingly, however, all chains exhibited conservation of spelling form, even if the way in which spelling was conditioned on meaning was lost. For example, Chain A ultimately represents the sound /fə/ with the spellings 〈foe〉 and 〈xa〉, spellings that are internally inconsistent and contrary to standard uses of the Latin alphabet, but which trace their origins back to the original seed orthography. Overall, then, the Transmission-only condition in Experiment 2 is characterized by the conservation of spelling form without conserving how form patterns with meaning.

Fig. 9. Results from the Transmission-only condition in Experiment 2 (conservation). Each matrix shows the suffix spelling system in use at a particular generation (shape on the rows, color on the columns, as in Fig. 2). Chains are labeled A-J and generations are labeled 0-9 (0 is the randomly generated seed system). Each chain uses an independent color palette, with each color representing a particular suffix spelling; similar colors indicate similar spellings. Spellings in bold-italic are the generalizations on unseen items. The final systems are characterized by the conservation of form without the conservation of expressivity.

The results for the Transmission + Communication condition (Chains K-T) are shown in Fig. 10. Like Transmission-only, the seed systems are mostly maintained faithfully through Epoch I; perfectly in the case of Chains K, L, N, O, and T, and with some errors in the other five chains, although some of these errors are non-destructive changes, such as 〈x〉 being replaced with 〈sh〉 in R1. Several systems were then maintained through Epoch II, notably Chains O, P, R, and T, and, in one case, through to the end of Epoch III (Chain R, albeit with the original 〈xei〉 spelling replaced with 〈shei〉). The Chain O system was preserved up to Generation 7, Chain P was maintained up to Generation 7, and almost to Generation 9, with two modifications (〈sha〉 instead of 〈xa〉 and 〈fa〉 instead of 〈fei〉), and Chain T was conserved faithfully up to Generation 8. The final form of Chain Q was also fully expressive, albeit through a combination of both conservation and differentiation: The 〈fei〉 form was conserved from the seed orthography, the 〈oxie〉 spelling appears to derive from a misremembering of 〈xoe〉 (partial conservation), and the 〈fe〉 spelling, which began as a typo introduced in Q6, seems to have been generalized across the blue items in Q7, perhaps to differentiate them from the yellow items (indeed, the participant's partner made the same generalization). A similar case of differentiation might also have occurred in L9, where the 〈sol〉 spelling (originally a typo on 〈so〉) was generalized across the yellow items, resulting in a semi-expressive system (although the participant's partner generalized the 〈sol〉 spelling across shape).
Unlike Experiment 1, no participant pairs attempted to use English color letters, and there was only one case of a pair using English color words (Generation 9 of Chain K), although this pair was excluded and replaced (this generation was the only case in which the training input was fully degenerate, which would have resulted in a strong pressure to find a communicative solution in the form of English). Presumably, the general conservation of expressive spelling in Experiment 2 negated the need to innovate novel systems.

Our quantitative analysis of Experiment 2 is identical to that of Experiment 1 with a slight change to the statistical model. Rather than predict the outcome variables as a function of generation, we now predict the outcome variables as a function of the epoch number (1, 2, or 3) and the generation number within the epoch (1, 2, or 3). This yields two slope parameters: β, which represents the effect of epoch, and γ, which represents the additional effect of generational turnover. This model is more appropriate to the Experiment 2 setup, where the experimentally induced homophony results in discontinuities from one epoch to the next, and it allows us to separate out the effect of the homophony pressure from the more general effect of generational turnover.

Fig. 11A plots the typological distributions by generation and condition. Initially, all systems are expressive, but the dominance of this category is gradually eroded over time, particularly during the second and third epochs once the spoken forms had become homophonous. Notably, however, the loss of expressive systems appears to be slower in Transmission + Communication, and redundant systems were also less popular under communicative pressure. As in Experiment 1, we further collapsed the typological categories into two broader categories (informative vs. uninformative) to analyze the trends over time. The results, shown in Fig. 11B, show that the probability of a system being informative drops over time in both conditions (primarily as a function of epoch), but does so more slowly in the communicative condition. Although there was some weak evidence of a difference in epoch slopes (Δ(β) = 2.65; 95% HDI: −0.58, 5.95), we could not conclusively reject zero under this first measure of informativeness.
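To make the epoch/within-epoch model introduced above concrete, the following sketch shows how the nine generations decompose into the two predictors. This is our own illustration with hypothetical names; the paper's actual model is a Bayesian regression whose full specification is not reproduced here.

```python
def epoch_predictors(generation: int) -> tuple:
    """Map a generation number (1-9) to (epoch, generation-within-epoch),
    each taking values 1, 2, or 3."""
    epoch = (generation - 1) // 3 + 1
    within = (generation - 1) % 3 + 1
    return epoch, within

# generations 1..9 -> (1,1) (1,2) (1,3) (2,1) (2,2) (2,3) (3,1) (3,2) (3,3)
for g in range(1, 10):
    print(g, epoch_predictors(g))

# The linear predictor then has the form
#   outcome ~ alpha + beta * epoch + gamma * within
# so beta captures jumps at epoch boundaries (the induced homophony) and
# gamma captures the additional effect of generational turnover.
```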
The results in terms of the preregistered measure of informativeness, communicative cost, are presented in Fig. 11C. Here we find a nonzero effect of both epoch (β) and generation (γ) on cost, as well as a difference between conditions in terms of epoch: Δ(β) = −0.2 (95% HDI: −0.4, −0.003). Like the previous measure, the γ slopes were in close alignment, so Δ(γ) is highly compatible with zero difference. This suggests that the effect of generational turnover is very similar between conditions and that the difference between conditions is mostly driven by the increases in homophony induced in each epoch. The overall result is that, in Transmission + Communication, the increase in cost is linear across the nine generations, whereas in the Transmission-only condition, the increase in cost follows something more akin to a step function, with sudden increases in cost in response to each additional bout of homophony. In other words, in the Transmission-only condition, the orthographic systems respond rapidly to the changing spoken forms, while in the Transmission + Communication condition, the orthographic systems are more resistant to the homophony.

For completeness, Fig. 11D also plots transmission error; however, we did not hypothesize any particular differences in learnability in Experiment 2, either over time or by condition. The expressive orthographic systems used to initialize the chains start out very easy to learn, and learnability remains fairly consistent throughout the experiment in both conditions, albeit with some constant level of change over time as the systems gradually come into alignment with the spoken forms.

Summary

Experiment 1 asked whether an informative, heterographic orthography may be created de novo under pressure from homophony. Experiment 2, by contrast, asked whether an informative, heterographic orthography can simply be maintained, even under the same levels of homophony encountered in Experiment 1. In the Transmission-only condition, only one chain (Chain D) remained expressive into the fully homophonous Epoch III, while in Transmission + Communication, five chains (O, P, Q, R, and T) remained expressive, albeit not necessarily all the way to Generation 9. The fact that expressive spellings persisted longer and across more chains under communicative pressure suggests that an informative orthography, despite running contrary to the spoken language, may be maintained when it serves a useful purpose. That being said, the fact that informativeness could, in principle, be maintained without communicative pressure (most notably in Chain D) suggests that a strong communicative pressure is not a strictly necessary condition for conservation: Learning alone can, to a limited extent, maintain informative heterography.
Many of the chains did, however, eschew informativeness entirely in favor of greater transparency, and the inevitable long-term consequence for all chains appears to be degeneracy. This is to be expected under Transmission-only, where the systems are adapting under learnability pressure, but is somewhat surprising in Transmission + Communication. Our findings ultimately suggest that, in the long term, alphabetic orthographic systems might favor the faithful encoding of speech over the useful encoding of meaning, although there may exist brief windows of time during which informative heterography can resist the spoken language. Interestingly, although participants were resistant to encoding into writing something that is not encoded in speech, they were, at the same time, content to conserve spelling forms that were internally inconsistent and unusual. Tradition has a powerful hold over writing systems.

Discussion

The written and spoken forms of a language are never perfectly identical; they diverge in many ways as a result of the differing constraints relevant to each. Spacing between words, for example, does not exist in speech but constitutes a useful innovation in writing that permits rapid reading (Rayner, Fischer, & Pollatsek, 1998; Sainio, Hyönä, Bingushi, & Bertram, 2007; Zang, Liang, Bai, Yan, & Liversedge, 2013). Similarly, the consistent spelling of affixes, such as the English past-tense marker 〈-ed〉, which diverges from its spoken realization (/d/, /t/, or /ɪd/ depending on the preceding sound), permits faster access to meaning (Ulicheva et al., 2020). Might it be the case that, left to evolve freely, the written form of a language will become better adapted to the needs of writers and readers to the detriment of its alignment with the spoken form of the language? Do writing systems adapt to the affordances and constraints of the written modality (Rastle, 2019)?

We addressed these questions by focusing on the particular case of heterographic homophones: morphemes that sound the same but that are spelled differently. Heterographic homophones permit the written language to be more informative than the spoken language; the spellings 〈knight〉 and 〈night〉, for example, convey a distinction in meaning that cannot be conveyed in speech without supplying additional information. We investigate whether heterography might arise for functional reasons by experimentally simulating the cultural evolution of orthography under two distinct mechanisms, differentiation and conservation, as described by Berg and Aronoff (2021).

Experiment 1: Differentiation

In our first experiment, we focused on the differentiation mechanism: Might variant spellings be used to differentiate meanings that are otherwise identical in speech? If so, we would expect levels of spelling differentiation to be greater when there is greater pressure for disambiguation, which we induced through the addition of a communication game. Importantly, the initial randomly generated orthographic systems that we used to seed the transmission chains contained high variation, that is, multiple ways of spelling the same sound. We included this variation because, for the differentiation mechanism to be viable, the writing system has to be receptive to spelling variation. A writing system that does not permit a one-to-many phoneme-to-grapheme mapping would not be capable of differentiating homophonous words. Only when spelling variation is permitted and available can variants be conditioned on meaning.
Although the emergent orthographies tended to be slightly more informative under communicative pressure, systematic differentiation was rare, unstable, and fleeting, be it through implicit generalization of the supplied variants or explicit innovation of new variants. This is not because participants were unable to learn variant spellings; in many cases variant spellings were retained but ineffectually conditioned on shape. Nor was it because participants were unable to learn a system of color marking that is not expressed in the spoken language; we know from Experiment 2 that participants can learn and reproduce such systems. Instead, it seems that, in Experiment 1, differentiation could not get off the ground. We see two reasons for this. First, participants seemed disinclined to directly encode meaning in how they chose to spell, preferring instead to "write by ear" (Frith, 1979). When asked to type in a word for a pink pentagon called /buvɪkəʊ/, participants were inclined to type a sequence of graphemes that reflected the sound they heard, without encoding meaning. This behavior might be connected to the concept of "functional fixedness" (German & Defeyter, 2000), which states that learners find it difficult to adduce a new function (e.g., writing meaning) when they are accustomed to another function (e.g., writing sound); this highlights a potentially important role for generational turnover in the development of writing systems, since new learners will be more receptive to new functions. Second, even when participants did appreciate the need to differentiate the written forms to be successful, they often appeared reluctant or unable to do so, perhaps because they viewed the spelling as immutable, or because the problem of aligning with a partner, even in a synchronous setting, was too difficult to overcome without the ability to coordinate over an extended period of time. It is notable, for example, that in the communication games, many attempts to differentiate using English color letters were not reciprocated.

This conclusion is in partial agreement with work by Treiman, Seidenberg, and Kessler (2015). In this study, participants were asked to provide spellings for novel English words (e.g., /haef/ meaning alehouse) that were homophonous with preexisting English words (in this case, half). Participants tended to provide the same spelling as the preexisting word (i.e., 〈half〉) rather than other possible alternatives that would have had the benefit of differentiating meaning (e.g., 〈haf〉, 〈haff〉, 〈haph〉). The authors argue that participants prefer the "lesser effort that is required to use a familiar whole-word orthographic form compared to that needed for assembling a novel spelling" (p. 544), which aligns with our findings. Treiman et al. (2015) also found, however, that, when given two alternatives to choose from (e.g., 〈half〉 vs. 〈haff〉), participants generally did prefer the novel spelling. This runs contrary to our first experiment, since our participants were similarly provided with multiple possible spellings of the sound /kəʊ/ (〈coe〉, 〈koh〉, 〈qo〉, etc.), but they nevertheless tended not to condition these on the color dimension, even under communicative pressure with financial incentive. Thus, although the preference for simplicity might be relatively weak at the individual level, it might nevertheless be amplified by the iterated learning process at the population level.
During the review process, a concern was raised that our participants might not have fully understood the communicative nature of the task, thus explaining why we did not observe the emergence of systematic differentiation in spelling. Our position, as outlined above, was that the lack of differentiation was due to the very strong homophony pressure. To test whether the lack of informativeness might be attributed to the homophony, and to check that the design of our experiment and implementation of the communicative pressure was sufficient to promote more informative systems, we ran an additional experiment. This experiment was identical to the communicative condition of Experiment 1, except that we removed the spoken forms (thereby replicating previous iterated learning experiments, which are generally only orthographic in nature; e.g., Beckner et al., 2017; Kirby et al., 2008, 2015) and altered the orthographic forms so that no homophony was implied in the spellings. This experiment is described in Appendix C of the supplementary material, but in short, we observed much greater levels of innovation and informativeness in this new experiment, with a clear statistical difference between it and Experiment 1. Broadly, the effect of removing the homophony pressure was that the systems remained more informative compared to Experiment 1 (in terms of both the proportion of systems classified as informative and communicative cost). There was also a commensurate increase in communicative success as a result of the emergence of these more informative systems (a point we return to shortly). Importantly, this suggests that the participants in Experiment 1 did indeed understand the communicative imperative, but nevertheless preferred their spellings to encode sound rather than meaning. Or, rather, the cultural evolutionary process ultimately tended to favor simplicity over informativeness in this particular domain.
Experiment 2: Conservation

In our second experiment, we tested a different mechanism by which orthographies may end up possessing additional informativeness beyond that of the spoken form of the language: conservation. Under this mechanism, expressive forms do not emerge but are simply fossils representing an earlier form of the spoken language that was expressive of a particular meaning distinction. Over time, the orthography may experience a ratcheting effect, in which heterographic forms accumulate (due to successive sound changes) but rarely recede (due to the informativeness they provide). Over longer periods of time, this mechanism might even shift an orthography from a phonographic principle to a logographic one. This parallels what we know about many of the heterographic homophones in English, which arose as byproducts of either the preservation of etymology or phonological changes that were never assimilated into written forms (Berg & Aronoff, 2021), and which are sometimes argued to give English a semi-logographic character (Chomsky & Halle, 1968; Coulmas, 1991; DeFrancis, 1989; Zachrisson, 1931). To be clear, this is not to say that such heterography is nonadaptive or an accident of history; rather, such heterographs may have been preserved precisely because of the informativeness they inadvertently provide in reading. Thus, just as the spoken language avoids sound mergers that increase ambiguity (e.g., Wedel, Kaplan, & Jackson, 2013), so the written language might likewise avoid spelling mergers that increase ambiguity. If correct, this would predict that expressive orthography should be preserved preferentially under communicative pressure.

Our findings did indeed show that informative heterography may be conserved more frequently and for longer periods under communicative pressure for disambiguation. There are two important caveats, however. First, we found that cultural transmission alone (that is, blind learning and reproduction) will result in at least some conservation, not only in form but also in the conditioning of form on meaning. Cases such as Chain D correspond to the "accident of history" explanation: Expressive orthography is preserved not because it serves any useful purpose (recall that in Transmission-only there is no functional need for the language to be informative), but because participants are simply reproducing what they learned, and what they learned has not (yet) placed a significant enough burden on learning for simplification to kick in. The additional level of conservation that occurs in Transmission + Communication corresponds to the repurposing explanation; that is, expressive orthography that originally served one purpose (representing speech) is maintained for a new purpose (representing meaning directly). The second caveat is that, in the long term, it appears that transparency might ultimately win the day, even under communicative pressure. Chain O, for example, went from expressive to degenerate in two generations under full homophony pressure, and based on the trajectories of the communicative cost curves (Fig. 11C), it seems likely that all chains will ultimately undergo the same transformation eventually. Informative heterography that arises through conservation is but a temporary oasis on the march toward transparency.

Differentiation or conservation?
It is important to note, at this point, that the two experiments cannot be compared directly, although we made every effort to keep the two as close as possible. Fundamentally, participants (or more generally, the evolutionary systems) are being asked to do something quite different across the two experiments: create in Experiment 1 and maintain in Experiment 2. The demands of these two tasks are different, and one task or the other may be better suited to our experimental paradigm. However, our experiments do serve to highlight the comparative difficulties involved in differentiation vs. conservation. For differentiation to operate, participants must overcome several challenging hurdles: They must grasp the mechanics of the game and its incentive structure (the apprehension problem), they must be able to put themselves in the shoes of their partners (the theory of mind problem), they must be capable of devising a linguistic solution (the innovation problem), they must be able to align with an interlocutor separated in time and space (the alignment problem), and they must be prepared to rebel against their input, overcoming social stigma in the process (the social problem). Furthermore, once a system has been created, it needs to be reliably transmitted across multiple generations (the learnability problem). The conservation of an expressive orthography is, by comparison, plain sailing: only the learnability problem applies.

In general, it might be said that the maintenance of an optimal system is easier than the construction of a new one (see also Smith, 2002). This is made particularly salient by Fig. 12, which compares the experiments in terms of communicative success (the proportion of trials in which the comprehending participant selected the correct target item in response to their partner). In Experiment 1, communicative success remains around chance level (a one in three probability of selecting the right color) because the orthographic systems tend to become uninformative, mirroring the spoken form of the language. In Experiment 2, by comparison, communicative success remains high in several of the chains (i.e., those chains that preserved the expressive system). This suggests that, while it may be difficult for participants to establish an informative writing system, it is comparatively easy to preserve an informative system that offers a clear advantage. It is also interesting to observe what happens to communicative success in Experiment 3, where the homophony pressure is removed. Here, communicative success does increase over time, as the participants, unencumbered by having to represent sound, find communicative strategies to differentiate meaning. Taken together, these results suggest that it is difficult for the differentiation mechanism to operate in the face of homophony, but comparatively easy for the conservation mechanism to operate under these same circumstances.
We emphasize, however, that we did not make a priori predictions about which of the two mechanisms might represent a better theory of the emergence of informative heterography, and our experiments were not designed to test the two theories against each other. Instead, we draw this conclusion on the basis that it was difficult for informative heterography to get off the ground in Experiment 1 due to the homophony present in the spoken form (as clarified by Experiment 3), while some degree of informative heterography did persist in Experiment 2 in the face of increasing homophony. A useful way that future work could explore this hypothesis would be to fit models of differentiation and conservation to historical data on spelling change to see which model offers a better fit to the data.

Limitations

These conclusions must be interpreted within the limitations of these experiments, which are, after all, highly simplified simulacra of real-world processes. Besides the general scaling-down of orthography, phonology, morphology, and semantics to an experimentally tractable test case, one notable issue we faced was how to induce pressure for informativeness in the written modality. We follow a large body of recent studies by using a real-time communication game (e.g., Carr et al., 2017; Kanwal, Smith, Culbertson, & Kirby, 2017; Kirby et al., 2015; Raviv, Meyer, & Lev-Ari, 2018; Saldana et al., 2019; Silvey, Kirby, & Smith, 2019; Winters, Kirby, & Smith, 2015), but such games are not very representative of the dynamics involved in asynchronous written communication (although see Winters & Morin, 2019, for some approaches). That being said, much written communication in the present day is indeed synchronous (e.g., text messaging), potentially allowing the dynamics typically associated with synchronous communication, such as feedback, to play a role in the development of written forms of the language (Lupyan & Dale, 2016).

Another limitation of this work is the extent to which lexical disambiguation really matters in real-world reading scenarios, since the syntactic and semantic context usually makes the meaning clear. If knight and night were spelled the same way, it is hard to imagine a context in which they might be confused. That being said, cultural transmission has been argued to have a strong amplifying effect on small cognitive biases (Thompson, Kirby, & Smith, 2016), so perhaps even a minor benefit in reading could have a large effect on orthography. An important issue in pursuit of this hypothesis will be to better understand the mechanism by which the biases of readers might place selective pressure on a writing system that is primarily shaped by the needs and preferences of writers, especially given that writing systems are often fixed cultural fossils that do not readily adapt to external pressures. It is also important to note that the functional explanation for heterography advanced here (i.e., ambiguity avoidance) is likely to be one of many. For example, Stenroos and Smith (2016) take the view that English spelling has generally remained opaque with respect to phonology because its primary function was to be a record keeper across time and space. From this perspective, written forms of language resist change because they need to be accessible across decades or centuries and across different jurisdictions or dialect areas.
Fig. 12. Communicative success by generation in the communicative conditions of all three experiments. The dotted line shows chance level if the comprehending participant selects from the array of nine items at random (i.e., 1/9), and the dashed line shows chance level if the comprehending participant knows the correct shape but selects color at random (i.e., 1/3). Experiment 3 is a replication of Experiment 1 without any auditory component (see Appendix C in the supplementary material).

The results of both experiments were relatively weak statistically, with the differences in terms of informativeness between the Transmission-only and Transmission + Communication conditions only just (or not quite) meeting the 95% criterion. This is likely to be related to a combination of two factors: a relatively small effect size combined with a relatively small sample size, with only ten chains (ten independent sampling units) in each condition. Our decision to run ten chains per condition was primarily based on norms in the field (e.g., Kempe, Gauvrit, & Forsyth, 2015; Kirby et al., 2008, 2015; Raviv et al., 2018; Roberts & Fedzechkina, 2018; Smith & Wonnacott, 2010; Tamariz & Kirby, 2015), since we had little idea of what effect size we could expect to find when designing the studies. Nevertheless, the small effect sizes we observed do suggest some risk of Type I error, and future work with this paradigm would benefit from increasing the number of chains in light of these relatively small effect sizes.

Lastly, one important thing to note is that the participant population we draw from (native English speakers) is already accustomed to heterography and opacity; informative orthography might be even less forthcoming in other populations used to more transparent writing systems. This brings us to a much deeper issue with iterated learning experiments in general: We cannot avoid the fact that our participants come into the lab with prior linguistic baggage, whether that baggage is for the encoding of sound or the encoding of meaning. Ideally, our experiments would be performed with participants who have no writing experience at all, but since that would be very difficult to achieve, perhaps the second-best option is a participant population that is relatively open-minded to both types of writing systems. In this sense, our use of English speakers is actually quite appropriate, since the English writing system is neither fully phonographic nor fully logographic.
Conclusion

It has long been known that heterography makes reading and learning to read difficult (Pexman, Lupker, & Jared, 2001; Seymour et al., 2003). As a result, heterography has often been derided as a source of unnecessary complexity, and the orthographic reforms implemented in many languages have tended to focus on its elimination. However, recent research suggests that heterography may in some circumstances be functional because it permits rapid access to meaning (Rastle, 2019; Ulicheva et al., 2020). The novel research presented in this article suggests that the cultural evolution of writing systems may prefer to trade some simplicity for greater informativeness when the communicative need for disambiguation is strong enough. These results imply that writing systems may, under some circumstances, evolve to fill a "reading niche." However, our research also shows that creating heterography, and even maintaining it, is challenging given the demands it poses on learning. These findings raise the prospect of a third major issue relevant to the cultural evolution of writing systems: education. Instead of yielding to the pressure of learnability, orthographies like English and Chinese have developed and maintained a high degree of informativeness because those societies have invested in education systems that spend many years teaching children to read (e.g., Wu, Li, & Anderson, 1999). Thus, informative writing systems that contribute to rapid, skilled reading may not only impose learning costs, but may also require ongoing economic investment.

Fig. 1. Two models of heterography. A In the differentiation model, two meanings are, at time T1, expressed by a single phonetic form P1 and a single orthographic form O1; however, by time T2, two orthographic forms have emerged to differentiate the meanings in writing. B In the conservation model, the two distinct phonetic forms that existed at time T1 have become homophonous by time T2, but the two corresponding orthographic forms have been conserved, resulting in the same state of heterography as in the differentiation model. Adapted from Berg and Aronoff (2021, pp. 325-326) with permission.

Fig. 7. Results of Experiment 1. A Typological distribution by generation and condition over the four typological categories: holistic (H; purple), expressive (E; blue), redundant (R; green), and degenerate (D; yellow). B Proportion of systems classified as informative (holistic or expressive) by generation. The dots show the observed proportions and the curves show logistic regression models fit to the data. C Communicative cost by generation along with regression models fit to the data. D Transmission error by generation along with regression models fit to the data. The panels on the right show the posterior estimates of the slope (β) parameters by condition as well as the posterior differences between conditions. The green, blue, and red bars indicate respectively the 95%, 90%, and 85% HDIs (credible intervals).
Fig. 10. Results from the Transmission + Communication condition in Experiment 2 (conservation). Each matrix shows the suffix spelling system in use at a particular generation (shape on the rows, color on the columns, as in Fig. 2). Chains are labeled K-T and generations are labeled 0-9 (0 is the randomly generated seed system). Each chain uses an independent color palette, with each color representing a particular suffix spelling; similar colors indicate similar spellings. Spellings in bold-italic are the generalizations on unseen items. Five chains (O, P, Q, R, T) remain fully expressive into the final epoch, in most cases conserving the original forms.

Fig. 11. Results of Experiment 2. A Typological distribution by generation and condition over the four typological categories: holistic (H; purple), expressive (E; blue), redundant (R; green), and degenerate (D; yellow). B Proportion of systems classified as informative (holistic or expressive) by generation. The dots show the observed proportions and the curves show logistic regression models fit to the data. C Communicative cost by generation along with regression models fit to the data. D Transmission error by generation along with regression models fit to the data. The panels on the right show the posterior estimates of the slope (β and γ) parameters by condition as well as the posterior differences between conditions. β represents the effect of epoch and γ represents the effect of generation number within epoch. The green, blue, and red bars indicate respectively the 95%, 90%, and 85% HDIs (credible intervals).
A Novel Receiving End Grid Planning Method with Mutually Exclusive Constraints in Alternating Current/Direct Current Lines

The large-scale application of high-voltage direct current (HVDC) transmission technology introduces mutually exclusive constraints (MECs) into power grid planning, which deepens the complexity of the planning problem. MECs decrease the planning efficiency and effectiveness of conventional methods. This paper proposes a novel hybrid alternating current (AC)/direct current (DC) receiving end grid planning method with MECs in AC/DC lines. The constraint satisfaction problem (CSP) is utilized to model the MECs in the candidate lines and then to establish the detailed planning model, in which mutually exclusive candidate lines are described by mutually exclusive variable and constraint sets. Additionally, the proposed planning model takes the stability of the hybrid AC/DC power system into consideration by introducing the multi-infeed short circuit ratio (MISCR). After establishing the hybrid AC/DC receiving end grid planning model with MECs, the backtracking search algorithm (BSA) is used to find the optimal plan. The effectiveness of the proposed hybrid AC/DC grid planning method with MECs is verified by case studies.

Motivation and Background

To solve the energy crisis, countries around the world are actively promoting renewable energy (RE) and increasing the proportion of RE in primary energy consumption [1-3]. However, most RE generation plants are remotely located and far away from the load centers due to the characteristics of their primary energy, which leads to the application of long-distance power transmission [4,5]. For example, the abundant RE in western China is transmitted to eastern China by long-distance transmission technologies; several long-distance transmission lines are going to be built in Europe to transmit the vast offshore wind power of the North Sea and Baltic Sea to continental Europe [6]. In long-distance transmission, line commutated converter high-voltage direct current (LCC-HVDC) transmission technology has a dominant position due to its salient economic and technical characteristics compared to high-voltage alternating current (HVAC) transmission [7,8]. Though inferior to LCC-HVDC in economy and transmission capacity, the newly developed voltage source converter high-voltage direct current (VSC-HVDC) transmission technology performs well in dealing with the intermittency of RE [9,10]. As the key technologies for achieving efficient integration and use of RE, the LCC-HVDC, VSC-HVDC, and hybrid AC/DC technologies have acquired great importance worldwide. Accordingly, hybrid AC/DC grid planning research has become a hot issue in power systems research.

In hybrid AC/DC grid planning, LCC-HVDC lines are dependent on their receiving AC grid because they require the receiving grid to have sufficient voltage support capacity; otherwise, DC commutation failure may occur [11]. Due to their high transmission power, a commutation failure may cause serious consequences [12]. Therefore, AC/DC grid planning requires the collaborative optimization of AC and DC lines. The conventional stepwise planning method, which determines whether and how to build a DC line in the early stage and then plans the AC grid, needs to be improved.

In AC/DC grid planning, some special mutually exclusive phenomena appear. For example, a DC converter station may have different candidate locations, which have different influences on the AC grid.
However, one DC line can only have one drop point, so the selection among multiple drop points constitutes a mutually exclusive problem. For another example, in the selection of the construction lines' type, DC lines can replace AC lines, and m DC lines can replace at least n AC lines (m ≤ n) between two nodes. That is, whether to build DC lines or AC lines is also a mutually exclusive problem. In addition, the operating characteristics of DC lines and AC lines are quite different, and these differing operating characteristics, which have different impacts on the planning result, also have a feature of mutual exclusivity. It can be seen from the above that AC/DC grid planning is a coordinated optimization of AC and DC lines, and the mutual exclusivity of AC and DC lines needs to be considered.

Related Work

At present, the research on transmission grid planning with high-proportion RE injection is relatively mature. Related studies are mainly focused on the following aspects: (1) multi-scenario-based stochastic transmission grid planning methods [13,14]; (2) probability-driven robust planning methods [15]; (3) coordinated network-generation planning methods considering multi-source complementarity [16,17]; (4) transmission grid planning methods coordinated with distribution network planning [18,19]. With the widespread application of HVDC transmission technology, power systems present the complex characteristics of hybrid AC/DC technologies, and planning research concerning hybrid AC/DC grids deserves attention. Considering the optimal power flow, reference [20] decomposes the optimal AC/DC grid planning into two nested problems: the optimal AC/DC line configuration problem and the optimal power flow problem. In [21], a bi-level model is established to determine an optimal configuration of the hybrid AC/DC distribution system, where the upper level is the optimal configuration model of the hybrid distribution system and the lower level is a robust dispatch model that minimizes the curtailment of RE when an N-1 contingency happens. Most hybrid AC/DC grid planning studies focus on distribution grids and microgrids. In these studies, the planning of the power grid is generally coupled with the planning of the generation due to the scale and integrity of the grid, which is quite different from the transmission grid. The planning study of the hybrid AC/DC transmission grid is still at a preliminary stage. In [22], a hybrid AC/DC transmission grid expansion planning algorithm is presented for a system operator to choose the appropriate AC/DC lines, and the outages of generation units and transmission lines are taken into account to ensure the safety of the system. In addition to new AC/DC lines, reference [23] puts the conversion of existing HVAC lines to HVDC lines into the planning scheme. Meanwhile, large-scale energy storage devices are planned to handle the intermittency of RE. Under the market environment, a market-based VSC-HVDC line expansion approach, which connects different regional electricity markets, is proposed in [24]. As to the receiving end grid, reference [25] analyzes the difficulties and risks of grid planning under multiple AC/DC line feed-in, and a VSC-HVDC line planning model is proposed to maximize the transmission section capacity. The above studies have carried out hybrid AC/DC transmission grid planning from the perspectives of ensuring safety, handling the intermittency of RE, and considering the market.
However, these planning studies are limited: they have not fully considered the mutually exclusive constraints, under which the line type should be put into the planning process and with which conventional methods are unable to obtain the optimal planning scheme. This paper considers the mutually exclusive phenomena of AC/DC lines in hybrid AC/DC grid planning and proposes a hybrid AC/DC receiving end grid planning method considering MECs in AC/DC lines. The CSP framework from the field of artificial intelligence is applied to modify the general planning model and to deal with the MECs. The model employs mutually exclusive variables (MEVs) to represent the decisions on the mutually exclusive candidate lines, which have their own constraint sets. Through the MEV sets, range sets, and constraint sets, the MECs are fully embodied in the planning model. The BSA is used to solve the model. Finally, case studies are set up to verify the effectiveness of the proposed method.

The rest of the paper is organized as follows. The mutually exclusive problem in AC/DC grid planning and its mathematical description by CSP are shown in Section 2. The CSP planning model of the AC/DC receiving end grid considering MECs is established in Section 3. Sections 4 and 5 present the solution method and the case study. Section 6 is the discussion part. Finally, Section 7 concludes the paper.

The MEC and CSP

The mutually exclusive problem in AC/DC grid planning considered in this paper mainly refers to the following: (1) between two nodes, only one of an HVAC line, an LCC-HVDC line, and a VSC-HVDC line can be built, so the selection of the newly added LCC-HVDC line, VSC-HVDC line, or HVAC line between two nodes constitutes a mutually exclusive problem; (2) the different line types correspond to different operating characteristics, which also constitute mutually exclusive characteristics. The mutually exclusive problem of AC/DC grid planning usually appears in the optimization model as a constraint, and so it is called an MEC. As to the MECs in the selection of the DC converter station's location, we do not consider them in this paper, because the decision on the DC converter station's location should take multiple environmental and social factors into account, which would greatly increase the complexity of the planning model.

The mutually exclusive problem can be expressed as a CSP. CSPs are a popular topic in artificial intelligence and operations research. In the framework of a CSP, the entities in a problem are represented as a homogeneous set of variables with finite conditions, which provides a common basis for analyzing and solving many seemingly different related issues. A CSP is defined as a triple <X, D, C>, where X is the set of variables, D is the set of each variable's range, and C is the set of constraints. Each variable in X corresponds to a range in D, and each constraint in C is composed of a subset of X and an incompatible assignment set. The solution of a CSP is an assignment of all the variables in the X set from their domains that satisfies all the constraints simultaneously [26].

The Description of the Mutually Exclusive Problem in AC/DC Grid Planning by CSP

The mutually exclusive problem in AC/DC grid planning can be described by the MEV set X, the range set D, and the lines' constraint set C. In AC/DC grid planning, one MEV is the construction type of candidate line i, α_i. α_i is an integer, and its range is {1, 2, 3}. The different values of α_i correspond to different line types, as shown in (1):
α_i = {1, HVAC line; 2, LCC-HVDC line; 3, VSC-HVDC line}, i = 1, 2, ..., K, (1)

where K is the number of mutually exclusive lines. In addition to the type variable, the construction state variable γ_i is also an MEV of the mutually exclusive lines, and its range is {0, 1}. γ_i = 0 and γ_i = 1 respectively represent line i not being put into construction and being put into construction. After defining the MEVs, the MEV set X_1 is shown in (2), and the MEV range set D_1 is shown in (3):

X_1 = {α_1, ..., α_K, γ_1, ..., γ_K}, (2)

D_1 = {d_α1, ..., d_αK, d_γ1, ..., d_γK}, (3)

where d_αi is the range of the construction type variable and d_γi is the range of the construction state variable. They satisfy (4) and (5):

d_αi = {1, 2, 3}, (4)

d_γi = {0, 1}. (5)

In AC/DC grid planning, the lines' constraint set C_1 is listed as follows:

C_1 = {S_1, ..., S_K}, S_i ∈ {s_1, s_2, s_3}, (6)

where S_i is line i's constraint set, and s_1, s_2, s_3 respectively refer to the planning and operating constraint sets of the HVAC lines, the LCC-HVDC lines, and the VSC-HVDC lines. S_i corresponds to a different constraint set when α_i takes 1, 2, and 3, respectively. In addition to the power flow constraints, constraints such as the DC converter station capacity constraint and the DC line operating constraint should be taken into account as well when applying the VSC-HVDC and LCC-HVDC transmission technologies. The detailed planning and operating constraint sets of the different line types are given below.

s_1, the Planning and Operating Constraint Set of HVAC Lines

(1) Branch power formula. In the DC power flow model, the branch power has a linear relationship with the node phase angles. The branch power formula is shown in (7):

P^L_l = B_l (θ_sl − θ_rl), (7)

where P^L_l is the power flow of line l, B_l is the susceptance of line l, and θ_sl, θ_rl represent the phase angles of the head node and the tail node of line l.

(2) Branch power flow limit constraint. Network security constraints should be considered in the planning model. The power flow of each line cannot exceed its capacity:

|P^L_l| ≤ P^(L,max)_l, (8)

where P^(L,max)_l is the capacity of line l.

(3) Node phase angle constraint. To ensure operational security, the node phase angle cannot exceed its range:

θ_min ≤ θ_n ≤ θ_max. (9)

s_2, the Planning and Operating Constraint Set of LCC-HVDC Lines

The operation mode of DC lines is quite different from that of AC lines [27]. The LCC-HVDC lines run in controlled mode, while the power flow of AC lines can be optimized freely within the capacity limit. The unique planning and operating constraints of LCC-HVDC lines are listed as follows.

(1) DC line's transmission power constraint. The DC line's transmission power shall not exceed the maximum power limit or fall below the minimum power limit; otherwise, the DC line will fail and quit operation:

P^(DC,min)_i ≤ P^DC_i ≤ P^(DC,max)_i, (10)

where P^DC_i is the transmission power of DC line i, and P^(DC,min)_i, P^(DC,max)_i are its minimum and maximum power limits.

(2) Converter station capacity constraint. The power of the DC converter stations should not exceed their capacity:

S_i ≤ S^max_i, (11)

where S_i, S^max_i are the power and capacity of the converter station, respectively.

(3) MISCR constraint. As an important index measuring the stability and security of power systems, the MISCR index is considered in the proposed planning model to ensure the feasibility of the planning results; it should not be less than three, as required in (12):

MISCR_i = S^Con_i / (P^DC_i + Σ_(j≠i) F_(j-i) P^DC_j) ≥ 3, (12)

where MISCR_i is the MISCR of DC line i, S^Con_i is the short circuit capacity of the converter station's bus, and F_(j-i) is the impact factor between lines i and j.

s_3, the Planning and Operating Constraint Set of VSC-HVDC Lines

In addition to the converter station capacity limit, the power flow direction change limit should be considered when selecting a VSC-HVDC line.

(1) Power flow direction change constraint. Frequent changes of power flow direction are not allowed in the operation of VSC-HVDC lines, so the time interval between direction changes should be limited:

t'_(m+1) − t'_m ≥ T, (13)

where t'_m represents a time at which the power flow direction of the DC line changes, and T represents the minimum time interval between direction changes. The constraints (11) and (13) constitute the constraint set of VSC-HVDC lines.
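As a rough illustration of how the triple <X, D, C> above might be represented in code, the following Python sketch (our own, with hypothetical names; not the authors' implementation) encodes each candidate line with its type domain, together with a MISCR feasibility check in the spirit of (12):

```python
from dataclasses import dataclass
from typing import Optional

TYPE_DOMAIN = (1, 2, 3)   # 1 = HVAC, 2 = LCC-HVDC, 3 = VSC-HVDC, as in (1)
STATE_DOMAIN = (0, 1)     # 0 = not built, 1 = built

@dataclass
class CandidateLine:
    name: str
    mutually_exclusive: bool = True
    fixed_type: Optional[int] = None  # used when the type is set in advance

    def type_domain(self):
        # MEC lines may be built as any of the three types; non-MEC lines
        # have a singleton type domain that is determined in advance.
        return TYPE_DOMAIN if self.mutually_exclusive else (self.fixed_type,)

def miscr(s_con_i, p_dc_i, p_dc_others, impact_factors):
    """MISCR of DC line i per our reading of (12): bus short-circuit
    capacity divided by the equivalent DC power, which includes the
    interaction with the other DC lines through the impact factors."""
    p_eq = p_dc_i + sum(f * p for f, p in zip(impact_factors, p_dc_others))
    return s_con_i / p_eq

def lcc_feasible(s_con_i, p_dc_i, p_dc_others, impact_factors):
    # The MISCR constraint requires MISCR_i >= 3 for LCC-HVDC lines.
    return miscr(s_con_i, p_dc_i, p_dc_others, impact_factors) >= 3.0

line = CandidateLine("node4-node16")
print(line.type_domain())  # (1, 2, 3)
# MISCR = 9000 / (2000 + 0.3 * 1000) ~ 3.9, so the constraint holds:
print(lcc_feasible(9000, 2000, [1000], [0.3]))  # True
```

An assignment of (α_i, γ_i) for every candidate line then counts as a CSP solution only if each built line satisfies its type-specific constraint set (s_1, s_2, or s_3).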
CSP Planning Model of the AC/DC Receiving End Grid Considering MECs

In this section, the CSP description of MECs is introduced into the general planning model, and the CSP planning model of the AC/DC receiving end grid considering MECs, including the objective function, the CSP description of candidate lines, and the general grid planning technical constraints, is specified as follows.

Objective Function

The objective of the planning problem is formulated in (14) to minimize the investment and operation costs in both the AC and DC systems. The operation cost mainly includes the maintenance cost, the network loss cost, and the network congestion cost:

min F = F_IV + F_M + F_LO + F_RG, (14)

where F_IV is the investment cost of the construction lines, and F_M, F_LO, F_RG respectively describe the maintenance cost, the network loss cost, and the network congestion cost.

Investment Cost

In power grid planning, the investment cost is mainly the investment cost of transmission lines. It is the construction cost multiplied by the capital recovery rate:

F_IV = Σ_(i=1..N) γ_i C_i L_i · r(1 + r)^LT / [(1 + r)^LT − 1], (15)

where r is the annual interest rate, LT is the lifetime of the construction line, N is the number of candidate lines, C_i is the unit investment cost of new lines, and L_i is the line length. What needs to be emphasized here is that the transmission capacities of the candidate lines are determined in advance according to the construction type; they are not optimized in this model.

Maintenance Cost

It is necessary to regularly maintain the grid or take emergency measures in case of failure. The maintenance cost is proportional to the new lines' investment cost, not including the existing lines. The maintenance cost is described as (16):

F_M = Σ_(i=1..N) ε_i γ_i C_i L_i, (16)

where ε_i represents the maintenance cost ratio.

Network Loss Cost

The network loss is a significant factor in grid planning. To decrease the network loss, the network loss cost is added to the objective function. Since it is difficult to calculate the total network loss, the maximum network loss is introduced for simplicity. The network loss cost is determined as follows:

F_LO = C_LOSS · E_LOSS · T_max, (17)

E_LOSS = Σ_(i∈N_L-All) (P^L_(i,t0) / U_(ni,t0))^2 · r_i, (18)

where C_LOSS represents the network loss cost factor, E_LOSS indicates the total network loss power, T_max is the maximum network loss duration time, t_0 is the maximum network loss moment, N_L-All is the set of operating lines, P^L_(i,t0) is the power of line i at the maximum network loss moment, U_(ni,t0) is the voltage of the node connected to line i, and r_i is the resistance of the line.

Network Congestion Cost

When network congestion happens, the RE at some nodes may be unable to be put into the network and may be curtailed. In order to keep the balance, power transfers occur between the different energy suppliers, inevitably leading to operational cost changes. Equations (19)-(23) describe the network congestion cost, where f_RG, f_RW, f_RH respectively describe the congestion costs of the thermal units, the wind farms, and the hydropower stations; N_G, N_W, N_H are the sets of thermal units, wind farms, and hydropower stations, respectively; T_RG is the maximum congestion duration time; C_(G-G) is the internal power transfer cost among the thermal units; C_G is the cost of transferring wind power and hydropower to the thermal units; and t_1 is the maximum network congestion moment. The remaining quantities in (19)-(23) are the scheduled and actual thermal power in case of congestion, and
ΔP^W_(i,t1), ΔP^H_(i,t1), the power transfers of wind power and hydropower.

The CSP Description of Candidate Lines

The constraints of the planning problem contain two parts: (1) the CSP description of all candidate lines, with the line variable set X, range set D, and line constraint set C included; (2) the general technical constraints of AC/DC grid planning. Based on Section 2.2, this section discusses the CSP description of all candidate lines, and the next section presents the general constraints of AC/DC grid planning.

The variables, the variable ranges, and the constraint sets of all candidate lines, covering both the mutually exclusive candidate lines and the normal candidate lines, are described in (24)-(26):

X = {α_1, ..., α_N, γ_1, ..., γ_N}, (24)

D = {d_α1, ..., d_αN, d_γ1, ..., d_γN}, with d_αi = {1, 2, 3} for i = 1, ..., K and d_αi = {j} for i = K + 1, ..., N, (25)

C = {S_1, ..., S_N}, with S_i ∈ {s_1, s_2, s_3} for i = 1, ..., K and S_i = s_j for i = K + 1, ..., N. (26)

In the variable set X and the variable range set D, the first K subsets refer to the candidate lines with MECs. The γ_i and the α_i are all decision variables, and each can take any value in its range set, which means that the mutual exclusivity exists throughout the planning process. The other N − K candidate lines do not have MECs; for each such line i, only the construction state variable is a decision variable, while the construction type variable is certain in the planning model, being determined in advance in reality. As shown in the range set D, the range of the construction type variable of the other N − K lines contains merely one value. What needs to be emphasized is that in the constraint set C, the index j is not a variable; it is fixed for each i from K + 1 to N.

General AC/DC Grid Planning Technical Constraints

These constraints must be followed in grid planning and have nothing to do with the construction type of the line. The general constraints considered in this paper are listed as follows.

(1) Line construction time constraint. According to the construction plan of generation plants and substations, certain lines have restrictions on their commissioning stages in the planning scheme. The constraint is shown in (27), which restricts the lines' construction time:

t_(i,min) ≤ t_i ≤ t_(i,max), (27)

where t_(i,min), t_(i,max) respectively represent the earliest and latest construction times of line i.

(2) Line construction sequence constraint. In reality, line i must be constructed before line j, and restrictions are given on the construction sequence of some AC/DC lines:

t_i ≤ t_j. (28)

(3) Power balance constraint. In operation, the power systems must keep the power balance: the power injected into a node is equal to the power flowing out:

Σ_(g∈α_n) P^g_n + Σ_(l∈φ_n) P^L_l − Σ_(l∈ρ_n) P^L_l = P^D_n, (29)

where α_n is the generator set connected to node n; φ_n, ρ_n represent the sets of lines flowing into and out of node n, containing not only the initial lines but also the planned lines; g is the generator type index, which includes thermal power, wind power, and hydropower; and P^D_n is the load demand of node n.

(4) Generator output constraint. The output of each unit should be within its minimum and maximum limits:

P^(g,min)_i ≤ P^g_i ≤ P^(g,max)_i, (30)

where P^(g,min)_i, P^(g,max)_i stand for the minimum output and the capacity of the generator, respectively.

(5) Load supply constraint. To ensure the power supply quality, it is not allowed to curtail the load when the system operates normally or when an N-1 contingency happens.
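As a numerical illustration of the annualized investment cost in (15), consider the following sketch (a minimal rendering under our reconstruction of the formula; all names are hypothetical):

```python
def capital_recovery_factor(r: float, lt: int) -> float:
    """Annualization factor r(1+r)^LT / ((1+r)^LT - 1) used in (15)."""
    return r * (1 + r) ** lt / ((1 + r) ** lt - 1)

def investment_cost(lines) -> float:
    """F_IV: sum of gamma_i * C_i * L_i, annualized by the capital
    recovery factor. `lines` is an iterable of tuples
    (gamma, unit_cost, length, r, lifetime)."""
    return sum(g * c * length * capital_recovery_factor(r, lt)
               for g, c, length, r, lt in lines)

# One 100 km line built (gamma = 1) at 1.5 M$/km, 5% interest, 25-year life:
print(investment_cost([(1, 1.5e6, 100.0, 0.05, 25)]))  # ~10.6 M$/year
```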
Solution Technique

For the CSP grid planning model above, the BSA with independent pattern coding can solve it efficiently [28-30]. The characteristic of the BSA lies in using depth-first search to explore the subtree, and its biggest advantage is that the constraint set can be used to prune a subtree that has no solution. Additionally, the mutually exclusive resource allocation and independence properties of the MECs have been completely proved. Therefore, the BSA is a suitable method for solving the CSP planning model with MECs. The detailed process of the BSA in solving the CSP grid planning above is shown in Algorithm 1.

Algorithm 1. The BSA process for solving the CSP grid planning model.

...
6: If the result violates the safety index or the network is not disconnected, go to Step 3; else go to Step 7.
7: Bring back the cut line.
8: If the feasibility check of the combination tree of candidate lines is finished, go to Step 3; else go to Step 9.
9: Output the planning scheme.
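The following Python sketch illustrates the backtracking idea behind Algorithm 1: assignments of the MEVs are enumerated depth-first, and any partial assignment whose constraint set is already violated is pruned. This is our own simplified rendering with hypothetical names and a purely illustrative toy constraint, not the authors' implementation.

```python
def backtracking_search(variables, domains, consistent, assignment=None):
    """Depth-first search over variable assignments; `consistent` is a
    predicate on partial assignments used to prune infeasible subtrees."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)          # complete, feasible assignment
    var = variables[len(assignment)]     # static variable ordering
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):       # prune inconsistent subtrees
            result = backtracking_search(variables, domains,
                                         consistent, assignment)
            if result is not None:
                return result
        del assignment[var]
    return None                          # backtrack one level up

# Toy usage: two candidate lines, each with a state and a type variable.
variables = ["gamma_1", "alpha_1", "gamma_2", "alpha_2"]
domains = {"gamma_1": [0, 1], "alpha_1": [1, 2, 3],
           "gamma_2": [0, 1], "alpha_2": [1, 2, 3]}

def consistent(a):
    # Illustrative constraint only: at most one of the two candidate
    # lines may be built (gamma = 1) as an LCC-HVDC line (alpha = 2).
    lcc = sum(1 for i in ("1", "2")
              if a.get(f"gamma_{i}") == 1 and a.get(f"alpha_{i}") == 2)
    return lcc <= 1

print(backtracking_search(variables, domains, consistent))
```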
The probabilities of scenarios 1-4 are 0.16, 0.44, 0.15, and 0.25, respectively, and the maximum wind power (per unit) in the four scenarios is 0.58, 0.5, 0.8, and 0.7, respectively. It should be emphasized that this treatment of uncertainty is a simplified one; since handling uncertainty is not the focus of this paper, the simplification is acceptable.

Planning Results with Different Mutually Exclusive Line Sets

When the mutually exclusive line sets are different, the complexity of grid planning varies to some extent. This section compares the results of the CSP grid planning model proposed in this paper under different mutually exclusive line sets. Two line sets containing different mutually exclusive lines are used to test the proposed CSP planning method, as described in Case 1 and Case 2.

Case 1: In the candidate line set, one line from node 11 to 12, the line from node 7 to 9, the line from node 10 to 18, and the lines from node 4 to 16 have MECs. The proposed CSP planning method is used to select the expanding lines.

Case 2: In the candidate line set, one line from node 11 to 12, one line from node 9 to 10, the line from node 7 to 9, the line from node 10 to 18, the line from node 1 to 11, and the lines from node 4 to 16 have MECs. The proposed planning method is used to select the expanding lines.

The planning schemes of Case 1 and Case 2, obtained by the CSP planning method with the different mutually exclusive line sets, are shown in Figures 3 and 4.
The detailed parameter comparison of the planning results is shown in Tables 1 and 2. The MEC lines differ between the two cases as follows. Case 1: one line 11-12, lines 7-9, lines 10-18, and lines 4-16. Case 2: one line 11-12, one line 9-10, lines 7-9, lines 10-18, lines 1-11, and lines 4-16.

From the figures, although the MEC line sets are different, the right-hand networks of the two cases are still the same. This indicates that when the MEC line set changes, the planning result is relatively stable. However, the change does make a difference. In Case 1, 20 new lines will be built, including a VSC-HVDC line from node 11 to 12, with a total length of about 2005 km. The planning scheme of Case 2 will construct 19 new lines, including a VSC-HVDC line from node 11 to 12 and an LCC-HVDC line from node 9 to 10, with a total length of 1830 km. The red box in Figure 4 indicates the difference from Figure 3. Due to the existence of MECs in the lines from node 9 to 10, one LCC-HVDC line with a larger transmission capacity is chosen in Case 2 after optimization, while in Case 1 two HVAC lines are planned because there are no MECs in the lines from node 9 to 10. The application of the LCC-HVDC line from node 9 to 10 in Case 2 enhances the transmission capacity and increases the wind power absorption capacity of node 10. Therefore, compared with Case 1, Case 2 has a lower network loss rate and a lower power abandonment rate, as shown in Tables 1 and 2. Additionally, this LCC-HVDC line avoids the frequent direction change of the power flow in the 9-10-18-17-16-9 ring network when the operation mode changes; the power flow direction is 10-9 when the operation mode changes. With the analysis above and the data in Table 1, it can be found that the parameter indexes of Case 1 and Case 2 are both within a reasonable range. However, compared with Case 1, the planning result in Case 2 is better, because there are more MEC lines in Case 2 and thus more lines can be optimized.

Comparison with the Conventional Stepwise Expansion Method

At present, the common method in AC/DC grid planning is the stepwise expansion method. Its main idea is to plan the other AC lines on the basis of predetermined DC lines, so the DC lines are not included in the optimization. This section compares the AC/DC grid planning method presented in this paper with the stepwise expansion method by numerical analysis. The stepwise expansion method is applied to plan the 18-node test system, as described in Case 3.

Case 3: In the stepwise expansion method, it is determined in advance that one VSC-HVDC line will be built from node 11 to 12 and one LCC-HVDC line will be built from node 9 to 10. The other AC lines are planned freely.

The planning scheme of Case 3 is shown in Figure 5. A total of 19 new lines will be built, with a total length of about 1810 km. Tables 3 and 4 compare the detailed planning results of Case 2 and Case 3. As can be observed in the comparison, Case 2 optimizes the DC lines, whereas the stepwise expansion method used in Case 3 does not, as it is unable to consider the supporting role of the AC grid for the LCC-HVDC line.
The MISCR of node 12, which is the drop point of a planned LCC-HVDC line, is merely 1.87. This does not meet the voltage stability requirements and is not conducive to the stability of the power system. Additionally, the AC grid between node 5 and node 12 is a relatively independent isolated grid. This affects operation safety, because in an isolated grid a power outage of the whole local power grid will occur when line 5-12 fails. As to wind power consumption, one transmission channel (node 12-13) for the wind power at node 11 is removed in Case 3, which leads to a large increase in the wind power curtailment rate compared with Case 2. Although Case 2 has no advantage in cost, it has obvious advantages over Case 3 in terms of power grid security and wind power consumption. The disadvantage in cost is mainly due to the fact that Case 2 is an optimal planning of the global hybrid AC/DC grid, while Case 3 is merely an optimal planning of the AC grid and sacrifices the security of the hybrid AC/DC grid. This suggests that a better planning result can be obtained by adding the selection of DC lines into the optimization process, which is the advantage of the proposed method compared with the conventional method.

Influence of the Wind Power Capacity Change on the Planning Scheme

To study the impact of wind power capacity on the planning scheme, based on Case 2, we consider the capacity of the wind farm connected to node 11 to be 100 MW, 150 MW, 200 MW, 250 MW, and 300 MW, respectively. The comparison of the planning results is shown in Table 5. It can be seen from Table 5 that as the wind farm capacity increases, the obvious changes in the planning scheme concern the lines near the wind farm integration points. This is mainly due to the increased transmission demand for wind power. Accordingly, the line investment costs of the resulting grid plans also increase. Owing to the large transmission capability of HVDC lines, their application is more technically advantageous when the capacity of the wind farm connected to node 11 exceeds 200 MW. Although the unit price of HVDC lines may be higher than that of HVAC lines, they are better in terms of overall economic benefits. In this example, when the wind farm capacity is greater than or equal to 200 MW, the planning scheme selects HVDC lines between nodes 11 and 12; when the capacity is less than 200 MW, the planning scheme adds HVAC lines.

Discussion

Because of the importance of the receiving-end grid in supplying loads and the simplicity of the traditional sending-end grid, the AC/DC grid planning in this article focuses on the receiving-end grid. At present, the interconnection within wind farm clusters at the sending end and the bundling of wind power, solar power, and thermal power have made the sending-end grid increasingly complicated, and the planning of the sending-end AC/DC grid has gradually received attention. The method proposed in this paper also has a certain applicability to sending-end grid planning, and the short-circuit ratio can still measure the stability of the power grid.
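For reference, the paper itself does not reproduce the MISCR formula; the commonly used CIGRE-style definition of the multi-infeed short-circuit ratio, given here as an assumed form rather than taken from the source, is:

```latex
% Commonly used definition of the multi-infeed short-circuit ratio (MISCR);
% supplied for reference, not quoted from the paper.
\[
  \mathrm{MISCR}_i \;=\;
  \frac{S_{\mathrm{ac},i}}
       {P_{\mathrm{dN},i} \;+\; \sum_{j \neq i} \mathrm{MIIF}_{ji}\, P_{\mathrm{dN},j}},
  \qquad
  \mathrm{MIIF}_{ji} \;=\; \frac{\Delta U_j}{\Delta U_i},
\]
% where S_{ac,i} is the short-circuit capacity at converter bus i,
% P_{dN,j} is the rated DC power of infeed j, and the multi-infeed
% interaction factor MIIF_{ji} is the voltage change at bus j per unit
% voltage change at bus i.
```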
However, because of the richness and complexity of RE at the sending end, the simulation of the volatility of multiple RE sources should be a focus, and the modeling of the complementarity of different energy sources needs to be considered. The AC/DC grid planning described in this paper is carried out under the assumption that the government is responsible for the unified operation of the grid. When multiple investment and construction operators are involved, grid planning should take competition among the investment entities into account, and the method described in this paper is no longer directly applicable. When it comes to planning, however, the government-led mode is actually more beneficial to social welfare and to the security of the grid than the multi-operator mode.

Conclusions

The integration of large-scale RE introduces a widespread application of HVDC lines into the AC grid. The planning of the resulting complex hybrid AC/DC grid faces big challenges, and the MECs in grid planning exacerbate the difficulty. In this paper, an optimal planning method for the AC/DC receiving-end grid with MECs in the candidate lines is proposed. The CSP theory is used to describe a hybrid AC/DC grid planning model and the MECs in the candidate lines, where the MECs are described by an MEV set, a range set, and a constraint set. Then, the BSA is used to solve the planning model. Through case studies, the feasibility of the proposed approach has been verified, with the major conclusions as follows.

(1) The proposed model outperforms the conventional planning method in wind power consumption and network safety, because it takes the selection of the line type into the optimization and thereby solves the problem of MECs. Additionally, the proposed model obtains stable and reasonable expansion plans when the MEC line sets change. Thus, the planning method modeled by CSP can effectively handle the planning of a hybrid AC/DC grid with MECs.

(2) The proposed model introduces a key index, the MISCR, into hybrid AC/DC grid planning, by which the AC grid can be optimized to meet the voltage support demand of the DC lines. The obtained scheme performs better in terms of power system stability than that of the conventional method. The introduction of the MISCR reflects the adaptation required in moving from traditional AC grid planning to hybrid AC/DC grid planning.

This paper only considers the MECs of AC and DC lines. Further research can take the MECs of different DC converter station locations into account. Moreover, given the large integration capacity of RE and its intermittency, it is of great significance to consider multiple uncertainties and the coupling relationships among them in planning models.
Regulation of Platelet Derived Growth Factor Signaling by Leukocyte Common Antigen-related (LAR) Protein Tyrosine Phosphatase: A Quantitative Phosphoproteomics Study

Intracellular signaling pathways are reliant on protein phosphorylation events that are controlled by a balance of kinase and phosphatase activity. Although kinases have been extensively studied, the role of phosphatases in controlling specific cell signaling pathways has been less so. Leukocyte common antigen-related protein (LAR) is a member of the LAR subfamily of receptor-like protein tyrosine phosphatases (RPTPs). LAR is known to regulate the activity of a number of receptor tyrosine kinases, including platelet-derived growth factor receptor (PDGFR). To gain insight into the signaling pathways regulated by LAR, including those that are PDGF-dependent, we have carried out the first systematic analysis of LAR-regulated signal transduction using SILAC-based quantitative proteomic and phosphoproteomic techniques. We have analyzed differential phosphorylation between wild-type mouse embryo fibroblasts (MEFs) and MEFs in which the LAR cytoplasmic phosphatase domains had been deleted (LARΔP), and found a significant change in abundance of phosphorylation on 270 phosphosites from 205 proteins because of the absence of the phosphatase domains of LAR. Further investigation of specific LAR-dependent phosphorylation sites and enriched biological processes reveals that LAR phosphatase activity impacts on a variety of cellular processes, most notably regulation of the actin cytoskeleton. Analysis of putative upstream kinases that may play an intermediary role between LAR and the identified LAR-dependent phosphorylation events has revealed a role for LAR in regulating mTOR and JNK signaling.

Phosphorylation is a key post-translational modification involved in the regulation of cell signaling.
Control of phosphorylation is vital in maintaining normal biological processes, and dysregulation is implicated in many diseases. Kinases and phosphatases have opposing roles in modulating levels of phosphorylation, acting in a coordinated manner within cells to maintain cellular homeostasis via their regulation of cell signaling pathways. Historically, phosphatases were viewed as promiscuous enzymes whose role was simply to dephosphorylate their substrates in order to terminate signal transduction pathways. It is now evident that phosphatases display selectivity and are not simply 'off switches' but can contribute to both deactivation and activation of signaling pathways (1). Although the role of kinases has been extensively studied, much less is known about phosphatases and their specific contributions to cell signaling.

Leukocyte common antigen-related protein (LAR) belongs to the LAR subfamily of receptor-like protein tyrosine phosphatases (RPTPs). It is composed of an extracellular domain containing three immunoglobulin (Ig) domains and a fibronectin type III (FNIII) domain, and cytoplasmic domains, D1 and D2, that are essential for phosphatase activity (2-4). LAR is widely expressed in a variety of cell types, such as neuronal cells, epithelial cells, and fibroblasts (5). Several disorders are associated with LAR, including defective development of mammary glands, abnormal neuronal development and function, diabetes, and cancer (6, 7). Signal transduction regulated by LAR has thus far predominantly been studied in neuronal cells, where it participates in axonal outgrowth, nerve regeneration, and orchestration of synapse development (6, 8). LAR regulates receptor tyrosine kinase growth factor signaling by either dephosphorylating negative regulatory tyrosine residues to enhance receptor activation (9), or by dephosphorylating activating tyrosine residues to deactivate the receptor (10, 11).

Platelet-derived growth factor (PDGF) signaling is involved in many cellular processes such as cell growth, survival, and motility (14). Overexpression of the PDGF receptor is associated with diseases such as atherosclerosis and cancer, signifying it as a target for therapeutic interventions (15-17). PDGF isoforms act as dimers composed of interacting A, B, C, and D polypeptide chains. These can be homodimeric or heterodimeric isoforms that interact with PDGF α and β receptors, leading to receptor dimerization and activation of kinase activity via autophosphorylation (18). This results in the recruitment and activation of signaling pathways that culminate in transcriptional responses and the promotion of cell proliferation and survival (18, 19).

Phosphatases are generally considered to be negative regulators of signaling pathways. A number of protein tyrosine phosphatases (PTPs) have been reported to dephosphorylate tyrosine residues (Tyr) on PDGFRβ, thereby deactivating the receptor and inhibiting downstream signaling. For example, dephosphorylation of Tyr857 on PDGFRβ by low molecular weight protein tyrosine phosphatase (LMW-PTP) inhibits the receptor kinase activity and subsequent downstream signaling via PI-3 kinase (20). T-cell protein tyrosine phosphatase (TC-PTP) has been shown to inhibit binding of phospholipase C γ1 (PLCγ1) through dephosphorylation of Tyr1021, which results in altered cell migration in response to PDGF (21). SHP-2 can inhibit binding of Ras-GAP to PDGFRβ by dephosphorylation of PDGFRβ Tyr771, which results in enhanced activity of the Ras signaling pathway (22).
By contrast, LAR promotes PDGF signaling by inhibiting the activity of the cytoplasmic tyrosine kinase c-Abl (23). In the absence of LAR phosphatase activity, c-Abl inhibits PDGFRβ signaling by phosphorylating and inhibiting the receptor (23).

In this study, we set out to gain insight into the landscape of cell signaling events regulated by LAR. In the first systematic analysis of LAR-regulated signal transduction, we used stable isotope labeling by amino acids in cell culture (SILAC) (24, 25) to analyze differential phosphorylation in wild-type (WT) mouse embryo fibroblasts (MEFs) and MEFs in which the LAR cytoplasmic phosphatase domains had been deleted (LARΔP) (26). Although LAR is known to promote PDGFR activation in fibroblasts (23), the signaling consequences of this regulation have not been fully studied; thus we carried out these studies in the absence and presence of PDGF. We identified 270 LAR-dependent phosphorylation events on 205 proteins, including known LAR interactors, kinases, guanine nucleotide exchange factors (GEFs), and GTPase activating proteins (GAPs). Subsequent functional classification revealed an enrichment of LAR-mediated phosphorylation events on proteins involved in cytoskeletal organization. Further kinase prediction analysis revealed a role for LAR in regulating both the mTOR and JNK signaling pathways, both of which play a role in regulation of the actin cytoskeleton. These results significantly expand our understanding of signaling events downstream of LAR. This approach has enabled us to identify LAR-dependent changes in phosphorylation within the entire signaling network, highlighting the role of LAR as a key regulator of growth factor-dependent cell signaling pathways.

Transfection-The FLAG-LAR expression vector was kindly provided by Ruey-Hwa Chen (National Taiwan University, Taipei, Taiwan). LARΔP cells were transfected using Lipofectamine 2000 (Thermo Fisher Scientific) according to the manufacturer's instructions.

Cell Stimulation, Cell Lysis, and Immunoblotting-Cells were serum starved for 16 h prior to stimulation with 20 ng/ml PDGF-BB for the indicated times. Treated cells were placed on ice and washed twice with ice-cold phosphate buffered saline (PBS). Cells were then lysed with lysis buffer (20 mM Tris-HCl, pH 7.5, 0.5% Triton X-100, 0.5% deoxycholate, 150 mM NaCl, 10 mM EDTA, 0.5 mM Na3VO4, and 1% Trasylol) for 15 min on ice. Lysed cells were centrifuged at 15,000 × g for 15 min at 4°C and the supernatant (WCL) was collected. Protein concentrations were determined using the BCA protein assay (Thermo Fisher Scientific) as per the manufacturer's instructions. An equal volume of 2× sample buffer (1.0 M Tris-HCl pH 8.8, 0.5% Bromophenol blue, 43.5% glycerol, 10% SDS, 1.3% β-mercaptoethanol) was added to the WCL, and the sample was boiled at 95°C for 6 min. Samples were run on SDS-PAGE gels and transferred to nitrocellulose membranes. The membranes were blocked with 5% bovine serum albumin (BSA) (Sigma-Aldrich) at room temperature for one hour and incubated in 5% BSA in TBS-T (20 mM Tris-HCl, pH 7.5, 0.1% Tween 20, 150 mM NaCl) containing primary antibody overnight at 4°C. Following 3 × 10 min washes in TBS-T, the membrane was incubated in TBS-T containing IRDye-conjugated secondary antibody (LI-COR Biosciences) for 1 h at room temperature. The membranes were washed again as above and proteins were visualized using fluorescence detection on the Odyssey Infrared Imaging System (LI-COR Biosciences).
Following quantitation of immunoblots (n = 3), statistical analysis was performed using a two-way ANOVA with Sidak's multiple comparison test.

Trypsin Digestion, Sample Fractionation, and Phosphopeptide Enrichment of Samples-For the proteome analysis, 5 μg each of light, medium, and heavy lysates were mixed, run on a 10% SDS-PAGE gel, and Coomassie stained. Each lane was cut into 10 bands. In-gel digestion using Trypsin Gold (Promega, Southampton, UK) was carried out as previously described (28). For the phosphoproteome analysis, 10 mg each of light, medium, and heavy lysates were pooled prior to trypsin digestion. Proteins were reduced with 8 mM DTT, alkylated with 20 mM iodoacetamide in 50 mM ammonium bicarbonate, and digested with Trypsin Gold (1:100 enzyme/protein ratio) at 37°C overnight. Digested samples were acidified by addition of 0.5% TFA. Peptides were desalted using Sep-Pak C18 cartridges (Waters, Milford, MA) according to the manufacturer's instructions. Desalted and dried peptides were resuspended in 100 μl mobile phase A (10 mM KH2PO4, 20% acetonitrile, pH 3) and loaded onto a 100 × 4.6 mm polysulfoethyl A column (5 μm particle size, 200 nm pore size, PolyLC, Columbia, MD). Separation used a gradient elution profile that started with 100% mobile phase A, increased from 0 to 50% mobile phase B (10 mM KH2PO4, 20% acetonitrile, 500 mM KCl, pH 3) over 30 min, increased to 100% B over 5 min, and then returned to 100% A. Each of the 20 resulting fractions was desalted using a C8 macrotrap cartridge (Michrom BioResources, Auburn, CA) according to the manufacturer's instructions. Phosphopeptides were enriched using TiO2 tips (Titansphere Phos-TiO kit, GL Sciences, Torrance, CA). Phosphopeptides were resuspended in buffer B and loaded onto the tips, washed once in buffer B and twice in buffer A before being eluted sequentially in 5% ammonia solution followed by 5% pyrrolidine. Phosphopeptide-enriched samples were desalted on reverse-phase C18 ZipTips (Millipore, Nottingham, UK). Peptides were eluted in 50% (v/v) ACN, 0.1% (v/v) formic acid (FA), dried to completion, and resuspended in 0.1% FA. All resulting peptide mixtures were analyzed in duplicate by liquid chromatography tandem mass spectrometry (LC-MS/MS).

Mass Spectrometry-On-line liquid chromatography was performed using a Dionex Ultimate 3000 NSLC system (Thermo Fisher Scientific). Peptides were loaded onto an Acclaim PepMap 100 C18 resolving column (15 cm length; 75 μm internal diameter; LC Packings, Sunnyvale, CA) and separated over a 30 min gradient from 3.2% to 44% acetonitrile (J.T. Baker (Avantor Performance Materials), Deventer, The Netherlands). Peptides were eluted directly (350 nL/min) via a Triversa nanospray source (Advion Biosciences, Ithaca, NY) into an LTQ Orbitrap Elite mass spectrometer (Thermo Fisher Scientific). The mass spectrometer alternated between a full FT-MS scan (m/z 380-1600) and subsequent CID MS/MS scans of the seven most abundant ions. Survey scans were acquired in the Orbitrap cell with a resolution of 60,000 at m/z 200. Precursor ions were isolated and subjected to CID in the linear ion trap. The isolation width was 2 Th and only multiply charged precursor ions were selected for MS/MS. The MS1 maximum ion injection time was 1000 ms with an AGC target of 1e6 charges. The MS2 ion injection time was 50 ms with an AGC target of 2e5 charges. Dynamic exclusion was utilized; fragmented ions were excluded for 60 s with an exclusion list size of 500.
CID was performed with helium gas at a normalized collision energy of 35%. Precursor ions were activated for 10 ms. Data acquisition was controlled by Xcalibur 3.0.63 software.

Identification and Quantification of Peptides and Proteins-Mass spectra were processed using the MaxQuant software (version 1.5.3.8) (29). Data were searched, using the Andromeda search engine within MaxQuant (30), against the mouse Swiss-Prot database (downloaded 6.10.15). The mouse database contained 16,719 reviewed protein entries. The search parameters were: minimum peptide length 7, peptide tolerance 20 ppm (first search) and 6 ppm (second search), mass tolerance 0.5 Da, cleavage enzyme trypsin/P, and 2 missed cleavages were allowed. Carbamidomethyl (C) was set as a fixed modification. Oxidation (M), acetylation (protein N-term), and phospho (STY) were set as variable modifications. The appropriate SILAC labels were selected and the maximum number of labeled amino acids was set to 3. All experiments were filtered to a peptide and protein false-discovery rate (FDR) below 1%, and the match between runs feature was enabled. All raw files from both the phosphoproteome and proteome pipelines were analyzed together in MaxQuant. Within the MaxQuant output, phosphorylation sites were considered to be localized correctly if the localization probability was at least 0.75 (75%) and the score difference at least 5. Bioinformatics analysis was performed in the Perseus software environment, which is part of MaxQuant (Perseus version 1.5.0.15; www.perseus-framework.org). Significance testing was carried out using a Student's t test on log2-transformed ratios and controlled with a Benjamini-Hochberg FDR threshold of 0.05. Peptides quantified in three or more experimental repeats were deemed significantly changed and regulated by LAR phosphatase activity if they had a p value of < 0.05 and a ratio of < 0.667 or > 1.5 (at least a 1.5-fold change in abundance). The mass spectrometry proteomics data, including the MaxQuant output, have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the data set identifier PXD002545 (31).

Cluster Analysis, GO Analysis, and Kinase Motif Analysis-GProX software (32) was used to perform clustering of log2-transformed ratios from the MaxQuant output. Unsupervised fuzzy c-means clustering was used with an upper regulation threshold of 1 and a lower regulation threshold of −1. Overrepresentation of GO terms in the clusters was assessed within GProX using a binomial statistical test with a Benjamini-Hochberg p value adjustment, a p value threshold of 0.05, and a minimum occurrence of 2. DAVID (Database for Annotation, Visualization and Integrated Discovery) (33) was used to identify over-represented GO terms in the phosphoproteome data set. The background list comprised all of the proteins identified across our experiments. The threshold count and EASE score were set to 2 and 0.05, respectively. Phosphopeptides containing well-localized phosphosites were analyzed for predicted kinase motifs using GPS (34) with a high stringency setting. Protein network visualization was performed using Cytoscape (35) with the WordCloud plugin (36).

Absence of LAR Phosphatase Activity Leads to Alterations in the Global Phosphoproteome and Proteome-Our aim was to gain insight into the protein signaling networks downstream of LAR.
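The significance-testing procedure described above can be sketched in a few lines. This is an approximation of the Perseus workflow, not the authors' code: a one-sample t test on log2 SILAC ratios across replicates, a Benjamini-Hochberg adjustment, and the 1.5-fold-change cutoff. The data frame layout is hypothetical.

```python
# Hypothetical re-implementation of the filtering: rows are phosphosites,
# columns are log2 SILAC ratios from individual biological replicates.
import numpy as np
import pandas as pd
from scipy import stats

def significant_sites(log2_ratios: pd.DataFrame, fdr=0.05, fold=1.5):
    quant = log2_ratios.dropna(thresh=3)                 # quantified in >= 3 repeats
    t, p = stats.ttest_1samp(quant, 0.0, axis=1, nan_policy="omit")
    # Benjamini-Hochberg adjusted p values
    n = len(p)
    order = np.argsort(p)
    adj = p[order] * n / (np.arange(n) + 1)
    adj = np.minimum(np.minimum.accumulate(adj[::-1])[::-1], 1.0)
    padj = np.empty(n)
    padj[order] = adj
    # require at least a 1.5-fold change in either direction
    mean_fc = 2.0 ** quant.mean(axis=1)
    keep = (padj < fdr) & ((mean_fc > fold) | (mean_fc < 1.0 / fold))
    return quant[keep]
```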
We utilized SILAC (24, 25) to quantitatively compare levels of protein expression (proteome analysis) and phosphorylation (phosphoproteome analysis) in PDGF-stimulated wild-type (WT) and LARΔP (lacking cytoplasmic phosphatase domains) MEFs. Three populations of WT and LARΔP cells were SILAC labeled by culturing them in light (R0K0), medium (R6K4), or heavy (R10K8) SILAC media. Cells were left untreated or stimulated with PDGF-BB for 7 min as indicated in Fig. 1A. Within the phosphoproteome data set, 2559 unique phosphosites from 1311 proteins were identified with high localization scores (localization probability > 0.75; score difference > 5) in one or more experimental replicates. These phosphosites comprised 2125 (83%) serine, 260 (10%) threonine, and 174 (7%) tyrosine phosphorylation sites. Of these, 266 (10%) are not listed in PhosphoSitePlus (37) and are considered novel. To compare the phosphoproteomes of PDGF-stimulated WT and LARΔP cells, four biological replicates, including a label swap control, were incorporated into the experimental design (Fig. 1A). The overlap between the four biological replicates is shown in supplemental Fig. S2A: 54% of the phosphopeptides were identified in two or more replicates, and 27% were identified in three or more replicates. The Pearson's correlation coefficients for the peptide ratios measured across the four biological replicates, including the label swap experiment, ranged from 0.64 to 0.86, indicating good biological reproducibility (supplemental Fig. S2B). In cells lacking LAR phosphatase activity, a total of 270 phosphopeptides from 205 proteins showed a significant change in abundance (p < 0.05; > 1.5-fold change) (Fig. 1B; supplemental Table S1). Of these, 255 (95%) contained serine phosphorylation sites, 9 (3%) threonine, and 6 (2%) tyrosine. A total of 103 phosphosites were up-regulated and 167 down-regulated. Within our phosphoproteome data set we identified serine, threonine, and tyrosine phosphorylation events mediated by LAR, allowing us to gain an understanding of the global signaling landscape. LAR could contribute to the regulation of phosphorylation on these sites via modulation of the activity of specific kinases and phosphatases, or, in the case of tyrosine, via direct dephosphorylation, given that LAR is a tyrosine phosphatase. These LAR-dependent changes in phosphopeptide abundance could be caused by alterations in the regulation of specific phosphorylation events or by changes in protein abundance, hence our combined proteomic and phosphoproteomic approach. Within the proteomic data set (comparing PDGF-treated WT cells and LARΔP cells) a total of 2939 proteins were identified, 1150 with associated quantitation data in two or more biological replicates. Of these, 147 proteins (47 up-regulated; 100 down-regulated) showed a significant change (p < 0.05; > 1.5-fold change) in abundance in the LARΔP cells compared with the WT cells (Fig. 1C; supplemental Table S2). This is a significant finding, as 13% of the quantified proteome was changed because of the absence of LAR phosphatase activity, suggesting that LAR may be involved in regulating protein turnover. Merging the phosphoproteome and proteome data sets resulted in a measure of corresponding protein abundance for 23% of the quantified phosphorylation sites. Of the 270 LAR-dependent phosphorylation events, 11% changed significantly at the level of the proteome, indicating regulation at the protein level rather than the peptide level.
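The merging step just described, pairing each phosphosite ratio with its protein-level ratio to flag sites whose apparent change is explained by protein abundance, can be illustrated as follows. The table layouts, column names, and example values are invented for illustration; only the logic follows the text.

```python
# Hypothetical merge of phosphoproteome and proteome ratios.
import pandas as pd

phospho = pd.DataFrame({
    "protein": ["Trio", "Ctnnb1"],
    "site": ["S2458", "S191"],
    "phospho_ratio": [3.0, 0.55],     # LAR-deltaP / WT, example values
})
proteome = pd.DataFrame({
    "protein": ["Trio", "Ctnnb1"],
    "protein_ratio": [1.1, 0.60],     # LAR-deltaP / WT, example values
})

merged = phospho.merge(proteome, on="protein", how="left")
# Flag a site as protein-level regulated when the two ratios move together,
# i.e. the phospho change is within 1.5-fold of the protein change.
merged["protein_level"] = (
    (merged.phospho_ratio / merged.protein_ratio).between(1 / 1.5, 1.5)
)
print(merged)
```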
Tyrosine Phosphorylation Regulated by LAR-Considering potential direct LAR substrates, a loss of LAR phosphatase activity would lead to an increase in tyrosine phosphorylation of these proteins; hence we looked for tyrosine-phosphorylated peptides within the data set that increased in LARΔP cells. Only one tyrosine-phosphorylated peptide, belonging to the protein Lcp2 (SLP76), increased in abundance in the absence of LAR activity (LARΔP cells) (supplemental Table S1). SLP76 is an adaptor protein, mostly studied in T cells, that relays signals from activated receptors to the cytoskeleton (38). We have identified an increase in phosphorylation of Tyr465, a tyrosine residue located in the C-terminal SH2 domain of SLP76. The remainder of the tyrosine phosphopeptides decrease in abundance in LARΔP cells, which suggests that the regulation of phosphorylation on these sites occurs via an indirect LAR-regulated mechanism.

Biological Processes Regulated by LAR-Gene Ontology (GO) analysis of the phosphoproteins regulated by LAR revealed a number of enriched GO biological processes, molecular functions, and cellular components. The most significantly enriched terms are shown in Figs. 2A, 2B, and 2C (for the full DAVID output see supplemental Table S3). The dominant enriched GO terms were associated with cytoskeletal organization and cell adhesion. LAR has been shown to regulate the cytoskeleton in conjunction with TRIO, a guanine nucleotide exchange factor for small GTPases (39). Here we have identified a 3-fold increase in the phosphorylation of TRIO on Ser2458 and Ser2462 in LARΔP cells, indicating that LAR-dependent signaling networks regulate its phosphorylation status. Despite this link to cytoskeletal organization, the extent of the LAR-dependent cytoskeletal regulatory network has not been previously studied. It is evident from our phosphoproteome data set that a large number of cytoskeletal proteins depend upon LAR phosphatase activity for the regulation of their phosphorylation (Fig. 2D). LAR has also been shown to interact with cadherin, β-catenin, and plakoglobin to regulate adherens junctions and desmosomes (40-42). Here, we have identified specific LAR-dependent phosphorylation sites on these proteins and discovered additional LAR-regulated cell junction proteins (Fig. 2D). In LARΔP cells we identified a decrease in phosphorylation on cadherin-11 (Ser714), α-catenin (Ser641), β-catenin (Ser191; Ser675), δ-catenin (Ser864), and plakoglobin (Ser665), all proteins present at sites of cell-cell adhesion. β-catenin is a reported substrate for LAR, and its tyrosine dephosphorylation has been linked to inhibition of epithelial cell migration (43). Here, we did not identify specific tyrosine phosphorylation sites on β-catenin that may be directly dephosphorylated by LAR, but instead identified two serine residues with altered phosphorylation. Phosphorylation of one of these, Ser191, by JNK2 has been shown to be essential for nuclear accumulation of β-catenin in response to Wnt (44). It is possible that LAR is capable of regulating β-catenin phosphorylation indirectly, by regulating the activity of kinases that phosphorylate β-catenin, such as JNK2, as well as directly, by dephosphorylating specific tyrosine residues (43). In addition, we have evidence that LAR also regulates tight junctions, with phosphorylation of two key proteins, ZO-1 (Tjp1) and ZO-2 (Tjp2), decreased in LARΔP cells (ZO-1 Ser1614; ZO-2 Ser107, Ser239, Ser1136).
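The kind of over-representation test behind such GO enrichment results can be illustrated with a hypergeometric test. DAVID itself uses a modified Fisher exact (EASE) score, so the sketch below is a simplified stand-in, and the counts are invented for illustration.

```python
# Simplified GO enrichment test: is a term over-represented among the
# LAR-regulated proteins relative to the identified background?
from scipy.stats import hypergeom

N = 1311   # background: all identified phosphoproteins (example count)
K = 120    # background proteins annotated with the GO term (invented)
n = 205    # LAR-regulated proteins
k = 40     # LAR-regulated proteins annotated with the GO term (invented)

# P(X >= k) under random sampling without replacement
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p = {p_value:.2e}")
```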
LAR-regulated Phosphorylation Events Downstream of PDGF-In order to identify differential changes in abundance within the phosphoproteome data set, phosphopeptides were clustered according to their response to PDGF stimulation versus unstimulated cells. The comparison of PDGF-stimulated WT and LARΔP cells versus unstimulated WT cells allowed evaluation of the comparative end point of phosphopeptide abundance; this is the phosphorylation signal that the cells would ultimately respond to in the presence of PDGF. Differences may be due to a differential response to PDGF or to constitutive down- or up-regulation in unstimulated LARΔP cells; hence we also included a comparison of PDGF-stimulated LARΔP cells versus unstimulated LARΔP cells. Our experimental design included three biological replicates for the ratio between PDGF-treated and unstimulated WT cells, and two biological replicates for the ratios between PDGF-treated LARΔP cells and unstimulated WT or LARΔP cells (Fig. 1A). We obtained ratios for 375 peptides, each of which had been quantified in two biological replicates. Six clusters were identified (Fig. 3A and supplemental Table S4). Clusters 2, 3, 4, and 6 contained those phosphopeptides that, in the presence of PDGF, showed a LAR phosphatase-dependent alteration in relative abundance when compared with basal levels in WT cells. This is not true for the phosphopeptides in clusters 1 and 5, where similar levels were observed in both PDGF-stimulated WT and LARΔP cells when compared with unstimulated WT cells. Phosphopeptides in clusters 2 and 3 exhibited similar fold changes in phosphopeptide abundance upon PDGF stimulation in both WT and LARΔP cells compared with their basal levels; however, the abundance in PDGF-stimulated LARΔP cells compared with unstimulated WT cells was significantly different. This indicates that the absence of LAR phosphatase activity causes changes in basal levels of phosphorylation on these phosphoproteins. Phosphopeptides in clusters 4 and 6 have a similar fold change in LARΔP cells in response to PDGF whether compared with unstimulated WT or LARΔP cells; however, the fold change differs from that observed in WT cells. Enrichment analysis for GO terms over-represented in each cluster showed a clear distinction between the biological roles of the clusters (supplemental Table S4). This is highlighted in Fig. 3B, which focuses on Cellular Component GO terms. There is a clear distinction between the discrete cellular components within which the differentially regulated phosphopeptides reside; the majority of the enriched terms are cytoskeletal and vesicular compartments. With regard to PDGF stimulation, cluster 4 is perhaps the most interesting, as these proteins contain phosphosites that are rapidly phosphorylated in response to PDGF; however, this is not the case when LAR phosphatase activity is reduced. These responses are not caused by constitutive down-regulation in LARΔP cells. One of the enriched components in this cluster is the late endosome compartment (GO:0005770), which contains a Rab7a peptide phosphorylated on Ser72. Phosphorylation of this residue on Rab7a plays a regulatory role in late endosome maturation (45), and our data revealed a 14-fold increase in response to PDGF in WT cells; however, this was reduced over 2-fold in LARΔP cells (supplemental Table S4).
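The fuzzy c-means algorithm that produces such clusters, used here via GProX, assigns each phosphopeptide a graded membership in every cluster rather than a hard label. Below is a minimal self-contained sketch of the algorithm on synthetic ratio vectors; it is not the GProX implementation, and m is the usual fuzzifier parameter.

```python
# Minimal fuzzy c-means on synthetic log2-ratio vectors (375 peptides x 3 ratios).
import numpy as np

def fuzzy_cmeans(X, c=6, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))            # memberships (n x c)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]      # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))                # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.random.default_rng(1).normal(size=(375, 3))
centers, U = fuzzy_cmeans(X)
print(U.argmax(axis=1)[:10])   # dominant cluster assignment per peptide
```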
Within cluster 4 there was also an enrichment of cytoskeletal proteins (GO:0005856), including Sorbs3 (vinexin), a protein involved in regulation of actin stress fiber formation (46), and Add3 (gamma-adducin), a protein that promotes assembly of the spectrin-actin network, which plays a role in regulating both adherens and tight junctions (47). We have identified phosphorylation events on both Sorbs3 and Add3 that are reduced by up to 3-fold in the absence of LAR activity; hence this activity is a requirement for PDGF-regulated phosphorylation of these proteins (supplemental Table S4). These data highlight the interplay between LAR and PDGF in regulation of the cytoskeleton and protein transport.

FIG. 3. Phosphorylation events in WT and LARΔP cells show clusters of regulation correlated to distinct biological processes. A, GProX clustering of phosphopeptide abundance changes. Ratios of PDGF (7 min) stimulated WT and LARΔP cells over WT unstimulated cells, and of PDGF (7 min) stimulated LARΔP cells over LARΔP unstimulated cells, were subjected to unsupervised clustering using the fuzzy c-means algorithm. The number of phosphopeptides in each cluster is indicated. B, Overrepresentation of GO terms in the clusters was assessed within GProX using a binomial statistical test with a Benjamini-Hochberg p value adjustment (p value threshold 0.05). Enriched categories for GO Cellular Components are represented as a heat map.

Regulation of Kinase Activity by LAR-LAR-dependent phosphorylation of several kinases has been identified (Fig. 2D). These include Braf (B-Raf) and Mapk1 (ERK2), both members of the Ras-MAPK signaling pathway. An increase in B-Raf Ser484 and a decrease in ERK2 Tyr185 phosphorylation were observed in LARΔP cells. Phosphorylation of ERK1 and 2 on Thr183 and Tyr185 (Thr202/Tyr204 in human) occurs during MAPK signaling and activates the ERK kinases, which in turn can phosphorylate their many substrates. PDGF-dependent ERK1/2 phosphorylation at these activating sites has previously been shown to be reduced in the absence of LAR phosphatase activity (23), and this was verified here. Analysis of PDGF-dependent ERK phosphorylation in WT and LARΔP cells confirms that ERK activity is significantly reduced in LARΔP cells treated with PDGF when compared with WT cells (Figs. 4A and 4B). Re-expression of WT LAR in LARΔP cells increased ERK phosphorylation to levels resembling those observed in WT cells, confirming that LAR phosphatase activity is required for ERK activation (Figs. 4C and 4D). With the aim of delineating further signaling pathways regulated by LAR, we sought to identify those kinases which may be responsible for inducing phosphorylation of substrates within our phosphoproteomic data set. The kinase prediction tool GPS (34) was used to identify predicted kinases upstream of substrate motifs containing a phosphorylation site showing differential abundance between WT and LARΔP cells (270 phosphopeptides). Our proteome data set allowed the identification of instances where phosphopeptide abundance was a result of proteome regulation rather than control of specific phosphorylation sites by regulatory kinases and phosphatases. In order to control for these effects, any proteins found to have a fold change in expression similar to the change in phosphopeptide abundance were not included in our analysis. Of the remaining 240 LAR-regulated phosphorylation sites, 223 were identified as putative substrates for a particular kinase (supplemental Table S5).
Members of the CMGC family (which includes cyclin-dependent kinases, mitogen-activated protein kinases, glycogen synthase kinases, and CDK-like kinases) were predicted to phosphorylate the majority of sites (Fig. 5). The most predominant predicted kinase subfamily was the CMGC/CDK family, followed by the CMGC/MAPK subfamily, which includes the ERK, JNK, and p38 kinases. Other predominant kinases were MAPKAPK and mTOR (Fig. 5).

LAR Regulates mTOR Signaling-Our kinase prediction analysis revealed mTOR as a prominent node of regulation (Fig. 5). The mTOR signaling pathway is known to regulate protein synthesis via the mTORC1 complex and cytoskeletal organization via the mTORC2 complex (48). Considering the significant changes in protein abundance in LARΔP cells and also the number of LAR-regulated cytoskeletal proteins identified, it was hypothesized that LAR may be regulating the mTOR pathway. In order to further analyze the role of LAR in regulating the activity of mTOR, we used antibodies recognizing Ser2448-phosphorylated mTOR and Thr389-phosphorylated P70S6 kinase, both of which are indicators of active mTOR signaling. In WT cells, phosphorylation of mTOR on Ser2448 increased following stimulation with PDGF (Figs. 6A and 6B). However, the absence of LAR phosphatase activity in LARΔP cells resulted in a significant decrease in PDGF-dependent phosphorylation of this residue, establishing a role for LAR in mTOR signaling (Figs. 6A and 6B). Analysis of P70S6 kinase Thr389 phosphorylation revealed a response to PDGF similar to that seen with mTOR Ser2448 in WT cells, and reduced phosphorylation in LARΔP cells (Figs. 6A and 6C). Re-expression of WT LAR in LARΔP cells resulted in an increase in mTOR Ser2448 phosphorylation to levels resembling those observed in WT cells (Figs. 6D and 6E). Taken together, these results confirm a novel role for LAR phosphatase in the regulation of mTOR signaling.

JNK is a Key Node of Kinase Regulation by LAR-JNK kinases are involved in regulation of the actin cytoskeleton, a role also played by LAR. Predicted substrates for JNK kinases were found to be enriched within our phosphoproteomic data set (Fig. 5). Using GPS (34), predicted JNK targets identified in our data set of LAR-dependent phosphosites included: Eps8, a highly phosphorylated signaling adaptor protein that regulates actin dynamics and architecture (49-53); Stathmin1 and Stathmin2, both involved in microtubule disassembly (54); Tjp1, involved in tight junction assembly (55); and Tenc1 (Tns2), a focal adhesion protein that binds actin filaments (56). JNK is known to phosphorylate Ser62 of Stathmin 2 (57), and phosphorylation of this residue was significantly reduced in LARΔP cells compared with WT. These data prompted us to investigate whether LAR phosphatase regulates JNK phosphorylation.

FIG. 5. LAR regulates distinct kinase nodes. Phosphorylation sites regulated by LAR were searched using the kinase prediction tool GPS. All kinases predicted to phosphorylate at least five identified phosphorylation sites are displayed. Each node represents an individual kinase, and nodes are colored according to kinase group (see key for details). An edge connecting two nodes indicates that the corresponding kinases were predicted to phosphorylate at least one common residue. Node size corresponds to the total number of LAR-regulated phosphorylation sites predicted to be phosphorylated by the corresponding kinase.
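A highly simplified stand-in for the kind of motif-based kinase prediction GPS performs is shown below: scan the residues flanking each phosphosite for textbook consensus motifs. GPS itself uses group-based scoring over many motifs; the two regular expressions here are illustrative only.

```python
# Toy kinase-motif matcher on 15-mer windows centered on the phosphosite.
import re

MOTIFS = {
    "proline-directed (CMGC: CDK/MAPK/JNK)": re.compile(r"^.{7}[ST]P"),
    "basophilic (PKA-like R-R-x-S/T)":       re.compile(r"^.{4}RR.[ST]"),
}

def predict(window15: str):
    """window15: 15-mer with the phosphoacceptor at position 8 (1-based)."""
    return [name for name, rx in MOTIFS.items() if rx.match(window15)]

print(predict("AAAAAAASPAAAAAA"))   # S-P at the center -> proline-directed
print(predict("AAAARRASAAAAAAA"))   # R-R-x-S -> basophilic
```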
In the absence of LAR phosphatase activity, we observed significantly reduced JNK activity upon stimulation with PDGF (Figs. 7A and 7B). Consistent with this, we also observed in LARΔP cells a significant decrease in the activity of MKK7, an upstream kinase known to activate JNK (Figs. 7A and 7C), and of the JNK downstream effector c-Jun (Figs. 7A and 7D). Re-expression of WT LAR in LARΔP cells restored JNK phosphorylation (Figs. 7E and 7F), demonstrating that the LAR phosphatase domains are required for regulation of JNK activity. These data show that LAR plays a role in regulating PDGF-mediated activation of the JNK signaling pathway.

DISCUSSION

Phosphorylation events are crucial for the regulation of cell signaling networks and, consequently, the cell's response to a biological stimulus. The regulatory role of kinases in specific cell signaling pathways has long been established. In more recent years it has been realized that phosphatases can be viewed in a similar manner and can regulate specific cell signaling events rather than acting as generic dephosphorylation enzymes, as was once thought (1). The breadth of cell signaling pathways regulated by LAR has not previously been investigated. Using combined global quantitative phosphoproteomics and proteomics, we have provided a comprehensive analysis of signaling events regulated by LAR phosphatase. The phosphorylation of 270 sites on 205 proteins was significantly up- or down-regulated in LARΔP cells compared with WT cells. Our data establish that LAR phosphatase activity is essential for the regulation of many phosphorylation events within the cell that impact on a variety of cellular processes, particularly regulation of the cytoskeleton and cell-cell interactions. Our data set significantly expands the number of proteins regulated by LAR that are involved in these biological functions, and identifies specific regulatory phosphosites for future scrutiny. It is likely that LAR regulates phosphorylation via a number of mechanisms: via direct dephosphorylation, via regulation of the activity of other phosphatases or kinases that can directly modulate the specific site, or via alterations in protein abundance. As well as regulation at the phosphoproteome level, the absence of LAR also caused considerable changes to the identified proteome. These results highlight a possible role for LAR phosphatase activity in maintaining the levels of proteins within the cell, via regulation of either protein degradation or protein synthesis; we have evidence that LAR may be regulating both processes. Protein degradation is controlled via two major pathways: lysosomal proteolysis and the ubiquitin-proteasome pathway. A number of proteins with roles in these two pathways show significant changes in phosphorylation levels because of the inactivity of LAR. The phosphorylation of Ser72 on Rab7a, a small GTPase, was decreased in LARΔP cells. Dephosphorylation of this residue is necessary for late endosome maturation in preparation for lysosomal fusion and protein degradation (45). This is one example of LAR-dependent regulation of a serine residue that is likely to occur indirectly, via modulation of the activity of critical serine/threonine kinases or phosphatases upstream of Rab7a phosphorylation. Evidence for LAR's involvement in protein ubiquitination is the identification of LAR-regulated phosphorylation sites on three E3 ubiquitin-protein ligases: Rffl (Ser254), Rlim (Ser229), and Dtx3l (Ser9).
It is possible that the ubiquitin ligase activity of these proteins is regulated via these phosphorylation events. We have also identified LAR as a regulator of mTOR signaling. mTOR is a serine/threonine protein kinase that regulates numerous cellular functions, including protein synthesis and, consequently, cell growth (58). Additionally, LAR-regulated phosphoproteins include those involved in translation of mRNA and protein synthesis. Hence, LAR may contribute to the maintenance of protein levels via the regulation of protein synthesis and mTOR signaling. Within the data set are TRIO and β-catenin, proteins known to interact with LAR and, in the case of β-catenin, to be a substrate for it (13, 39, 43, 59-61). TRIO is a multi-domain protein that acts as a guanine-nucleotide exchange factor for the Rac and Rho small GTPases (39), and β-catenin is an important protein involved in regulation of cell-cell junctions (62). In addition to localizing LAR-regulated sites of phosphorylation on these proteins, we have also expanded the protein networks around these two proteins that likewise contain LAR-regulated phosphosites. For each of these proteins we identified LAR-mediated changes in serine phosphorylation, which could result from an alteration in the activity of a serine/threonine kinase or a serine/threonine phosphatase. These proteins may need to be localized in the vicinity of LAR, via a direct interaction with TRIO or β-catenin, in order to be regulated by these intermediate regulatory kinases or phosphatases. Also within our data set are IRS1 and IRS2, adaptor proteins that bind to the insulin receptor and regulate insulin sensitivity (63). Both proteins are reported to interact with LAR (59, 60). LAR is known to regulate insulin-dependent signaling; however, there is some debate in the literature as to whether this is because of direct dephosphorylation of the insulin receptor or a consequence of regulation of the pathway further downstream of the receptor (64, 65). IRS1 has been reported to be a direct substrate for LAR, although there is some controversy over whether this is the case (64, 65). Despite evidence that serine and threonine phosphorylation of IRS1 and IRS2 is important for regulation of insulin sensitivity (63), previous work has concentrated on identifying LAR-dependent tyrosine phosphorylation of IRS proteins. To date there has been no analysis of indirect, LAR-mediated phosphorylation events on IRS1 or IRS2 that contribute to modulation of the cell's response to insulin. Here, we have identified a reduction in serine phosphorylation of both IRS1 (Ser265) and IRS2 (Ser362) in LARΔP cells. Significantly, both phosphorylation events are reported to be insulin dependent (63). Grouping the phosphopeptides according to their relative abundance in PDGF-stimulated cells resulted in six distinct clusters. These clusters can be differentiated by their response to PDGF and also by their functional subclasses. There are three possible scenarios that may cause relative changes in abundance of the phosphopeptides between PDGF-stimulated LARΔP cells and unstimulated WT cells: (1) the levels of phosphorylation in unstimulated WT and LARΔP cells are similar, but the response to PDGF is altered; (2) the basal level of phosphorylation of the specific residue in unstimulated LARΔP cells has changed, coupled with an absence of PDGF response or a fold response similar to that of wild-type cells; or (3) there is a change at the level of the proteome, i.e., a change in protein abundance in LARΔP cells.
In each case the result would still be differential phosphorylation in PDGF-stimulated cells because of the absence of the phosphatase domains of LAR, which would ultimately lead to changes in the signaling pathways reliant on the specific phosphorylation events. Using cluster analysis, we have identified those phosphoproteins regulated by both LAR and PDGF, and these include Rab7a and a number of cytoskeletal proteins. c-Jun N-terminal kinase (JNK) is a serine/threonine kinase that is activated by a broad range of external stimuli, including PDGF, transforming growth factor-β, and environmental stress (66). Signaling via JNK regulates cell migration and enhances chemotaxis in response to PDGF stimulation (67). Several strands of evidence supporting a role for LAR in regulating JNK signaling are present within our data. First, a member of the JNK signaling pathway, Zak, is present within the LAR-regulated phosphoproteomic data set. JNK can be activated via phosphorylation of Thr183 and Tyr185 by the MKK4 and MKK7 kinases (68, 69). Zak is a stress-activated kinase upstream of both MKK4 and MKK7 (70), and phosphorylation of Ser638 of Zak was increased 3.9-fold in LARΔP cells. This is the first strand of evidence linking LAR to JNK signaling. The second piece of evidence is that we have identified specific JNK-regulated phosphosites within the data that are regulated by LAR, including Ser191 of β-catenin and Ser62 of Stathmin 2 (44, 57). In addition, using kinase motif predictions, we have identified JNK as a key node of regulation for a number of additional phosphosites within the LAR-regulated phosphoproteomics data set. LAR-regulated, PDGF-dependent phosphorylation of JNK on Thr183 and Tyr185 has been verified by Western blotting. This demonstrates the strength of our approach in identifying novel signaling pathways regulated by LAR and has highlighted a novel role for LAR in regulating JNK signaling.

CONCLUSIONS

We have employed a global quantitative phosphoproteomics approach for the interrogation of LAR-mediated cell signaling events. We have focused on obtaining information pertaining to both direct and indirect phosphorylation events to increase our knowledge of the complete landscape of LAR-regulated signaling. The study has identified LAR as a regulator of key signaling pathways, including mTOR and JNK, and has significantly expanded the number of proteins regulated downstream of LAR phosphatase activity.
Linking Quantum Discord to Entanglement in a Measurement We show that a von Neumann measurement on a part of a composite quantum system unavoidably creates distillable entanglement between the measurement apparatus and the system if the state has nonzero quantum discord. The minimal distillable entanglement is equal to the one-way information deficit. The quantum discord is shown to be equal to the minimal partial distillable entanglement, that is, the part of the entanglement which is lost when we ignore the subsystem which is not measured. We then show that any entanglement measure corresponds to some measure of quantum correlations. This powerful correspondence also yields necessary properties for quantum correlations. We generalize the results to multipartite measurements on a part of the system and on the total system. In this Letter, we introduce an alternative approach to quantum correlations via an interpretation of a measurement. In order to perform a von Neumann measurement on a system S in the quantum state ρ_S, correlations between the system and the measurement apparatus M must be created. As a simple example we consider a von Neumann measurement in the eigenbasis |i⟩_S of the mixed state ρ_S = Σ_i p_i |i⟩_S⟨i| with the eigenvalues p_i. Correlations between the measurement apparatus M and the system are found in the final state of the total system ρ_final = Σ_i p_i |i⟩_M⟨i| ⊗ |i⟩_S⟨i|, where |i⟩_M are orthogonal states of the measurement apparatus M. In this state ρ_final the correlations between M and the system S are purely classical, and no entanglement is created. The situation changes completely if we consider partial von Neumann measurements, that is, measurements restricted to a part of the system. In our main result in Theorem 1 we will show that in this case the creation of entanglement is usually unavoidable. We use this result to show the close connection of our approach to the one-way information deficit [8] before we extend our ideas to the quantum discord [6] in Theorem 2 and following. We consider bipartite quantum states ρ_AB and von Neumann measurements on A with a complete set of orthogonal rank-one projectors {Π_i^A}; the quantum discord is then

δ→(ρ_AB) = min_{Π_i^A} [ S(ρ_A) − S(ρ_AB) + Σ_i p_i S(ρ_i) ],   (1)

with p_i = Tr[Π_i^A ρ_AB] being the probability of the outcome i, and ρ_i = Π_i^A ρ_AB Π_i^A / p_i being the corresponding state after the measurement. The quantum discord is nonnegative, and zero if and only if the state ρ_AB is classical-quantum, i.e., of the form Σ_i p_i |i⟩_A⟨i| ⊗ ρ_i^B. Recently an interpretation of the quantum discord was found using a connection to extended state merging [11,12]. Another interpretation was given earlier in [13]. A closely related quantity is the one-way information deficit [8,14]. For a bipartite state ρ_AB it is defined as the minimal increase of entropy after a von Neumann measurement on A:

∆→(ρ_AB) = min_{Π_i^A} S(Σ_i Π_i^A ρ_AB Π_i^A) − S(ρ_AB),   (2)

where the minimum is taken over {Π_i^A} as defined above Eq. (1). The one-way information deficit is non-negative and zero only on states with zero quantum discord. It can be interpreted as the amount of information in the state ρ_AB which cannot be localized via a classical communication channel from A to B [14]. Given a bipartite quantum state ρ_AB, we recall that a partial von Neumann measurement on A can be described by coupling the system in the state ρ_AB to the measurement apparatus M in a pure initial state |0⟩_M, ρ_1 = |0⟩_M⟨0| ⊗ ρ_AB, and applying a unitary to the total state [15], ρ_2 = U ρ_1 U†. This situation is illustrated in Fig. 1. Figure 1. A measurement apparatus M is used for a von Neumann measurement on A (green colored area), which is part of the total quantum system AB.
The measurement implies a unitary evolution on the system MA, which can create entanglement E_{M|AB} between the apparatus and the system. The partial entanglement PE = E_{M|AB} − E_{M|A} quantifies the part of the entanglement which is lost when ignoring B. The measurement outcome is then obtained by measuring the apparatus M in its eigenbasis. The entanglement between the apparatus M and the system AB in the state ρ_2 will be called the entanglement created in the von Neumann measurement {Π_i^A} on A. Given a state ρ_AB, we want to quantify the minimal entanglement created in a von Neumann measurement on A, minimized over all complete sets of rank-one projectors {Π_i^A}. The minimal amount will be called E_meas, and it will depend on the entanglement measure used. In the following, the entanglement measure of interest will be the distillable entanglement E_D, which is defined in [16,17]. Thus, we define E_meas as follows:

E_meas(ρ_AB) = min_U E_D^{M|AB}(U ρ_1 U†),

where the minimization is done over all unitaries which realize some von Neumann measurement on A. Recalling the definition of the one-way information deficit in (2), we present one of our main results. Theorem 1. If a bipartite state ρ_AB has nonzero quantum discord δ→(ρ_AB) > 0, any von Neumann measurement on A creates distillable entanglement between the measurement apparatus and the total system AB. The minimal distillable entanglement created in a von Neumann measurement on A is equal to the one-way information deficit: E_meas(ρ_AB) = ∆→(ρ_AB). Proof. As pointed out in [18], the unitary U must act on states of the form |0⟩_M ⊗ |i⟩_A as follows: U |0⟩_M ⊗ |i⟩_A = |i⟩_M ⊗ |i⟩_A, where |i⟩_A is the measurement basis, and |i⟩_M are orthogonal states of the measurement apparatus. In general, the total state ρ_2 = U ρ_1 U† can always be expanded in this basis. From [19] we know that the distillable entanglement is bounded from below by the coherent information, E_D^{M|AB}(ρ_2) ≥ S(Tr_M[ρ_2]) − S(ρ_2), with the von Neumann entropy S(ρ) = −Tr[ρ log_2 ρ]. We mention that the same inequality holds for the relative entropy of entanglement, defined in [20] as E_R = min_{σ∈S} S(ρ||σ) with the quantum relative entropy S(ρ||σ) = −Tr[ρ log_2 σ] + Tr[ρ log_2 ρ]; see [21] for details. Noting that Tr_M[ρ_2] = Σ_i Π_i^A ρ_AB Π_i^A and S(ρ_2) = S(ρ_AB), this lower bound is exactly the entropy increase appearing in Eq. (2). On the other hand, we know that E_R is an upper bound on the distillable entanglement [22]. Consider the separable state σ = Σ_i |i⟩_M⟨i| ⊗ Π_i^A ρ_AB Π_i^A. From the definition of the relative entropy of entanglement it follows that E_R^{M|AB}(ρ_2) ≤ S(ρ_2||σ) = S(Σ_i Π_i^A ρ_AB Π_i^A) − S(ρ_AB), which holds for any measurement basis |i⟩_A. If we minimize this expression over all von Neumann measurements on A, we get the desired result. Note that from the above proof we conclude that min_U E_D^{M|AB} = min_U E_R^{M|AB}, and thus there does not exist bound entanglement in a partial measurement. The approach presented so far can also be applied to any other measure of entanglement E which satisfies the basic axiom of being nonincreasing under local operations and classical communication (LOCC) [20]. In this way we introduce the generalized one-way information deficit as follows:

∆→_E(ρ_AB) = min_U E^{M|AB}(U ρ_1 U†),   (3)

where U realizes a von Neumann measurement on A and ρ_1 = |0⟩_M⟨0| ⊗ ρ_AB. Using Theorem 1 it is easy to see that the generalized one-way information deficit is zero if and only if the state ρ_AB has zero quantum discord; this holds if E is zero on separable states only. In the same way as different measures of entanglement capture different aspects of entanglement, the correspondence (3) can be used to capture different aspects of quantum correlations. Let us demonstrate this by using the geometric measure of entanglement E_G [23] on the right-hand side of (3). As the corresponding measure of quantum correlations we obtain a geometric measure of quantum correlations [24], where the minimization is done over all states σ_AB with zero quantum discord.
Thus, this measure captures the geometric aspect of quantum correlations, similarly to the geometric measure of discord presented in [9]. The correspondence (3) also implies that certain properties of entanglement measures are transferred to corresponding properties of quantum correlation measures. This will be demonstrated in the following by finding a class of quantum operations which do not increase ∆→_E. This class cannot be equal to the class of LOCC, since ∆→_E can increase under local operations on A: starting from a classically correlated state and using only local operations on A, it is possible to create states with nonzero deficit ∆→_E. Demanding that the subsystem A is unchanged, we are left with quantum operations on B only. In the following we will show that ∆→_E does not increase under arbitrary quantum operations on B, denoted by Λ_B:

∆→_E(Λ_B(ρ_AB)) ≤ ∆→_E(ρ_AB).   (4)

Inequality (4) is seen to be true by noting that the entanglement E^{M|AB} does not increase under Λ_B, as it does not increase under LOCC. We can go one step further by noting that the distillable entanglement is also nonincreasing on average under stochastic LOCC. This captures the idea that two parties cannot share more entanglement on average if they perform local generalized measurements on their subsystems and communicate the outcomes classically; see [17] for more details. Defining the global Kraus operators describing some LOCC protocol by {V_i}, the probability of the outcome i is given by q_i = Tr[V_i ρ V_i†], and the state after the measurement with the outcome i is given by σ_i = V_i ρ V_i† / q_i. Then for the distillable entanglement [25] and the relative entropy of entanglement [26] it holds that

E(ρ) ≥ Σ_i q_i E(σ_i).   (5)

Inequality (5) implies that the corresponding quantity ∆→_E satisfies the related property

∆→_E(ρ_AB) ≥ Σ_i q_i ∆→_E(σ_i^AB),   (6)

where q_i and σ_i^AB are defined as above Eq. (5), and now {V_i} are Kraus operators describing a local quantum operation on B. Inequality (6) is seen to be true by using (5) in the definition (3). In the following we will include the quantum discord δ→ into our approach. We call the non-negative quantity PE = E_{M|AB} − E_{M|A} the partial entanglement. It quantifies the part of the entanglement which is lost when the subsystem B is ignored; see also Fig. 1. The following theorem establishes a connection between the partial entanglement and the quantum discord. Theorem 2. The minimal partial distillable entanglement created in a von Neumann measurement on A is equal to the quantum discord. Proof. We note that for any state ρ_AB the quantum discord can be written as

δ→(ρ_AB) = min_{Π_i^A} [ S(Σ_i Π_i^A ρ_AB Π_i^A) − S(ρ_AB) − S(Σ_i Π_i^A ρ_A Π_i^A) + S(ρ_A) ],

with the minimization over all von Neumann measurements on A. To see this we start with the definition of the discord in (1); the equality then follows by inspection, using the fact that the state after the measurement is classical on A. Using the same arguments as in the proof of Theorem 1, the desired result follows. Using Theorem 2 we will show that the properties (4) and (6) are also satisfied by the quantum discord. Inequality (4) can be seen to be true by noting that E_D does not increase under LOCC and that Λ_B does not change the state Tr_B[U ρ_1 U†]. To see that (6) also holds for the quantum discord, note that, using the same arguments as in the proof of Theorem 1, we can replace the distillable entanglement E_D by the relative entropy of entanglement E_R in Theorem 2 without changing the statement. Because of the convexity of E_R [26], the entanglement E_R^{M|A} is nondecreasing on average under quantum operations on B, since the reduced state ρ_MA is unchanged on average; hence the partial entanglement PE_R is nonincreasing on average under quantum operations on B. Using this result we see that (6) also holds for the quantum discord.
Theorem 2 allows us to generalize the quantum discord to arbitrary measures of entanglement E in the same way as it was done for the one-way information deficit in (3):

δ→_E(ρ_AB) = min_U [ E^{M|AB}(U ρ_1 U†) − E^{M|A}(U ρ_1 U†) ].

Using the same arguments as above, we see that the generalized quantum discord δ→_E satisfies the properties (4) and (6) for all measures of entanglement E which are convex and obey (5). So far we have only considered von Neumann measurements. In the following we will show that our approach is also valid with an alternative definition of the quantum discord [11,12,27,28]. With this observation we see that all results presented for the quantum discord also hold for the alternative definition of the quantum discord. In the following we will generalize our approach to multipartite von Neumann measurements on A. We split the system A into n subsystems: A = ∪_{i=1}^n A_i. A von Neumann measurement Λ will be called n-partite if it can be expressed as a sequence of von Neumann measurements Λ_i on each subsystem A_i: Λ(ρ) = Λ_1(· · · Λ_n(ρ)). Now we can introduce the n-partite one-way information deficit ∆→_n and the n-partite quantum discord δ→_n as before, with the minimization now running over all unitaries U that realize an n-partite von Neumann measurement on A. Using the same arguments as in the proofs of Theorems 1 and 2, we see that ∆→_n quantifies the minimal distillable entanglement between M and AB created in an n-partite von Neumann measurement on A, and δ→_n can be interpreted as the corresponding minimal partial distillable entanglement PE_D. We also note that this generalization includes n-partite von Neumann measurements on the total system; this can be achieved by defining A to be the total system. Since δ→_n = 0 in this case, the only nontrivial quantity is the generalized information deficit ∆→_n. A different approach to extend the quantum discord to multipartite settings was introduced in [29]. In this work we showed that the one-way information deficit is equal to the minimal distillable entanglement between the measurement apparatus M and the system AB which has to be created in a von Neumann measurement on A. The quantum discord is equal to the corresponding minimal partial distillable entanglement. Our approach can also be applied to any other measure of entanglement, thus defining a class of quantum correlation measures. This correspondence allows us to translate certain properties of entanglement measures to corresponding properties of quantum correlation measures. It may lead to a better understanding of the quantum discord and related measures of quantum correlations, since it allows us to use the great variety of powerful tools developed for quantum entanglement. We found a class of quantum operations which do not increase the generalized versions of the one-way information deficit and the quantum discord. We also generalized our results to multipartite settings. We thank Sevag Gharibian for interesting discussions and an anonymous referee for constructive suggestions. We acknowledge partial financial support by Deutsche Forschungsgemeinschaft. Note added. Recently an alternative approach connecting entanglement to quantum correlation measures was presented in [30]. There the authors show that nonclassical correlations in a multipartite state can be used to create entanglement in an activation protocol.
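To make the minimization in Theorem 1 concrete, here is a minimal numerical sketch for two qubits: it grid-searches the Bloch sphere for the projective measurement on A that minimizes the entropy increase of Eq. (2). The function names, the Werner-state example, and the grid resolution are illustrative choices, not part of the original Letter; a production implementation would use a proper optimizer rather than a grid.

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy S(rho) = -Tr[rho log2 rho] via eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def qubit_projectors(theta, phi):
    """Rank-one projectors onto an arbitrary qubit basis on the Bloch sphere."""
    v = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
    p0 = np.outer(v, v.conj())
    return p0, np.eye(2) - p0

def one_way_deficit(rho_ab, grid=60):
    """min over projective measurements on A of S(sum_i Pi rho Pi) - S(rho)."""
    eye_b = np.eye(2)
    best = np.inf
    for theta in np.linspace(0.0, np.pi, grid):
        for phi in np.linspace(0.0, 2 * np.pi, grid):
            post = sum(
                np.kron(p, eye_b) @ rho_ab @ np.kron(p, eye_b)
                for p in qubit_projectors(theta, phi)
            )
            best = min(best, entropy(post) - entropy(rho_ab))
    return best

# Example: Werner state p|psi-><psi-| + (1-p) I/4 has nonzero discord for p > 0.
psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho = 0.5 * np.outer(psi_minus, psi_minus) + 0.5 * np.eye(4) / 4
print(f"one-way deficit ~ {one_way_deficit(rho):.4f}")
```

For the p = 0.5 Werner state the search returns a strictly positive value, consistent with Theorem 1: the state has nonzero discord, so any von Neumann measurement on A must create distillable entanglement with the apparatus.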
Proportion of Streptococcus agalactiae vertical transmission and associated risk factors among Ethiopian mother-newborn dyads, Northwest Ethiopia Group B Streptococcus (GBS) vertical transmission causes fetal and neonatal colonization and disease. However, there is a scarcity of data in low-income countries, including Ethiopia. We conducted a cross-sectional study on 98 GBS-positive mothers and their newborns to determine the proportion of vertical transmission. GBS was identified from swabs by using recommended methods, and vertical transmission at birth was confirmed by culture of body-surface swabs of newborns taken within 30 minutes of birth. GBS positivity among the swabbed specimens was 160/1540 (10.4%); 98 were from 385 recto-vaginal swabs of pregnant women, and 62 were from 1,155 swabs of the 385 newborns. Of the 98 GBS-positive cases, 62 newborns were GBS colonized, giving a vertical transmission proportion of 63.3% (95% CI: 54.1–72.4%). The proportion of vertical transmission in this study was within the range of many other global studies, but higher than recently published data from Ethiopia. Maternal educational level, maternal employment, and a lower number of ANC visits were risk factors significantly associated with GBS vertical transmission. Efforts need to be made to screen pregnant women during antenatal care and to provide IAP to GBS-positive cases in order to reduce mother-to-newborn vertical transmission. Data on mother-to-newborn GBS transmission and its associated risk factors in Ethiopia, particularly in the study area, are scarce, though one published report from the Eastern part of Ethiopia showed a 45.02% rate of vertical transmission 12 . Though IAP lowers the risk of neonatal colonization and subsequent neonatal infections in Western countries, culture-based screening of pregnant women at >35 weeks of gestation and provision of IAP to positive cases is not practiced in Ethiopia. Understanding the proportion of vertical transmission from pregnant women to their newborns immediately following birth is crucial for devising control and preventive strategies. In addition, to advance knowledge about strategies for preventing vertical transmission of colonizing GBS, it is worth investigating the risk factors associated with vertical transmission in Ethiopia. This study, therefore, aimed to determine the proportion of vertical transmission of colonizing GBS and its associated risk factors among GBS-positive pregnant women and their newborns in Northwest Ethiopia. Results A total of 1,540 swab specimens from the 770 study participants were collected. Of these, 385 were recto-vaginal swabs from the pregnant women (with >35 weeks of gestation), and the remaining 1,155 swabs were from three body sites (ear, nose, and umbilicus) of the 385 newborns. The overall GBS positivity rate among all the swabbed specimens was 160/1540 (10.4%). Of the positive cases, 98 GBS-colonized cases were from the 385 recto-vaginal swabs of pregnant women (Table 1). As shown in Table 2, 62/385 (16.1%) of the newborns tested in this study were GBS colonized, and of the total swabs processed, 62/1,155 (5.4%) were positive for GBS. Among the three newborn body-surface sites swabbed, 19 (4.9%), 14 (3.6%), and 15 (3.9%) were positive only from the ear, nasal, and umbilical swabs of the 385 newborns, respectively (Table 2).
Fourteen newborns had GBS colonization at more than one body site: 7 (1.8%) at the nose and umbilicus, 2 (0.5%) at the ear and umbilicus, and 5 (1.3%) at the nose, ear, and umbilicus (Table 2). Demography, obstetric characteristics and maternal GBS colonization. In this study, 98 pregnant women (≥35 weeks of gestation) asymptomatically colonized with GBS were analyzed to identify the proportion of vertical transmission and its associated risk factors. As shown in Table 1, among the 98 GBS-colonized mothers, 74.5% were below the age of 25 years, 78.6% were urban dwellers, and 73.5% were housewives. The majority of the mothers had secondary education (40.8%), followed by those with non-formal education (30.6%); 57.1% were multigravida, and 68.4% had three or fewer ANC visits (Table 1). Demography and newborn GBS colonization. As shown in Table 1, among the 98 newborns who participated in this study, 57.1% were males, 99.0% were delivered at >37 weeks of gestation, 88.8% weighed 2.5 kg or more, and 91.8% and 98.0% had APGAR scores of 7-10 at one and five minutes, respectively. About 48 (65.8%), 51 (66.2%), and 46 (63.9%) of the GBS-colonized newborns were born to colonized mothers who were under 25 years old, urban dwellers, and housewives, respectively. Newborns with an APGAR score of <7 at five minutes and those weighing 2,500 grams or more had a higher colonization rate (Table 1). Proportion of vertical transmission and its predictors. The proportion of vertical transmission of GBS from the pregnant women to their newborns was 62/98 (63.3%). To estimate the relative contribution of each factor to vertical transmission of GBS, adjusted odds ratios were calculated by using multivariable logistic regression. This analysis showed that maternal educational status, maternal occupation, and maternal ANC follow-up were significantly associated with vertical transmission. Newborns born to mothers with primary and secondary educational levels were 32.7 times (AOR = 32.657; 95% CI: 2.271, 469.541) and 18.8 times (AOR = 18.849; 95% CI: 1.276, 278.483) more likely to be colonized, respectively. These wide confidence intervals might be owing to the small sample size. Moreover, newborns delivered by employed mothers were about 5 times more likely to be colonized (AOR = 4.599; 95% CI: 1.096, 19.297), while mothers who had 4 to 5 ANC visits were less likely (AOR = 0.209; 95% CI: 0.063, 0.696) to transmit GBS to their neonates (Table 3). Discussion Streptococcus agalactiae has remained an important cause of infection in the perinatal period. GBS is of particular interest because the IAP given to colonized mothers can reduce the burden of early-onset neonatal disease, though it has limited impact on late-onset GBS-associated neonatal disease. This requires screening of pregnant women in the third trimester of pregnancy or in labor and administration of antibiotics to those colonized. The reported maternal carriage rate of GBS when multiple sites were cultured ranges from 10 to 41%, while vertical transmission varies from 40% to 70% 13,14 . This variation in the prevalence of colonization might be associated with geographic region, socio-demographic status, and sexual activity.
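As a quick check on the headline figure, the proportion and a 95% confidence interval can be recomputed from the raw counts. This is a minimal sketch using the normal approximation; the paper does not state which interval method was used, so the result should land near, but not necessarily exactly on, the reported 54.1–72.4%.

```python
import math

transmitted, total = 62, 98
p = transmitted / total                   # 0.633 -> 63.3%
se = math.sqrt(p * (1 - p) / total)       # standard error of a proportion
lo, hi = p - 1.96 * se, p + 1.96 * se     # normal-approximation 95% CI
print(f"{p:.1%} (95% CI: {lo:.1%} - {hi:.1%})")
```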
Reduction of this vertical transmission of GBS to the newborn has been a priority over the past three decades. The method that has proved most successful has been screening of all pregnant women during pregnancy and provision of intrapartum antibiotics to colonized women in labor. However, GBS screening and IAP provision are not practiced in the study site, nor in Ethiopia at large. Using this strategy, GBS infection among newborns in the USA was reduced from 1.7-1.9 per 1,000 live births in the early 1990s to 0.34-0.37 per 1,000 newborns in 2008 15 . We assumed that the colonization observed here represented vertical transmission, since specimens were collected immediately after birth without any further handling and/or wiping of the newborns, which makes acquisition of GBS from other sources unlikely. The proportion of vertical transmission of GBS from pregnant mother to newborn in the current study (63.3%) is therefore within the range of global reports. However, when compared with individual reports from around the world, this proportion of vertical transmission is higher than in studies conducted in the USA (53.8%) 16 ; China (7.6% to 16.7%) 17-19 ; Bangladesh (38.0%) 20 ; Kuwait (35.5%) 13 ; Eastern Ethiopia (45.02%) 21 ; and Central and Southern Ethiopia, where the overall vertical transmission rate was 54.1%, with 56.8% recorded at Adama Hospital Medical College, 49.2% at Tikur Anbessa Specialized Hospital, and 59.1% at Hawassa Referral Hospital 22 . These discrepancies might be due to variations in demographic characteristics, geographic location, and service availability, as described in reports from elsewhere in the world 7 . On the other hand, the proportion of vertical transmission recorded in the current study was comparable with other reports 23 , including the USA (61.5%) 24 and Hawassa Referral Hospital, Ethiopia (unpublished data, 59.1%) 22 . A high rate of vertical transmission contributes to high neonatal and maternal morbidity and mortality due to GBS. Vertical transmission of GBS is preventable, and thus health care providers and policymakers need to consider this in their maternal and neonatal mortality reduction strategies 25 . The factors that determine vertical transmission of GBS from colonized mother to newborn are not clearly identified, mainly in low-income countries. Thus, investigating the risk factors that might be associated with vertical transmission would be useful for devising preventive strategies. The three risk factors significantly associated with vertical transmission of GBS from asymptomatically colonized mother to newborn in the current study were maternal educational status, maternal occupation, and ANC follow-up. Mothers with primary or secondary educational levels and employed mothers were 65.7%, 84.9%, and 59.9% more likely, respectively, to transmit the colonizing GBS vertically to their newborns as compared with their counterparts. Pregnant women who had 4-5 ANC visits during their current pregnancy had 0.209 times the odds of transmitting the colonizing GBS to their newborns vertically. Various studies have shown that maternal age, parity, marital status, education, occupation, and high body mass index might be associated with GBS colonization in pregnancy 26,27 , which in turn may lead to vertical transmission to newborns.
This is because colonization of the maternal birth canal with GBS is assumed to be a major source of newborn colonization, which in turn is an important risk factor for the morbidity and mortality of neonates with early-onset GBS disease. Conclusion The current study revealed that the proportion of vertical transmission of GBS from asymptomatic mother to newborn was within the range of reports worldwide. Maternal primary and secondary school educational level, maternal employment, and zero to three ANC visits during the current pregnancy were significant risk factors for GBS vertical transmission from colonized mothers to their newborns. More studies in different parts of the country are needed to establish guidelines for GBS screening and IAP use in Ethiopia. Prospective studies are also needed to evaluate the GBS disease burden in Ethiopian mothers and infants. Materials and Methods Ethical considerations. The study was conducted after ethical approval was secured from the Ethical Review Committee (IRB) of the University of Gondar (R.No. O/V/P/RCS/05/478/2015), in accordance with the Declaration of Helsinki as a statement of ethical principles. Permission was obtained from the hospital's administrative bodies. The study participants were informed about the value of the study before any data or samples were collected, and informed consent and/or assent was obtained from them. The ear, nasal and umbilicus swabs were collected by experienced midwives and processed at the bacteriology laboratory by using the CDC 2010 guideline for prevention of perinatal GBS disease 15 . Participants (mothers) had the full right to continue or to withdraw their newborns from the study. Confidentiality of all the participants' information was maintained throughout the study. Study area. The study was conducted at the University of Gondar Comprehensive Specialized Hospital in the Amhara National Regional State, Northwest Ethiopia. The population projection of the Central Statistical Agency of Ethiopia (CSA) and reports of the Amhara National Regional State Health Bureau show that the region has about 20,018,988 people, nearly half of whom are female. The University of Gondar Comprehensive Specialized Hospital is one of the oldest hospitals in Ethiopia and admits around 450 to 600 pregnant women per month. The hospital has service and teaching laboratory facilities where microbiological identification takes place 28,29 . Study design and period. A hospital-based prospective cross-sectional study was conducted from December 2016 to November 2017. Population. Source population. All pregnant women and their newborns who attended the University of Gondar Comprehensive Specialized Hospital in the Amhara National Regional State, Northwest Ethiopia. Inclusion and exclusion criteria. Inclusion criteria. Pregnant women who were colonized with GBS at labor, with >35 weeks of gestation, and their corresponding newborns delivered vaginally. Exclusion criteria. Pregnant women who were severely ill, mentally unstable, or admitted through the emergency room; those with current vaginal bleeding or current use of systemic antibiotics active against GBS within two weeks prior to data collection; and newborns delivered other than through the birth canal. Study variables. Dependent variable. Proportion of GBS vertical transmission. Independent variables.
The socio-demographic factors were age, education, address, and occupation; the obstetric/gynecological factors were use of contraceptive methods, ANC follow-up, gravidity, parity, breastfeeding (close contact with the mother), prolonged labor, prolonged rupture of membranes, premature rupture of membranes, pre-term delivery, low birth weight, intrapartum fever ≥38 °C, history of fetal losses, and history of neonatal deaths. Sample size determination. Three hundred eighty-five pregnant women who were in active labor at a gestational age of 35 weeks or more, and their 385 newborns following delivery, were investigated for GBS colonization as part of a study designed for a different purpose. In that study, 98 women were found to be GBS positive, and all of these GBS-positive pregnant women and their corresponding newborns (98) were included in the present study to determine the proportion of vertical transmission. Data collection, sampling technique and laboratory procedures. The socio-demographic and biological data were collected from the pregnant women (with >35 weeks of gestation) at the point of labor and from their newborns by trained midwives and laboratory technologists at the maternity ward of the hospital until the prespecified sample size was reached. Data collection tools. Questionnaire. Socio-demographic data were collected from the study participants by using a semi-structured questionnaire to investigate risk factors associated with newborn GBS colonization (vertical transmission). The questionnaire was prepared on the basis of published studies and tailored to our objective. It was first prepared in English and translated into Amharic, the language the study participants speak. After the data were collected in Amharic, the responses were re-translated into English for analysis and reporting. Swab culture. The rectovaginal, ear, nasal, and umbilicus swabs collected were transported in Amies transport medium and placed into Todd-Hewitt selective enrichment broth containing colistin (10 µg/mL) and nalidixic acid (15 µg/mL) (Carl Roth GmbH + Co. KG, Schoemperlenstr. 3-5, D-76185 Karlsruhe, Germany). The broth was incubated at 37 °C in 5% CO2 for 24 hours. The growth was sub-cultured onto 5% defibrinated sheep-blood agar, and the isolates were identified by colony morphology, Gram staining reaction, β-hemolytic features, and the CAMP test. The culture plates were re-incubated for another 24 hours and inspected again if β-hemolytic colonies were not observed during the first 24-hour incubation. The β-hemolytic colonies morphologically consistent with GBS were sub-cultured onto 5% defibrinated sheep-blood agar and subjected to the CAMP test for presumptive identification. The CAMP test was done by inoculating a known Staphylococcus aureus strain down the center of a 5% defibrinated sheep-blood agar plate with a wire loop. GBS was streaked in a straight line perpendicular to the S. aureus streak, to within 2 mm of it. The plate was then incubated at 35 °C for 24 hours. A positive CAMP result was indicated by an arrowhead-shaped enhanced zone of beta-hemolysis in the area between the GBS and S. aureus streaks, with the arrow pointing towards the S. aureus streak. Quality control. About 5% of the questionnaires and the protocol were pre-tested to check their suitability.
Data cleaning was done daily, and Streptococcus agalactiae (ATCC 12386), Enterococcus faecalis (ATCC 29212), Streptococcus pyogenes (ATCC 19615), Staphylococcus aureus (ATCC 29213) and Escherichia coli (ATCC 25922) were used as quality control strains. Data analysis. Data were entered into an Excel spreadsheet, cleaned, and exported to IBM SPSS version 20 (Chicago, IL, USA) for analysis. Results were reported in text and tables. Descriptive statistics were used to summarize the characteristics of the study participants. The association between the outcome variable (proportion of vertical transmission of GBS) and each independent variable (demographic and clinical factors) was analyzed by using a multivariable logistic regression model. All the variables were entered into the multivariable logistic regression using the backward LR selection procedure to control for confounding effects and to retain only the statistically significant variables in the final model. Association between the outcome and the independent variables was expressed as adjusted odds ratios with 95% confidence intervals, at a p-value < 0.05. Goodness of fit of the model was checked by the Hosmer-Lemeshow test (p = 0.828).
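For readers who want to reproduce this kind of analysis, the sketch below fits a multivariable logistic regression and converts the coefficients to adjusted odds ratios with 95% confidence intervals. It uses Python with statsmodels rather than SPSS, and the data frame is synthetic; the column names only mirror the predictors discussed above and are not the study's actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: one row per GBS-positive mother-newborn dyad.
rng = np.random.default_rng(0)
n = 98
df = pd.DataFrame({
    "transmitted": rng.integers(0, 2, n),      # 1 = newborn colonized
    "edu_primary": rng.integers(0, 2, n),      # maternal education dummies
    "edu_secondary": rng.integers(0, 2, n),
    "employed": rng.integers(0, 2, n),
    "anc_4_5_visits": rng.integers(0, 2, n),
})

X = sm.add_constant(df.drop(columns="transmitted"))
fit = sm.Logit(df["transmitted"], X).fit(disp=0)

# Exponentiated coefficients are the adjusted odds ratios (AOR).
ci = fit.conf_int()
aor = pd.DataFrame({
    "AOR": np.exp(fit.params),
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
    "p_value": fit.pvalues,
})
print(aor.round(3))
```

Exponentiating a logistic regression coefficient yields the adjusted odds ratio for that predictor with the others held fixed, which is how values such as AOR = 0.209 for 4-5 ANC visits arise.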
HEALTH CONTROL OF PIG HERDS ON COMMERCIAL FARMS The concept of modern industrial production of pigs on commercial farms is based, among other things, on the implementation of biosecurity measures and on solving problems of environmental protection, which greatly burden production. It is well known that good health is a prerequisite for good pig reproduction, that is, for successful and profitable production. The health status of the herd depends on many factors, such as maintenance technology, nursing, nutrition, organization, the level of staff training, and the systematic implementation of good health care policies. Today, we are witnessing a high incidence of diseases of bacterial and viral etiology, as well as certain parasitoses, that seriously affect pig production under intensive farming conditions. Keeping such diseases under control is possible only by applying appropriate prophylactic and therapeutic measures, as well as by increased monitoring by professional services. INTRODUCTION In intensive pig farming, several parameters can be used to show the success and profitability of production, such as the number of live or weaned piglets, the length of the fattening period, the number of non-reproductive days annually, etc. Today it is common for pig production on commercial farms to be presented as the number of reared piglets and fattening pigs per sow during a calendar year. The production parameters vary considerably between countries with more or less developed pig production (Radojičić et al. 2002). To improve pig production on the farm, it is important to assure the good health of breeding sows and of piglets in the first days after farrowing (Bojkovski et al. 2005, 2011, 2013b). This review paper presents a summary of our research related to solving reproductive health and biosecurity issues on commercial swine farms, as well as an overview of the environmental contaminants present on farms and some possible solutions. Flexible cooperation between farm breeders and professional services, with implementation of technical knowledge, application of a range of biotechnical measures, and an emphasis on disease prevention to promote good pig health, makes it possible to improve the welfare of pigs in actual production (Hristov et al. 2008). COMMON HEALTH AND REPRODUCTIVE PROBLEMS ON COMMERCIAL PIG FARMS In intensive pig production, control of herd reproduction is the primary task. It is well known that, in comparison to other breeds of domestic animals, swine are characterized by a very high reproductive potential, given that they mature sexually early, have a high ovulation rate, have relatively short gestation and lactation periods, and can quickly establish a new pregnancy after weaning the previous litter. From an economic point of view, proper, regular reproductive activity of pigs is of great importance. The reproductive efficiency of a herd is usually estimated on the basis of the age of female animals at first farrowing, the length of their reproductive exploitation, the duration of the interval between individual farrowings, and the size of the litter at weaning. The reproductive activity of pigs is influenced by a number of factors, including hereditary and endogenous ones (hormones, immunoglobulins, enzymes), environmental conditions, the presence of pathogens, and management and production technology (Uzelac and Vasiljević, 2011).
Reproductive efficiency is further determined by the housing system, diet, season, farm location, microclimate, implementation of biosecurity measures, herd size, herd health status (presence of breeding, parasitic and infectious diseases), body condition score, and the methods of artificial insemination (Lončarević et al., 1997; Petrujkić et al. 2011). Infertility is a common problem on commercial farms, and its causes are diverse and numerous. A current problem at most of our farms is seasonal infertility, which occurs during the summer months and is a serious impediment to producers who want to maximize the reproductive efficiency of the herd (Petrujkić et al., 2009, 2010, 2011). In this sense, in intensive pig production much attention is paid today to optimizing the microclimate conditions in housing facilities by using computerized ventilation systems and automated equipment for air conditioning, lighting, feeding, manure disposal, etc. Programming of the desired parameters provides favorable conditions for the animals, maximizes the expression of their genetic potential, and increases their reproductive productivity while greatly reducing stress. Adequate health care for farm animals, a high level of hygiene of animals, vehicles and staff, as well as the strict application of all required methods in the technology of artificial insemination, are the primary requirements for achieving high reproductive efficiency of breeding animals (Stančić et al. 2012). Conventional assessment of semen quality in boars, as an important segment of artificial insemination technology, is widely practiced on our commercial farms. The classical procedure for evaluating semen quality under commercial breeding conditions can identify ejaculates with low fertilization potential; however, it has not proved effective in predicting fertility parameters under field conditions (Tsakmakidis, 2011). Therefore, in order to overcome the problem of infertility and to control the reproductive efficiency of pigs, cooperation with the veterinary institute offers a range of novel laboratory methods, such as motility estimation using a computer-assisted sperm analyzer (CASA), automated sperm morphology analysis (ASMA), flow cytometry for the determination of chromatin integrity, the HOS test, etc. Thus, the fertility of boars can be continuously monitored, enabling a prompt response in ongoing production. The technology of preparing heterosperm insemination doses, involving the sperm of two or more terminal boars, has been applied in the process of artificial insemination at our commercial farms to produce a large number of piglets per sow (Vasiljević, 2012). Deep-frozen semen is also used on industrial swine farms. Its advantage is the ability to preserve genetic material for a longer period and to significantly reduce the risk of introducing disease into the herd (Stanković et al., 2007). However, deep freezing is not part of common practice because of certain technological drawbacks of the procedure and the low pregnancy rates and litter sizes it yields (Vidović et al., 2011).
The phenomenon of stress is also one of the serious problems on commercial farms. Farms that are still developing their management strategies have a more pronounced problem with stress than farms with a well-organized production system. The requirements of modern pig production today are to reduce stress to a minimum and to provide maximum comfort (welfare) for the animals. In this regard, it is important to understand the mechanisms of the adaptation syndrome and of stress reactions. Providing adequate living conditions for the animals positively affects their productivity, which then reaches the expected and desired levels. The high level of corticosteroids in the blood of animals exposed to stress reduces their resistance, making them highly susceptible to various infections. Therefore, it is very important to improve the welfare of animals on farms through the development and promotion of human awareness of respect, care and responsibility towards animals. The application of technical and technological solutions in animal production can provide maximum comfort and convenience to animals. The technology of feeding farm animals takes an important place in the prevention of stress and is a very important factor in maintaining good health and reproductive status. Fattened sows, which carry a large number of fetuses and consume large amounts of feed in facilities with increased humidity and temperature, are more susceptible to stress and often manifest signs of respiratory distress. This is one of the reasons for the introduction of new dietary guidelines according to specific production stages and animal categories. The dietary curve for breeding sows is precisely defined for each particular stage of production in order to enable early estrus after weaning of the piglets, to maximize the number of ovulated and implanted embryos, to increase the number of live and vital piglets delivered, and to increase the milk yield during lactation while preserving the good condition and health status of the female animals. All this prolongs the life and productivity of the animals and decreases the administration of drugs. Thanks to this approach, modern commercial farms are characterized by 35 or more weaned piglets per sow per year. Pig production on commercial farms is heavily burdened by a range of diseases affecting piglets. Piglet pathology is highly dynamic within the herd as a whole, as the substantial agglomeration of animals in a limited space enhances both horizontal and vertical transmission of infection. Intensive production conditions may result in the occurrence of so-called production, i.e. technological, diseases caused by certain microorganisms. Variations of the pathogenic organisms that commonly affect piglets are of great importance, not only in view of their resistance to drugs, but also because of their potential genetic recombination affecting the clinical picture and course of the disease, which may complicate the diagnosis as well as the therapeutic and prophylactic management (Blackburn, 1995; Bojkovski et al. 1997, 2005). The following diseases were observed on our pig farms: neonatal colibacillosis, edema disease, necrotic enteritis, dysentery, circovirus infection, and the porcine respiratory disease complex (PRDC).
In recent years, massive outbreaks of the respiratory disease complex (PRDC) have been recorded on pig farms both in our country and worldwide, and it is becoming a serious health problem at all technological stages of production. PRDC of pigs is a simultaneous infection of lung tissue with a number of respiratory pathogens and is a common term for pig pneumonia characterized by a multifactorial etiology. The isolated pathogens may vary between and within production herds (Honnold 1999; Ivetić et al. 2005; Golinar et al. 2006). The control of PRDC is difficult and complicated. The importance of the respiratory disease complex is based on the interaction between multiple respiratory pathogens, and knowledge of such interactions must be taken into consideration in order to implement effective control measures. Respiratory disease of pigs develops as a consequence of the presence of living infectious agents in the immediate surroundings of the animal or of a sudden drop in the immunoprotective mechanisms of the respiratory system (Ivetić et al. 2005). Contrary to the control of the common diffuse infectious diseases of pigs that persist in our country and are encompassed in the national legislation on mandatory suppression of infectious diseases, the detection and suppression of technopathies rather represents an economic need of the producers themselves. In modern pig production, genetics is applied with the aim of improving the productive capacity of existing breeds on commercial farms by creating breed varieties with higher genetic potential, rearing animals as pure breeds, or cross-breeding for commercial purposes. A part of our research was focused on the investigation of karyotype changes in intensive breeding. We established that karyotype changes may occur under the influence of chemical substances, which can originate from the feed, water or environment of the investigated animals (Bojkovski et al. 2010). Our recommendation to commercial-type farms and to reproduction and artificial insemination centers is to apply cytogenetic methods that enable the detection of carriers of hereditary anomalies. Implementation of these methods into the biosecurity plans of farms will positively affect the health status of the herd and thus improve production results. ECOLOGICAL ISSUES ON COMMERCIAL PIG FARMS The presence of chemical pollutants (heavy metals) and their impact on animal health on commercial farms has been monitored for a prolonged period of time. Heavy metals, which react with organic molecules and alter their structure and function, are particularly hazardous for all living systems. They are absorbed into the body via the respiratory and digestive systems and the skin. The results of several years of research have pointed out the risk of feed contamination with heavy metals and of their deposition in the body of animals, with consequences for the health and reproductive capacity of domestic animals. Heavy metal toxicity results in the formation of free radicals through inhibition of antioxidant enzyme activity and glutathione oxidation, leading to the creation of malondialdehyde (MDA) as a marker of oxidative stress. Their toxicity derives from the tendency to form covalent links with the sulfhydryl groups of biomacromolecules, or to displace certain cofactors, thus inhibiting the activity of particular enzymes (Bojkovski et al. 2008a, b, 2010a).
Our recommendation for commercial farms is to apply measures for reducing the risk from toxic heavy metals, to implement repeated monitoring of the quality of raw materials and finished products, and to use adequate protective agents against the toxic effects of these pollutants (Bojkovski et al., 2010b, c). BIOSECURITY ON COMMERCIAL PIG FARMS Biosecurity plans are critical in the prevention of disease and unwanted situations, as well as in business improvement (Uhlenhoop, 2007). The global objective of contemporary swine production in developed countries is to prevent the entry of disease into the herd, that is, to maximally protect pigs from contact with infectious agents from the environment and to prevent or minimize the transfer of pathogenic organisms between certain categories of animals within the herd. Therefore, special attention is paid to technical solutions that enable the protection of pig herds from harmful external influences. Such measures include the construction of a quarantine for newly purchased animals, the establishment of a separate department for the delivery of animals, and a special entrance for personnel, organized in line with the prescribed hygienic measures and a strictly defined behavior protocol. All measures aimed at protecting the herd from infection are known as biosecurity measures and include measures of external and internal biosecurity defined in a biosecurity protocol. External biosecurity measures include the multi-site housing system and access control (personnel, feed, equipment, materials, semen for artificial insemination, control of vehicles accessing the farm, control of rodents, insects and birds, entry protocols for the staff, control of animal delivery, disposal of dead animals, and quarantine for newly purchased animals), and they are aimed at preventing the transmission of infectious agents from the environment and from other herds in the region. Internal biosecurity pertains to procedures regulating personnel access and behavior on the farm (shower, farm clothing, circulation of people and animals through the farm, the use of tools and equipment for the work, etc.), application of the "all in-all out" principle, protocols for cleaning, washing and disinfection, as well as infection control through a program of preventive and curative animal health care (Uzelac and Vasiljević, 2011). Assessment of biosecurity based on relevant indicators should be a routine mechanism for the evaluation of biosecurity on farms, indicating the direction of future operations and, possibly, of their improvement (Lončarević et al., 1997; Stanković et al., 2008). For example, based on consideration of the failures in providing biosecurity, Stanković and Hristov (2009) reported the level of biosecurity on one of the investigated pig farms to be 3.96 (very good). This result indicates the current biosecurity status of the farm, but one should always keep in mind the mutual interaction and overall activity of all biosecurity parameters (Stanković and Hristov, 2009). The farmers bear the primary responsibility for protecting their herds from the introduction of disease by applying movement control, by following proper procedures for housing particular animal groups, and by appropriate sanitation. The personnel on the farm, as well as visitors, must be aware of their role in maintaining a safe health status on the farms (Stanković and Hristov, 2009).
CONCLUSION The aim of intensive pig farming on commercial farms is to produce a large number of weaned piglets and fatteners per sow per year. To achieve this goal, it is necessary to establish high reproductive efficiency of the breeding animals. This can be achieved by adequate health care, modern technology, and good organization of production, applying appropriate procedures in the technology of artificial insemination. A main goal of modern production on commercial farms is to reduce the phenomenon of stress to the lowest possible level. A high health status and health control applying appropriate healthcare programs, along with preventive and curative measures and protocols for external and internal biosecurity, are an imperative of modern pig farming. Our recommendation for commercial farms and centers for reproduction and artificial insemination is to apply cytogenetic methods that enable the detection of carriers of hereditary anomalies. In order to reduce the risk from chemical pollutants, the implementation of repeated monitoring of the quality of raw materials and finished products is highly required, as well as the application of adequate protective agents that minimize the toxic effects of these pollutants.
Systems Biology of Meridians, Acupoints, and Chinese Herbs in Disease Meridians, acupoints, and Chinese herbs are important components of traditional Chinese medicine (TCM). They have been used for disease treatment and prevention and as alternative and complementary therapies. Systems biology integrates omics data, such as transcriptional, proteomic, and metabolomics data, in order to obtain a more global and complete picture of biological activity. To further understand the existence and functions of the three components above, we reviewed relevant research in the systems biology literature and found many recent studies that indicate the value of acupuncture and Chinese herbs. Acupuncture is useful in pain moderation and relieves various symptoms arising from acute spinal cord injury and acute ischemic stroke. Moreover, Chinese herbal extracts have been linked to wound repair, the alleviation of postmenopausal osteoporosis severity, and anti-tumor effects, among others. Different acupoints, variations in treatment duration, and herbal extracts can be used to alleviate various symptoms and conditions and to regulate biological pathways by altering gene and protein expression. Our paper demonstrates how systems biology has helped to establish a platform for investigating the efficacy of TCM in treating different diseases and improving treatment strategies. Introduction According to traditional Chinese medicine (TCM), acupoints are linked in a network of meridians running along the surface of the body. The meridian system is a special channel network that consists of skin with a high concentration of nerves, various nociceptive receptors, and deeper connective tissues inside the body [1]. Moreover, "qi" (vital energy) in TCM is transferred by meridians, and its flow around the body can reflect the health status of individuals [2]. Acupoints are special locations in the body where the "qi" of viscera and meridians infuses and effuses. This phenomenon is thought to be similar to how signals are passed through neural networks. Acupoints are also considered reflection points (i.e., points on the body whose reflexes provide diagnostic information) for certain diseases and are the targets for clinical acupuncture [3]. Acupuncture is an alternative medicine methodology that originated in ancient China. It uses thin metal needles that pierce through the skin into acupoints to regulate the flow of qi around the whole body [4]. Needling at appropriate points can induce effects in locations remote from the insertion site [5]. Many recent brain imaging studies have shown that acupuncture has a specific correlation with the human nervous system. Using functional magnetic resonance imaging (fMRI) to scan the brains of subjects undergoing needling, researchers found that different acupoints corresponded to different cerebral areas and conditioned reactions [3,6]. Therefore, acupuncture has the potential to provide therapy for many diseases. Recent studies have found that electroacupuncture (EA) improved the pathology of motor disorders in a Parkinsonian rat model by restoring homeostasis in the basal ganglia circuit [7,8]. In other reports, the effects of EA treatment on neuropathic pain [9], acute spinal cord injury [10], and acute ischemic stroke [11], and in reducing inflammation [12,13], have also been rigorously studied. We review the relevant literature below and have depicted the human acupoints described in this paper in Figure 1.
Another important aspect of TCM is Chinese herbal medicine (CHM), which has been used for thousands of years as a major preventive and therapeutic strategy against disease [14]. Currently, more than 3,200 species of medicinal plants are used in CHM treatments. A fundamental feature of TCM is the TCM compound formula, which is composed of many kinds of herbs and sometimes minerals or animal components, similar to a cocktail therapy [15]. Each TCM compound formula is usually designed to combat specific symptoms and is combined with other herbs or prescriptions to tailor it to individual needs. Herbal extracts have been investigated for use in treating various diseases and have been used as a complementary or alternative form of medical therapy for cancer patients [16][17][18]. In other studies, focusing on chronic kidney disease [19], neurodegenerative disease [20], and diabetes mellitus [21], Chinese herbs have been reported to alleviate symptoms and mediate signal transduction. Systems biology, which combines computational and experimental approaches to analyze complex biological systems, focuses on understanding functional activities from a systems-wide perspective [22]. With the advent of high-throughput global gene expression, proteomics, and metabolomics technologies, systems biology has become a viable approach for improving our knowledge of health and disease [23,24]. In this paper, we review recent research articles on acupuncture and Chinese herbal therapy performed in conjunction with omics technologies to analyze and investigate the regulatory mechanisms of the treatments and their therapeutic applications. Systems Biology and Omics Data Systems biology is the computational integration of huge datasets to explore biomolecular functional networks [22,25]. The field offers many approaches and models for searching for biological pathways and predicting their effects and implications [26,27]. More recently, academic research has focused on developing basic informatics tools that can integrate large quantities of global gene expression, proteomic, and metabolomic data to mimic regulatory networks and cell function [26,28]. The principle of systems biology is to understand and compare physiology and disease, first at the level of molecular pathways and regulatory networks, then moving up through the cell, tissue, and organ levels, and finally to the whole-organism level [29,30]. It has the potential to provide new concepts for revealing unknown functions at all levels of the organism being studied. Omics data help to explore the different levels in systems biology from a holistic perspective. The suffix "-omics" is added to the object of study or the level of biological process to form new terms describing that information: for example, genomics from gene data, proteomics from protein data, and metabolomics from metabolic data [31]. With rapid progress in sequencing and computational methods, omics techniques have become powerful tools for researching biological mechanisms and diseases and for interdisciplinary applications in, for example, biology, mathematics, and informatics [32][33][34]. Genomic Studies of Acupuncture in Diseases Many individual transcriptional profiles of animals or patients have been mined to search for target molecules of acupuncture treatments. Candidate genes and pathways associated with the protective effect of acupuncture treatments have been revealed through genomic analysis for several diseases and symptoms. We list the acupoints and related symptoms in Table 1. 3.1. Analgesia.
Acupuncture was reported to reduce pain during surgery in the 1950s [35] and can treat different types of pain in many cases [36,37]. In particular, previous reports have indicated a major role for acupoint ST36, whose therapeutic properties include analgesia [38]. Gao et al. compared the hypothalamus transcriptional profiles of rats that responded or did not respond to EA analgesia treatment at ST36 [39]. They found that genes for glutamatergic receptors, ghrelin precursor peptides, the melanocortin 4 receptor, and neuroligin 1 may all be new targets for pain management. Moreover, Chae et al. found 375 genes showing significant variation in response to the analgesic effect of acupuncture (on LI4) [40]. Among these genes, cold shock domain protein A and kruppel-like factor 5 were identified as potential targets for investigating the mechanisms behind acupuncture-induced analgesia. Antiaging. In an anti-aging study, Ding et al. analyzed the hippocampus gene expression profiles in senescence-accelerated mice given acupuncture treatment [41]. Following acupuncture applied at the CV17, CV12, CV6, ST36, and SP10 acupoints, the researchers observed eight genes affected by age. They found that acupuncture could completely or partially alter the gene expression of heat-shock proteins (Hsp84 and Hsp86) and Y-box binding protein 1, genes related to oxidative damage, in senescence-accelerated mice, indicating that acupuncture may show promise in retarding aging in mammals. Hypercholesterolemia. Li and Zhang stimulated C57BL/6J mice, which are used in diabetes and obesity research, at the ST40 acupoint and found that cholesterol 7α hydroxylase was upregulated following EA, while sodium taurocholate cotransporting polypeptide was downregulated [42]. Each of these variations in gene expression might alter the balance of cholesterol metabolism and reduce cholesterol by regulating bile salt biosynthesis and flux, respectively [43,44]. Neuronal Diseases. In a spinal cord injury (SCI) study, EA treatment of rats at four acupoints (ST36, GB39, ST32, and SP6) was found by microarray analysis to restore sensory function [46]. In this study, the authors found that extending the EA treatment time for SCI rats could result in more changes in gene expression. After EA stimulation, the genes for calcitonin gene-related peptide (CGRP) and neuropeptide Y (NPY) were up-regulated and functionally annotated with the recovery of sensory functions. Acupuncture treatment at ST36 has also been reported to reduce neuropathic pain. Microarray analysis identified opioid receptor sigma as one of the differentially expressed genes; it is involved in opioid signaling, which has been implicated both in neuropathic pain and in the analgesic effects of EA at ST36 in neuropathic pain model rats [47]. Parkinson's disease is a neurodegenerative disease caused by the death of dopaminergic neurons [48]. Several studies have performed acupuncture treatments on different acupoints in 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-induced Parkinson's models. These models were used to evaluate the effects of acupuncture treatment at the GB34 and LR3 acupoints by analyzing the transcriptional profiles from cervical spinal cord or brain bilateral striatal tissues [49,50].
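The microarray screens reviewed above (e.g., the 375 genes responding to LI4 acupuncture) all rest on the same computational step: testing each gene for a significant expression difference between treated and control samples. The following is a minimal sketch of such a screen on simulated data, not the pipeline used in any of the cited studies; the sample sizes, thresholds, and spiked-in effect are illustrative only.

```python
import numpy as np
from scipy import stats

def bh_fdr(p):
    """Benjamini-Hochberg adjusted p-values (q-values)."""
    p = np.asarray(p)
    n = len(p)
    order = np.argsort(p)
    q = p[order] * n / np.arange(1, n + 1)
    q = np.minimum.accumulate(q[::-1])[::-1]  # enforce monotonicity
    out = np.empty(n)
    out[order] = np.clip(q, 0.0, 1.0)
    return out

rng = np.random.default_rng(0)

# Simulated log2 expression: 1000 genes x (5 control, 5 treated) arrays
control = rng.normal(8.0, 1.0, size=(1000, 5))
treated = rng.normal(8.0, 1.0, size=(1000, 5))
treated[:30] += 1.5  # spike in 30 "responsive" genes for the demo

# Per-gene Welch t-test and mean log2 fold change
_, p = stats.ttest_ind(treated, control, axis=1, equal_var=False)
log2fc = treated.mean(axis=1) - control.mean(axis=1)

# Call genes at FDR < 0.05 with at least a 2-fold change
de = (bh_fdr(p) < 0.05) & (np.abs(log2fc) >= 1.0)
print(f"{de.sum()} genes pass the differential expression screen")
```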
Among the genes downregulated by acupuncture treatment, proplatelet basic protein (Ppbp) was functionally annotated with cytokine-cytokine receptor interaction pathways, and cytotoxic T lymphocyte-associated protein 2 alpha (Ctla2a) was associated with a pathway relevant to Parkinson's disease according to KEGG analysis [49]. Similar results were observed from substantia nigra tissue of MPTP mice following treatment at the GB34 acupoint [51]. In previous studies, myelin basic protein, a major constituent of the axonal myelin sheath, has been reported to be up-regulated in MPTP mice and Parkinson's patients stimulated at the GB34 acupoint [52]. In the bilateral striatal tissue of mice brain following acupuncture at the GB34 and LR3 acupoints, the gene expression levels of gap junction alpha 4 protein (Gja4) and tubulin alpha 8 (Tuba8) were down-regulated and annotated with the cell communication and gap junction pathways, respectively. Furthermore, an up-regulated gene, neurotrophin-3 (Ntf3), was annotated with the mitogen-activated protein kinase (MAPK) signaling pathway [50]. Immune Modulation and Allergic Rhinitis. Evidence from mouse models has demonstrated that EA or acupuncture stimulation at ST36 can modulate the immune response [53][54][55]. Two studies using a 2,4-dinitrophenylated keyhole limpet hemocyanin (DNP-KLH) immunized mouse model have shown that gene expression patterns can change in the spleen [55] and hypothalamus [54] after EA treatment. Using Sprague Dawley (SD) rats, Kim et al. found that the genes altered following EA at ST36 play crucial roles in natural killer cell activation in spleen tissue [53]. However, the effects of EA treatments remain diverse, partly because of variation in immune responses when triggered in different types of tissues or models (e.g., DNP-KLH immunized mice versus SD rats). Acupuncture treatment has also demonstrated some effectiveness against allergic rhinitis in clinical settings, presumably through immune modulation [56,57]. Allergic rhinitis occurs when an allergen is inhaled and generates an immune response in an individual. Its symptoms can be reduced by acupuncture treatment at the EX-HN3, LI4, LI20, and ST36 acupoints [56]. The effectiveness of the treatment could be accounted for by the modulation of pro- and anti-inflammatory genes observed through microarray analyses of blood samples. Immune responses to allergic rhinitis may vary depending on the type of allergen. Shiue et al. have used the Phadiatop (Ph) assay, an effective screening tool that detects a diverse range of allergens, to demonstrate that patients with Ph-positive (+) and Ph-negative (−) allergic rhinitis display different gene expression profiles after acupuncture treatment [57]. A hierarchical clustering analysis revealed that three gene groups, those for active immune response, regulatory T cell differentiation, and apoptosis, were differentially expressed in the Ph (+) and Ph (−) patients after treatment [57]. These results also indicate the importance of personalized medicine for future investigations. Proteomic Studies of Acupuncture in Diseases Proteomic technologies, such as two-dimensional electrophoresis (2-DE), can screen for proteins differentially expressed between individuals responsive and nonresponsive to acupuncture treatments. These proteins were identified by various mass spectrometry (MS) techniques, and their related pathways were explored to determine the mechanism of the acupuncture treatments.
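The KEGG-based functional annotation used in several of the studies above typically reduces to an over-representation test: given the differentially expressed genes, one asks whether a pathway's members appear among them more often than chance would predict, using the hypergeometric distribution. A minimal sketch follows; the array size, pathway size, and overlap count are hypothetical, and only the DE list size (375 genes, from the LI4 study) is taken from the text.

```python
from scipy.stats import hypergeom

N = 20000  # genes assayed on the array (population size, assumed)
K = 150    # genes annotated to one KEGG pathway (hypothetical)
n = 375    # differentially expressed genes (as in the LI4 analgesia study)
k = 12     # DE genes that fall in the pathway (hypothetical)

# P(X >= k): chance of drawing at least k pathway genes in n draws
p_enrich = hypergeom.sf(k - 1, N, K, n)
print(f"pathway enrichment p-value = {p_enrich:.3g}")
```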
The results clarify the relationships between different acupoints and functions (Table 1). Acute Ischemic Stroke. EA efficacy has been studied by comparing levels of serum proteins in response to EA or drug treatments in acute ischemic stroke patients [11]. Patients were treated with EA at eight acupoints (MS6, BL10, GB20, LI4, PC6, B40, SP6, and ST36) once daily for ten consecutive days. The protein profiles were analyzed using 2-DE; SerpinG1 was up-regulated, while gelsolin, complement component 1 (C1), C3, C4B, and beta-2-glycoprotein I were all down-regulated. Other studies have indicated that platelet C4 expression is associated with acute ischemic stroke by comparing serum proteins from healthy individuals and ischemic stroke patients [58]. Neuronal Diseases. Using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) and subsequent protein database mining, Li et al. identified fifteen candidate proteins whose expression levels varied with acupuncture treatment for SCI [59]. Among them, annexin A5 (ANXA5) and collapsin response mediator protein 2 (CRMP2) were determined to be beneficial for neuronal survival and axonal regeneration. ANXA5 is a member of the annexin superfamily of calcium- and phospholipid-binding proteins, which are related to apoptosis and inflammation [60]. CRMP2 is a member of the collapsin response mediator protein family and is expressed exclusively in the nervous system, especially during development [61]. Additionally, among up-regulated proteins, heat shock protein beta-1 (HSPB1) has a reported role in cell stress/neuroprotection as well as in axonal regeneration [62]. These results reveal the potential for proteomics research to supplement and guide the treatment of SCI with acupuncture in future research. Moreover, Sung et al. applied EA at ST36 in an SD rat model, and subsequent 2-DE was used to identify signaling pathways involved in neuropathic pain [9]. Thirty-six differentially expressed proteins were identified in the neuropathic pain model, and the normal expression levels of their corresponding genes could be restored following EA treatment. Furthermore, Jeon et al. performed EA at GB34 in an MPTP mouse model of Parkinson's disease [63]. They observed restoration of behavioral impairment and rescued tyrosine hydroxylase-positive dopaminergic neurodegeneration after the treatment. The expression of myelin basic protein could also be restored to normal levels after EA treatment [63]. Kim et al. performed similar studies at GB34 and GB39 [64]. They identified thirteen differentially expressed proteins using MS, and four of these proteins (cytosolic malate dehydrogenase, munc18-1, hydroxyacylglutathione hydrolase, and cytochrome c oxidase subunit Vb) were restored to normal expression levels following EA treatment. These proteins are involved in cell metabolism and may reduce MPTP-induced dopaminergic neuronal destruction by decreasing oxidative stress. Taken together, these results suggest that acupuncture treatment is likely to have a neuroprotective effect on neuronal diseases through cell metabolic and nervous tissue developmental pathways, among others. Genomic Studies of Chinese Herbs in Diseases Many studies have investigated genes impacted by Chinese herb extracts from transcriptional profiles after treatment and analyzed their functions using pathway databases.
These results were used to evaluate whether Chinese herb extracts could be used as complementary drugs to treat specific symptoms. Here, we review Chinese herbal treatment-related studies that incorporated genomic analysis and interpret the efficacy of Chinese herb extracts for various symptoms and conditions (Table 2). Immunomodulatory Function. It is generally agreed that the fungus Ganoderma lucidum contains an abundance of polysaccharides with immunostimulatory properties [65], and these have been investigated using human CD14 (+) (cluster of differentiation 14) derived dendritic cells [66,67]. Dendritic cells are antigen-presenting cells that play a critical role in the regulation of the adaptive immune response. A comparison of transcriptional profiles from G. lucidum polysaccharide-treated dendritic cells and untreated dendritic cells showed a decrease in the expression of some phagocytosis-related genes, for example, CD36, CD206, and CD209. The expression of proinflammatory mediators was increased, including the chemokine (C-C motif) ligands (CCL) CCL20, CCL5, and CCL19; the interleukins (IL) IL-27, IL-23A, IL-12A, and IL-12B; and the costimulatory molecules CD40, CD54, CD80, and CD86 [67]. Additionally, altered expression levels of CD209, CCL20, CCL5, IL-27, CD54, CD80, and CD86 in cells after treatment with F3 (a polysaccharide fraction extracted from lingzhi) have been observed by Lai et al., with CD209 expression down-regulated in proportion to treatment time [66]. Another study investigated the treatment of human CD14+ monocytes with polysaccharide fractions extracted from North American ginseng [68]. The MAPK (extracellular signal-regulated kinase 1/2), phosphoinositide 3-kinase, p38, and nuclear factor-kappaB (NF-κB) cascades are key signaling pathways that may trigger immunomodulatory functions, as determined by Ingenuity Pathway Analysis [68]. With regard to other extracts, Cheng et al. reported that the NF-κB pathway was up-regulated in THP-1 cells by treatment with ethanol extracts of G. sinense, but not by G. lucidum [69]. Moreover, protosappanin A treatment (an ethanol extract of Caesalpinia sappan) has been reported to induce an immunosuppressive effect via the NF-κB pathway in a heart transplantation rat model [70]. Wound Repair. Herbal formulas containing extracts of Astragali radix, Rehmanniae radix, and Angelica sinensis show potential therapeutic benefits for wound repair [72]. For example, the formula NF3, which consists of a 2:1 ratio of A. radix and R. radix, has been found through microarray analysis to affect cell proliferation, angiogenesis, extracellular matrix formation, and inflammation in the Hs27 skin fibroblast cell line [72]. Another study in human skin substitutes revealed that the SBD.4 extract of A. sinensis possesses skin- and wound-healing activity [73]. After SBD.4 stimulation, the gene expression of collagen XVI and XVII, laminin γ-2 and 5, claudin 1 and 4, hyaluronan synthase 3, superoxide dismutase 2, and heparin-binding EGF was upregulated, and ADAM9 was down-regulated in EpiDermFT skin substitute tissues. The elevation of collagen XVI and XVII can enhance collagen fibril organization, and the reduction of ADAM9 may stimulate collagen deposition [89,90]. These results suggest that these skin- and wound-healing-related genes function in cell-substrate junction assembly. Postmenopausal Osteoporosis. Traditional Chinese herbalists have been treating patients with chronic kidney disease for thousands of years [19].
The effect of an herbal mixture consisting of Herba Epimedii, Fructus Ligustri Lucidi, and Fructus Psoraleae was investigated in aged osteoporotic rats induced by ovariectomy and calcium deficiency [74]. A comparison of the transcriptional profiles between nontreated and herbal formula-treated ovariectomized rats found that some genes specifically activated by the herbal mixture, such as prostaglandin EP3 receptor and osteoprotegerin, were involved in bone remodeling and bone protection. Moreover, they identified the involvement of estrogen-related proteins and suggested that the herbal formula may act like estrogens. Diabetes. In a genomic study of diabetes, liver and adipose tissues from a diabetic animal model were profiled after treatment with compound K (CK) [21]. The researchers found that some differentially expressed genes in liver tissue were associated with glycolysis/gluconeogenesis and pentose phosphate pathways, for example, up-regulation of aldolase 2 (B isoform) for the glycolysis pathway and phosphogluconate dehydrogenase for the pentose phosphate pathway, and down-regulation of fructose bisphosphatase 1 for the gluconeogenesis pathway after CK treatment. In adipose tissue, on the other hand, the differentially expressed genes were linked to adipocytokine signaling and fatty acid synthesis/metabolism pathways; for example, peroxisome proliferator-activated receptor gamma (adipocytokine signaling) and fatty acid synthase (fatty acid synthesis) were up-regulated after CK treatment. Plasma adiponectin, an adipocyte-secreted hormone involved in regulating gluconeogenesis, might play a key role in linking obesity, insulin resistance, and the T2DM syndrome. Higher adiponectin expression has been observed in conjunction with lowered obesity levels in human studies [92]. Furthermore, Fu et al. have found that adiponectin increases total glucose transporter 4 expression, which can aid in the response to insulin at the plasma membrane [93]. These results suggest that CK might be a candidate antidiabetic agent. Antitumor Activity. Chinese herbs have been known to possess anti-tumor activity, and some studies have investigated this functionality and its mechanisms in cancer treatments. F3-treated human leukemia THP-1 cells have been found to undergo apoptosis through death receptor pathways, as determined through microarray analysis [71]. Furthermore, F3 induces macrophage-like differentiation by caspase cleavage and p53 activation in THP-1 cells [94]. Another study using a microarray approach determined that approximately 25% of the genes regulated by the two lingzhi species (G. lucidum and G. sinense) overlapped [69]. G. sinense was observed to regulate inflammation and immune response pathways, while G. lucidum appeared to increase the expression levels of NF-κB pathway genes. Therefore, lingzhi appears to have efficacy as an anti-tumor agent against THP-1 cells. Other Chinese herbs have also been associated with anti-tumor effects [95][96][97][98][99][100][101]. Two studies have shown that American ginseng (Panax quinquefolius L.) extracts can inhibit tumor growth in HCT-116 [75] and MCF-7 cells [76]. A-kinase (PRKA) anchor protein 8-like (AKAP8L) gene expression was up-regulated, and phosphatidylinositol transfer protein alpha (PITPNA) gene expression was down-regulated after ginsenoside Rg3 treatment in HCT-116 cells [75]. Inhibition of the MAPK pathway and up-regulation of Raf-1 kinase inhibitor protein (RKIP) expression were observed in MCF-7 cells treated with hot water-extracted American ginseng [76].
These genes have anticancer potential and are considered to be involved in the anti-tumor mechanism of American ginseng. A comparison of transcriptional profiles between mouse macrophage RAW 264.7 cells before and after artemisinin treatment [83] found that the differentially expressed genes were most associated with the nitric oxide, cAMP, and Wnt/beta-catenin pathways. The authors suggested that the tumor regulation function of artemisinin might arise from its effect on nitric oxide biosynthesis; nitric oxide has been shown to suppress tumorigenesis [102]. In another study, Hara et al. examined eight benzodioxoloquinolizine alkaloids extracted from Coptidis rhizome and assessed the strength of their antiproliferative activity in eight human pancreatic cancer cell lines [77]. These results indicated that berberine is the major compound responsible for the antiproliferative response. However, the antitumor effect of berberine isolated from C. rhizome was poorer than that of whole C. rhizome, suggesting that other components of the herb are to some degree responsible as well [78]. MCF-7 cells treated with Coptidis extracts also displayed increased activation of anti-tumor pathways. Two critical anti-tumor cytokines, interferon-β and tumor necrosis factor-α, were identified, and Coptidis extracts were also found to induce cell growth arrest and apoptosis [103]. These studies suggest that C. rhizome or Coptidis extracts are able to inhibit cell proliferation by reducing tumor cell growth and promoting apoptosis. 5.6. Angiogenesis. Some Chinese herbs have been reported to promote angiogenesis. Ginseng refers to both Panax ginseng C.A. Meyer and Panax quinquefolius L. (Araliaceae), which contain similar components. Rg1, a P. ginseng extract, can promote angiogenesis by modulating cytoskeleton-related genes and enhancing endothelial nitric oxide synthase (eNOS) activities in human umbilical vein endothelial cells (HUVEC) [82]. Using miRNA microarray analysis, Chan et al. also showed that Rg1-induced down-regulation of miR-214 led to an increase in eNOS expression in HUVEC [104]. Taken together, the findings suggest that Rg1, a major ginsenoside from P. ginseng, can promote angiogenesis in HUVEC. Cardiovascular Disease. Salvia miltiorrhiza is widely used for human cardiovascular disorders in Asia, but the cellular mechanism by which it attenuates the growth of aortic smooth muscle under oxidative stress remains unclear. Salvianolic acid (SAL) or tanshinone (TAN) purified from S. miltiorrhiza was used to treat acute myocardial infarction in Wistar rats [85]. SAL decreases the expression of apoptosis-related genes, for example, BCL2-modifying factor (Bmf), in a later period after ischemia. TAN decreases the expression of intracellular calcium pathway-related genes, for example, voltage-dependent calcium channel alpha 1 (CACNA1), at an early stage after ischemic injury. Intracellular calcium and apoptosis pathways have been reported to be associated with ischemic cardiac injury and repair [105,106]. These results suggest that SAL and TAN could help prevent injury and contribute to post-injury repair in acute myocardial infarction. Neuronal Diseases. Su et al. compared gene expression profiles of H2O2-exposed human neuroblastoma SH-SY5Y cells following treatment with paeonol, which is extracted from Paeonia suffruticosa [20,84].
They identified that the extract up-regulated the mature T-cell gene set and found that paeonol was able to reduce H2O2-induced NF-κB activity. These data indicate that paeonol might have antioxidative properties and could be used to treat neurodegenerative diseases, for example, Alzheimer's disease [107]. Proteomic Studies of Chinese Herbs in Diseases Compared with studies investigating Chinese herbs using genomic analysis, far fewer proteomic studies have been performed, and on fewer herbs. Tumors and convulsive disorders are the major subjects of analysis by proteomic technologies to date. The Chinese herbs used for these conditions and related references are documented in Table 2. 6.1. Antitumor Activity. HepG2 liver cancer cells were treated with oridonin extracted from Isodon rubescens and analyzed by 2-DE and MALDI-TOF-MS [79]. Proteomic data showed that expression levels of heat shock 70 kDa protein 1, Sti1h, and hnRNP-E1 were altered after treatment; these proteins are associated with apoptosis pathways. Tubeimoside-1 (TBMS1), an extract from Bolbostemma paniculatum Franquet (Cucurbitaceae), has also been used as an anticancer treatment [108]. Xu et al. found 15 proteins differentially expressed between TBMS1-treated and untreated HeLa cells through MALDI-TOF-MS analysis [80]. These proteins were associated with mitochondrial dysfunction and ER stress-induced cell death pathways and participated in TBMS1-induced cytotoxicity [80]. The major component of Rhizoma paridis, Rhizoma paridis total saponin (RPTS), is responsible for the antitumor effects of this herb. Using MALDI-TOF-MS, Cheng et al. identified 15 proteins differentially expressed between RPTS-treated and untreated HepG2 cells, with most of them implicated in tumor initiation, promotion, and progression [81]. These results suggest that proteomic approaches could be useful tools to elucidate the pharmacological mechanisms responsible for anti-cancer drug activities. Moreover, Hung et al. found that S. miltiorrhiza aqueous extract (SMAE) inhibited the proliferation of the rat aortic smooth muscle cell line A10 under homocysteine (Hcy)-induced oxidative stress [87,88]. Furthermore, the intracellular reactive oxygen species concentration significantly decreased in A10 cells after SMAE treatment. Using MALDI-TOF-MS, the researchers suggested that the SMAE-induced inhibition of growth in Hcy-stimulated A10 cells occurred via the PKC/MAPK-dependent pathway [87,88]. 6.2. Convulsive Disorders. Uncaria rhynchophylla (UR) and its major component, rhynchophylline, have demonstrated effectiveness in treating convulsive disorders [109]. Lo et al. used SD rats with kainic acid (KA)-induced epileptic seizures and treated them with UR [86]. They then analyzed proteomic profiles from the frontal cortex and hippocampus of rat brain tissues and identified proteins differentially expressed between treated and untreated tissue. Macrophage migration inhibitory factor (MIF) and cyclophilin A were down-regulated and restored to normal levels in epileptic seizure rats after UR treatment. MIF has been considered a counter-regulator of normal neuronal actions, and its expression increases to reduce chronotropic actions in SD rats [110]. Cyclophilin A is a phylogenetically conserved protein that regulates immunosuppression [111]. These results suggest that MIF and cyclophilin A are involved in the mechanism of the anticonvulsive effect of UR.
Metabolomics Studies of Chinese Herbs in Diseases Another platform for systems biology research is metabolomics, involving the study of targeted small molecule metabolites (<1500 Da). In 1998, the metabolome was first introduced in the elucidation of yeast gene function [112]. However, the application of this concept can be traced back to the development of traditional medicine; while metabonomics chemically tracks metabolites in urine, feces, and so forth, traditional medicine used color, smell, and taste to facilitate diagnoses. Compared with genomics and proteomics, metabolomics can provide a more direct link between genotype and phenotype. NMR-Based Metabolomics Analysis. NMR-based metabolomics is an attractive method for the study of medicinal herb efficacy on disease symptoms. Using this method, nonselective and comprehensive analysis was performed on ginkgo extracts [113]. Moreover, Zhang et al. found that ginkgo extracts have multidirectional lipid-lowering effects on the rat metabonome [114]. They suggested that ginkgo extracts possess metabolomic functions, including limitation of cholesterol absorption, inactivation of HMG-CoA, and favorable regulation of essential polyunsaturated fatty acid profiles [114]. In an anti-aging study, serum metabolite profiles of aged rats were compared before and after treatment with Epimedium extracts [116]. The researchers were able to identify multiple changed age-related metabolites in serum, including carnosine, ergothioneine, unsaturated fatty acids, saturated fatty acids, and nucleotides. The levels of these age-related metabolites were restored to those found in younger rats after Epimedium treatment. Terpenoids, a group of important plant secondary metabolites also found in Ganoderma sp., have high cytotoxic and anti-tumor activity. Ganoderiol F, a tetracyclic triterpene, has been analyzed by LC/MS/MS and administered to rats for metabolomics and pharmacokinetics experiments [117]. Analysis of ganoderiol F metabolites by HPLC/MS/MS in orally or i.v.-treated rats showed good viability and low acute toxicity. According to these reports, ganoderiol F may show potential as an anti-cancer drug. Summary and Future Perspectives In this paper, we described and discussed omics research to date combined with acupuncture or CHM, performed at the systems biology level. We found that ST36 is the most widely used acupoint, and that it has multiple therapeutic functions and targets, including spinal cord injury [46], allergic rhinitis [56], analgesia [38], neuropathic pain [9,47], antiaging [41], knee osteoarthritis [45], and acute ischemic stroke [11]. Unlike other therapies, acupuncture treatments show variation in the transcriptional profiles and perceived effects of their subjects, whether rats [39] or humans [40]. Gao et al. reported that approximately 30% of rats demonstrated no analgesic effects during EA [39]. Similarly, Chae et al. reported that 40% of their participants felt only low analgesic effects during acupuncture, an observation that is more likely caused by genetic variation than by differences in psychology [40]. Individual variation in treatment response is becoming an important consideration when deciding whether acupuncture therapy is suitable for a given patient.
Based on the results reviewed in this paper, we suggest that systems biology approaches can be used to compile comprehensive clinical data on patients receiving acupuncture or CHM treatments, for example, the up- or down-regulated genes of Ph (+) or Ph (−) allergic rhinitis patients undergoing acupuncture treatment, and thereby provide useful information for improving future therapeutic strategies. In terms of acupuncture treatment, a single acupoint is commonly used for different symptoms; for example, the LI4 acupoint can be used for allergic rhinitis, analgesia, and acute ischemic stroke. Compared with acupuncture treatments, Chinese herbal treatments are more often investigated for their effectiveness against specific diseases, and studies focus on the pharmacological mechanisms of different herbal extracts; for example, benzodioxoloquinolizine alkaloids have anti-tumor activity [77], and CK has anti-diabetic efficacy [21]. In summary, we suggest that appropriate therapies, whether acupuncture, TCM, or a combination, can be personalized to individuals through analyzing their transcriptional or proteomic profiles (Figure 2). "Personalized medicine in TCM" can be developed even further and provide important information for therapeutic strategies in managing various diseases and conditions.
Mitochondrial DNA Haplogroup M7 Confers Disability in a Chinese Aging Population Mitochondrial DNA (mtDNA) haplogroups have been associated with functional impairments (i.e., decreased gait speed and grip strength, frailty), which are risk factors of disability. However, the association between mtDNA haplogroups and ADL disability is still unclear. In this study, we conducted an investigation of 25 mtSNPs defining 17 major mtDNA haplogroups for ADL disability in an aging Chinese population. We found that mtDNA haplogroup M7 was associated with an increased risk of disability (OR = 3.18 [95% CI = 1.29–7.83], P = 0.012). The survival rate of the M7 haplogroup group (6.1%) was lower than that of the non-M7 haplogroup group (9.5%) after a 6-year follow-up. In cellular studies, cytoplasmic hybrid (cybrid) cells with the M7 haplogroup showed mitochondrial functions distinct from those with the M8 haplogroup. Specifically, the respiratory chain complex capacity was significantly lower in M7 haplogroup cybrids than in M8 haplogroup cybrids. Furthermore, a clearly decreased mitochondrial membrane potential and a 40% reduction in ATP-linked oxygen consumption were found in M7 haplogroup cybrids compared to M8 haplogroup cybrids. Notably, M7 haplogroup cybrids generated more reactive oxygen species (ROS) than M8 haplogroup cybrids. Therefore, the M7 haplogroup may contribute to the risk of disability via altering mitochondrial function to some extent, leading to decreased oxygen consumption but increased ROS production, which may activate mitochondrial retrograde signaling pathways to impair cellular and tissue function. INTRODUCTION Mitochondria are "cellular powerhouses," which provide > 90% of the cellular ATP through oxidative phosphorylation (OXPHOS). Human mitochondrial DNA (mtDNA) is maternally inherited and encodes 13 essential polypeptides, 2 rRNAs, and 22 tRNAs (Wallace, 2013). mtDNA haplogroups are defined by special mtDNA variants that divide the population into discrete groups, each of which shares a common female ancestor (Sun et al., 2019). Notably, mtDNA variants are closely associated with many age-related diseases, especially common chronic diseases, such as diabetes (Kuo et al., 2016), cardiovascular diseases (Sawabe et al., 2011; Qin et al., 2014), and degenerative diseases (van der Walt et al., 2004; Bi et al., 2015). These diseases may contribute to the prevalence of physical function impairments, such as age-related disability. Characterized by multi-systemic decline, disability refers to the loss of the ability to perform daily life and social activities, attributable primarily to both physiological and pathological aging; it entails decreased physical functioning and cognitive performance, as well as other age-related chronic diseases (Rowe and Kahn, 2000; Gu et al., 2015). Evidence shows that the heritability of disability in populations older than 75 years can be estimated to be 28% (Mikhal'skii et al., 2009). Recently, an analysis of genetic polymorphisms affecting functional status at very old age (mean age 93.2 years) indicated that, for genes in the oxidative stress pathway, several variants were associated with activities of daily living (e.g., TXNRD1 rs7310505) (Dato et al., 2014). Additionally, a number of investigations of mtDNA haplogroups and disability-related phenotypes, such as weak grip strength, slow gait speed, and frailty, revealed that mitochondrial haplogroups were associated with phenotypes of disability (Moore et al., 2010; Sun et al., 2018; Erlandson et al., 2020).
However, the underlying mechanisms linking mtDNA haplogroups and disability remain unknown. In addition, whether mtDNA haplogroups are associated with ADL disability is also unknown. We thus sought to determine whether mtDNA haplogroups contribute to ADL disability by altering mitochondrial function and intracellular mitochondrial signals to some extent. Therefore, we conducted an association study in a Chinese aging population to evaluate the potential correlation between mtDNA haplogroups and the prevalence of disability by ADL assessment. Moreover, we explored the possible effect of mtDNA haplogroups on the pathophysiology of disability by using a cellular model. Study Participants We used data from the longevity arm of the Rugao Longevity and Aging Study (RuLAS), as previously described (Liu et al., 2016; Cai et al., 2009). Functional disability was assessed using the Katz Index of activities of daily living (basic activities of daily living, BADL) and the Lawton index of instrumental ADL (IADL) (Katz et al., 1963; Lawton and Brody, 1969). BADL is based on six basic daily tasks: eating, dressing, bathing, indoor transferring, going to the toilet, and cleaning oneself afterward. IADL is based on eight instrumental daily tasks: cooking, doing housework, taking transportation, shopping, washing clothes, making a phone call, managing money, and taking medicine. Each task has the following three response alternatives: strongly independent, somewhat dependent, and strongly dependent, with a score of 1, 2, and 3 points, respectively. Lower scores indicate better physical functioning. In this study, disability was defined by the Katz Index of ADL, which was modified from the original scale and has been extensively used to evaluate the functional status of older adults. Based on the total summed scores (ranging from 6 to 18), an ADL score > 6 was defined as "disability," and an ADL score = 6 was defined as "normal" (a scoring sketch follows below). Generation of Cell Lines and Culture Conditions The 143B ρ0 human osteosarcoma cells lacking mtDNA were cultured in high-glucose Dulbecco's modified Eagle's medium (DMEM) containing 10% fetal bovine serum (FBS) (Thermo Fisher Scientific, Waltham, MA, United States), 100 µg/mL of pyruvate, and 50 µg/mL of uridine. Transmitochondrial cybrids were formed by fusing 143B ρ0 cells and platelets from healthy individuals with the corresponding haplogroups, as described previously (King and Attardi, 1989). In this study, cybrids with M7 (n = 2) and M8 (n = 2) haplogroups were constructed. All of the platelets were collected from Taizhou. Cybrids were cultured in high-glucose DMEM containing 10% FBS at 37 °C and 5% CO2. The morphology of the four cybrids showed no difference (Supplementary Figure S1). Pathogenic mtDNA mutations and cross-contamination were ruled out through Sanger sequencing of the complete mitochondrial genome from all of the cybrids during culture (Supplementary Table S4). ATP Measurements ATP was measured using an ATP measurement kit (Thermo Fisher Scientific, Waltham, MA, United States) according to the manufacturer's instructions. Briefly, approximately 1 × 10⁶ well-cultured cells were washed with pre-chilled phosphate buffered saline (PBS) buffer and then boiled in 100 µL lysis buffer for 90 s. Supernatants were retrieved by centrifugation at 10,000 × g for 1 min. ATP content was determined by measuring the luminescence of supernatants mixed with Luciferase Assay buffer using a Spark Multimode Reader (Tecan, Männedorf, Switzerland).
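Returning to the disability classification defined earlier in this section, the scoring rule is simple enough to state as code. This is a sketch of the rule as described in the text, not code used in the study.

```python
def katz_adl_score(item_scores):
    """Sum the six Katz BADL item scores (eating, dressing, bathing,
    indoor transferring, toileting, cleaning oneself afterward), each
    coded 1 = strongly independent, 2 = somewhat dependent,
    3 = strongly dependent. Totals range from 6 to 18."""
    assert len(item_scores) == 6 and all(s in (1, 2, 3) for s in item_scores)
    return sum(item_scores)

def is_disabled(item_scores):
    """ADL score > 6 is 'disability'; exactly 6 is 'normal'."""
    return katz_adl_score(item_scores) > 6

print(is_disabled([1, 1, 1, 1, 1, 1]))  # False: fully independent
print(is_disabled([1, 2, 1, 1, 1, 1]))  # True: any dependence counts
```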
ATP luminescence was normalized by protein concentration. ROS Measurements Mitochondrial ROS was measured as described previously (Fang et al., 2015). Briefly, cells were washed in Hank's buffered salt solution (HBSS), resuspended in DMEM containing 5 µM MitoSOX (Thermo Fisher Scientific, Waltham, MA, United States), and then incubated at 37 °C for 5 min. Cells were then washed with HBSS, and the fluorescence was recorded using a Spark Multimode Reader (Tecan, Männedorf, Switzerland). ROS fluorescence was normalized based on the protein concentration of each sample. Mitochondrial Membrane Potential (MMP) Measurements MMP was determined using the cationic fluorescent redistribution dye tetramethylrhodamine methyl ester (TMRM) (Thermo Fisher Scientific, Waltham, MA, United States), as previously described (Sun et al., 2016). Briefly, cells were incubated with a final concentration of 30 nM TMRM at 37 °C for 20 min. Cells were then washed with PBS, and the fluorescence was recorded using a Spark Multimode Reader (Tecan, Männedorf, Switzerland). MMP fluorescence was normalized to the protein concentration. Statistical Analysis The data are presented here as either the mean ± SD or a percentage, with comparisons between different groups being performed using a t-test, a Mann-Whitney U-test, or a Chi-square test, as appropriate. Haplogroups featuring a frequency > 5% in either the normal participants or the disability participants were analyzed in this study. Haplogroups that featured frequencies of < 5% were grouped as "other haplogroups." Haplogroup D4 was taken as the reference class because no study had reported an association between haplogroup D4 and disability-related phenotypes such as decreased gait speed, reduced grip strength, or frailty; in addition, haplogroup D4 has been associated with longevity in Chinese (Cai et al., 2009) and Japanese (Alexe et al., 2007; Bilal et al., 2008) populations. With Bonferroni correction, a P value < 0.0056 (0.05/9) was considered statistically significant when analyzing these 9 haplogroups (with the "other haplogroups" group excluded). The significance of "other haplogroups" was not considered here because the group included multiple mtDNA haplogroups. Logistic regressions were performed to investigate the relationship between haplogroup M7 and disability. Odds ratios (OR) and 95% confidence intervals (CI) were documented. Kaplan-Meier survival analysis was applied to compare the survival of participants with different haplogroups. Cox proportional hazard regression models were used to calculate hazard ratios (HR) and 95% confidence intervals (CI), adjusting for multiple covariates. Statistical analyses were performed using SPSS statistical software 22.0 (IBM Corporation, Armonk, NY). A two-sided p < 0.05 was considered to be significant. RESULTS mtDNA Haplogroup M7 Is Associated With an Increased Risk of Disability In total, 463 participants aged over 95 years were included in this study (mean age: 97.42 ± 2.10 years). Overall, 245 participants (52.9%) were classified as having disability (ADL score > 6). The ADL and IADL scores of the disability group were significantly higher than those of the normal group (11.06 vs. 6.00 and 14.55 vs. 10.15, respectively; both p < 0.001). The survival rate of the disability group (5.5%) was significantly lower than that of the normal group (13.4%) after a 6-year follow-up (p = 0.003).
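The association analysis just described can be sketched computationally. The following illustration fits a logistic regression of disability on haplogroup carrier status with covariates, then extracts the odds ratio, its 95% CI, and the Bonferroni decision; the data are simulated and the assumed effect sizes are arbitrary, so the output will not reproduce the study's estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 463  # sample size matching the study

# Simulated covariates: M7 carrier status, age, sex
m7 = rng.binomial(1, 0.08, n)
age = rng.normal(97.4, 2.1, n)
sex = rng.binomial(1, 0.3, n)

# Simulated disability outcome with an assumed positive M7 effect
logit = -0.2 + 1.1 * m7 + 0.05 * (age - 97.4)
disabled = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([m7, age, sex]))
res = sm.Logit(disabled, X).fit(disp=False)

or_m7 = np.exp(res.params[1])             # odds ratio for M7 (column 1)
ci_lo, ci_hi = np.exp(res.conf_int()[1])  # 95% CI on the OR scale
print(f"OR = {or_m7:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f}), "
      f"p = {res.pvalues[1]:.3f}")
print("significant after Bonferroni:", res.pvalues[1] < 0.05 / 9)
```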
Additionally, albumin levels significantly declined in the disability group (41.56 ± 8.86) compared to the normal group (43.10 ± 4.23), while the platelet and phosphatase levels were increased in the disability group compared to the normal group (Supplementary Table S1). The frequency of haplogroup M7 in the disability group was significantly higher than that in the normal group (11.0% vs. 3.7%, p = 0.002). In addition, we found an increased frequency of M7 (OR = 3.90, 95% CI = 1.42-7.22, p = 0.003) using multivariate logistic regression analysis with adjustment for age, sex, and haplogroups (Table 1). Participants with the M7 haplogroup had significantly poorer performance in activities of daily living, including ADL and IADL; the percentage of disability was higher than in non-M7 haplogroups (77.1% vs. 50.9%); and the M7 haplogroup group presented significantly elevated levels of platelets and phosphatase compared to non-M7 haplogroup groups (Table 2). In further analysis, we tested the association of the M7 haplogroup with disability in the 463 participants. The M7 haplogroup was significantly associated with disability (OR = 3.18, 95% CI = 1.29-7.83, P = 0.012) after adjusting for sex, age, marriage, smoking habits, drinking habits, BMI, SBP, DBP, hemoglobin, platelets, white blood cell count, albumin, phosphatase, UA, CHOL, TG, LDL, and HDL (Supplementary Table S2). After a 6-year follow-up (median follow-up time: 30 months), 410 participants died. Kaplan-Meier survival analysis showed that the survival rate of the M7 haplogroup (6.1%) was lower than that of the non-M7 haplogroup (9.5%) after a 6-year follow-up, log-rank p = 0.188 (Supplementary Figure S3). Meanwhile, Cox proportional hazard regression models were used to calculate the HR (HR = 1.27, 95% CI = 0.88-1.83, p = 0.196) after adjusting for sex and age. Although no significance was observed due to the small number of M7 haplogroup participants, we still noticed that after 2 years the survival rate difference between the M7 and non-M7 haplogroups was considerable. M7 Cybrids Exhibit Lower Respiration Chain Complex (RCC) Capacity Than M8 Cybrids To uncover the biological mechanism linking the M7 haplogroup and disability, we generated cytoplasmic hybrid (cybrid) cell lines containing M7 or M8 haplogroups with an osteosarcoma 143B nuclear background. To analyze the effect of mtDNA haplogroups on the regulation of mitochondrial RCC, we first determined the mtDNA content in M7 and M8 cybrids. The mtDNA content in M8 cybrids was 20-30% higher than that in the M7-1 cybrid, and a decreasing tendency was also observed in the M7-2 cybrid (Figure 1A). The lower mtDNA content measured in M7 cybrids was not because of a reduced number of mitochondria (Figure 1B). Next, an examination of whole-complex amounts by BN-PAGE revealed that the abundances of complexes I, IV, and V were significantly higher in M8 cybrids than in M7 cybrids (Figure 1B). Collectively, our results demonstrated that M7 cybrids exhibited lower RCC capacity than M8 cybrids. Mitochondrial Function Is Lower in M7 Cybrids Than M8 Cybrids Next, we measured the mitochondrial respiratory profiles of our cybrids using an Oroboros O2k instrument. A respiration assay of the cybrids showed higher total oxygen consumption levels in M7 cybrids than in M8 cybrids. M7 cybrids also presented about a 40% decline in mitochondrial ATP-linked oxygen consumption compared with M8 cybrids (Figure 2A).
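The survival comparisons reported above follow a standard Kaplan-Meier/log-rank/Cox workflow; a minimal sketch using the lifelines package on simulated follow-up data (not the RuLAS data, so the numbers will differ from those reported) might look like this:

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
n = 463
df = pd.DataFrame({
    "m7": rng.binomial(1, 0.08, n),
    "age": rng.normal(97.4, 2.1, n),
    "sex": rng.binomial(1, 0.3, n),
})
# Simulated follow-up in months, administratively censored at 72 (6 years)
t = rng.exponential(30.0, n) * np.where(df["m7"] == 1, 0.8, 1.0)
df["time"] = np.minimum(t, 72.0)
df["died"] = (t <= 72.0).astype(int)

# Kaplan-Meier estimate per group and a log-rank comparison
g1, g0 = df[df["m7"] == 1], df[df["m7"] == 0]
km = KaplanMeierFitter().fit(g1["time"], g1["died"], label="M7")
lr = logrank_test(g1["time"], g0["time"],
                  event_observed_A=g1["died"], event_observed_B=g0["died"])
print(f"log-rank p = {lr.p_value:.3f}")

# Cox model adjusted for sex and age; exp(coef) for m7 is the hazard ratio
cph = CoxPHFitter().fit(df[["time", "died", "m7", "age", "sex"]],
                        duration_col="time", event_col="died")
print(cph.summary.loc["m7", ["exp(coef)",
                             "exp(coef) lower 95%",
                             "exp(coef) upper 95%"]])
```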
Furthermore, M7 cybrids displayed significantly decreased MMP levels (Figure 2B) and a tendency toward a 10% reduction in ATP generation, although without statistical significance (Figure 2C). Taken together, these results indicated that M7 cybrids exhibited diminished mitochondrial function relative to M8 cybrids. Mitochondrial Oxidative Stress Is Activated in M7 Cybrids Compared with M8 Cybrids Mitochondrial dysfunction can activate mitochondrial signaling mediators, such as ROS generation, that play a critical role in cellular physiology (Chae et al., 2013). As shown in Figure 3A, M7 cybrids generated approximately 20-30% higher ROS levels compared to M8 cybrids. Together with this, a significantly higher level of the mitochondrial quality control protein GRP75, which plays an important role in cell proliferation, the stress response, and the maintenance of mitochondria, was observed in M7 cybrids relative to M8 cybrids. This supported the notion that M7 cybrids exhibit an increased mitochondrial unfolded protein response (mtUPR) as compared to M8 cybrids (Figure 3B). This also suggested that M7 cybrids need to cope with increased ROS stress. However, we found that other mtUPR proteins did not differ between the M7 and M8 cybrids; thus, the mtUPR level in M7 cybrids may be limited. DISCUSSION In the present study, we first conducted a population study to explore the effects of mtDNA haplogroups on disability in a Chinese aging population. We found that the mtDNA haplogroup M7 was associated with an increased risk of disability independent of several blood biomarkers and common diseases. Furthermore, the survival rate of the M7 haplogroup group was lower than that of the non-M7 haplogroup group at the 6-year follow-up. Moreover, using a trans-mitochondrial cellular model, we observed decreased mitochondrial biogenesis and OXPHOS dysfunction, along with elevated ROS production, in M7 haplogroup cybrids compared to M8 haplogroup cybrids. The non-synonymous mutations of the M7 haplogroup cybrids are located in subunits of complexes I, III, IV, and V, for example, G4048A (ND1, p.Asp248Asn), C14766T (Cytb, p.Thr7Ile), G7853A (COII, p.Val90Ile), and A8701G (ATPase, p.Thr59Ala). Complex I is the largest and first complex of the mitochondrial respiratory chain; oxygen consumption happens in association with complex IV, and ATP is finally generated at complex V. Mutations in these complexes lead to serious mtDNA-related diseases, such as mitochondrial encephalomyopathy, Leigh's syndrome, and Leber hereditary optic neuropathy (LHON) (Schon et al., 2012). Although the mutations in haplogroup M7 have not been reported in association with diseases, we cannot deny that these mutations may be functional to some extent. In our study, we found that mitochondrial complex I, IV, and V capacities were decreased in M7 haplogroup cybrids compared with M8 haplogroup cybrids. A possible explanation is that these mutations alter complex structure and assembly, further affecting complex capacity and function. Accumulating evidence demonstrates that mtDNA haplogroups are correlated with longevity. One study showed that the mtDNA haplogroup frequency distributions of centenarians and younger individuals are obviously different.
Haplogroup J was significantly associated with centenarians in Italy (De Benedictis et al., 1999). Another study also demonstrated that mtDNA haplogroup J increased an individual's chance of attaining longevity in northern Italians, Northern Irish, and Finns (Dato et al., 2004). In addition, mtDNA haplogroup D4a contributed to Japanese semi-supercentenarians (aged above 105 years) (Bilal et al., 2008). Notably, mitochondrial haplogroups may play roles in common chronic diseases of aging, such as the role of the N9a haplogroup in diabetes, the involvement of the B5 haplogroup in Alzheimer's disease (AD), and the role of the G haplogroup in osteoarthritis (Bi et al., 2015; Fang et al., 2016, 2018). Previous studies have found that the mitochondrial M7 haplogroup predisposes to common chronic diseases such as chronic obstructive pulmonary disease, coronary atherosclerosis, lung cancer, and hepatocellular carcinoma (Sawabe et al., 2011; Zheng et al., 2012a,b; Chen et al., 2017). Notably, evidence has shown that haplogroup M7a is closely associated with Japanese Parkinson's disease (PD). PD patients have many phenotypes, such as resting tremors, muscle rigidity, gait and posture disorders, and motor retardation, which directly increase the possibility of disability (Takasaki, 2008). A sub-haplogroup of M7b, M7b1'2, is associated with the most extensively studied mitochondrial disease, LHON (Ji et al., 2008). The link of M7 to disability may to some extent be mediated by the effects of M7 on common chronic diseases, which impair multiple organs and physical function.

FIGURE 1 | Whole-cell extracts of mitochondrial respiratory complexes from M8 and M7 cybrids were solubilized with dodecyl maltoside (DDM) at a ratio of 2.5 g of detergent/g protein and then subjected to blue native PAGE (BN-PAGE)/immunoblot (IB) analysis. Complexes I, II, III, IV, and V were immunoblotted with anti-Grim19, SDHA, UQCRC2, COX1, and ATP5a antibodies, respectively; VDAC was used as a total-protein loading control. Protein levels were normalized to the M8-1 cybrid. Data are presented as the mean ± SD from at least 3 independent tests per experiment. **P < 0.01.
FIGURE 2 | Mitochondrial function is lower in M7 cybrids than M8 cybrids. (A) Mitochondrial respiratory capacities were determined in M8 and M7 cybrids. Oligomycin (2.5 µg/mL) was added for the measurement of uncoupled mitochondrial respiration. OXPHOS coupling respiration was calculated by subtracting the uncoupled component value from the total endogenous respiration value. (B) MMP was determined in M8 and M7 cybrids treated with 30 nM tetramethylrhodamine methyl ester (TMRM) for 20 min. Relative MMP levels were normalized to the M8-1 cybrid. (C) ATP was determined in M8 and M7 cybrids with boiled cells. Relative ATP levels were normalized to the M8-1 cybrid. Data are shown as the mean ± SD from at least three independent tests per experiment. *P < 0.05; **P < 0.01; ***P < 0.001.
FIGURE 3 | Mitochondrial oxidative stress is activated in M7 cybrids. (A) Mitochondrial ROS was determined in M8 and M7 cybrids by MitoSOX. Relative mitochondrial ROS levels were normalized to the M8-1 cybrid. (B) Representative Western blot of mitochondrial quality control proteins, including GRP75, HSP60, LONP1, and CLPP, from whole-cell extracts of M8 and M7 cybrids. ACTIN was used as a loading control. Relative protein levels were normalized to the M8-1 cybrid. Data are presented as the mean ± SD from at least 3 independent tests per experiment. *P < 0.05; **P < 0.01.
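The OXPHOS-coupled respiration described in the Figure 2 caption is a simple subtraction: total endogenous oxygen flux minus the oligomycin-insensitive (ATP-synthase-independent) component. A sketch with hypothetical flux values (not the measured data) also shows how total consumption can be higher in M7 while the ATP-linked share is about 40% lower, as reported:

```python
def atp_linked_respiration(total_flux, oligomycin_flux):
    """OXPHOS-coupled (ATP-linked) O2 flux = total endogenous flux
    minus the flux remaining after ATP synthase is blocked with
    oligomycin (the leak/uncoupled component)."""
    return total_flux - oligomycin_flux

# Hypothetical O2 fluxes (pmol O2/s per million cells), not study data
m8 = atp_linked_respiration(total_flux=55.0, oligomycin_flux=15.0)  # 40.0
m7 = atp_linked_respiration(total_flux=60.0, oligomycin_flux=36.0)  # 24.0
print(f"ATP-linked flux is {100 * (1 - m7 / m8):.0f}% lower in M7")  # ~40%
```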
Although data addressing the relationship between mtDNA haplogroups and disability are scarce, some evidence shows that mtDNA haplogroups are associated with several disability-related phenotypes, including frailty, weak grip, slow gait, and skeletal muscle fatigability (Moore et al., 2010; Sun et al., 2018; Erlandson et al., 2020). Mitochondrial haplogroup H is independently associated with weak grip and frailty among those > 50 years old with HIV (Erlandson et al., 2020). mtDNA T204C is associated with a greater likelihood of frailty and lower muscle strength (Moore et al., 2010). Interestingly, studies of elite athletes with mtDNA variations have also demonstrated that mitochondria are associated with physical performance, especially that of skeletal muscle. For example, mitochondrial haplogroup T is negatively associated with the status of elite endurance athletes in Spain (Castro et al., 2007). Similar studies have also been conducted in other populations, such as Kenyan, Japanese, and Finnish (Niemi and Majamaa, 2005; Scott et al., 2009; Mikami et al., 2011). All of these studies revealed that mitochondrial inheritance is influential in variations in physical performance and may eventually contribute to disability phenotypes. The cytoplasmic hybrid (cybrid) technique is widely used for studying how mtDNA variants influence mitochondrial functions under cellular physiological conditions. For example, cybrids with the osteoarthritis-related mitochondrial haplogroup G exhibit altered mitochondrial complex I and III capacities. As shown in our results, M7 cybrids exhibited significantly lower amounts of respiratory chain complexes I, IV, and V than M8 cybrids. Additionally, cybrids with diabetes associations, such as the N9a haplogroup, present impaired mitochondrial function, including decreased ATP content, MMP level, and oxygen consumption (Fang et al., 2018). Consistent with this, M7 cybrids displayed diminished mitochondrial function as compared to M8 cybrids, including reduced ATP-linked respiration and MMP levels. Furthermore, N9a haplogroup cybrids exhibited increased ROS production and a mild oxidative-stress response (Fang et al., 2018). In our study, the excess ROS in M7 cybrids enhanced mitochondrial quality control protein expression; however, the oxidative-stress response appeared limited, because only GRP75 was altered, with no change in the expression of other quality control proteins. Taken together, the M7 haplogroup exhibited decreased mitochondrial complex capacity, oxygen consumption, and MMP levels, together with increased ROS production, which may contribute to the risk of disability to some extent. It is noteworthy that enhanced mitochondrial ROS production has been implicated as one of the important reasons for the deterioration of DNA, proteins, and lipids in cells (Wallace, 2005). Several studies have revealed that enhanced ROS production induces mitochondrial respiratory chain protein dysfunction and apoptosis in aging skeletal muscle (Short et al., 2005; Chabi et al., 2008; Gomes et al., 2017). Recently, a study group performed longitudinal and deep multiomic profiling of 106 healthy individuals from 29 to 75 years of age and examined comprehensive measurements, including transcript levels, protein levels, metabolites, cytokines, microbes, and clinical laboratory values, and correlated all of them with age.
Their data showed that, with age, an increased oxidative stress response and abnormal ROS production impaired immunity pathways, liver and kidney function, and metabolism, and promoted inflammation (Ahadi et al., 2020). Notably, mitochondrial dysfunction may alter signals, such as ROS, AMP, and Ca2+, which could further change the activities of various nuclear transcription factors and affect cellular processes, including proliferation, apoptosis, and metabolism (Chae et al., 2013; Picard et al., 2014). The mitochondrial retrograde signaling pathway has been shown to be involved in many common chronic diseases, such as diabetes, osteoarthritis, and AD. For instance, ROS-mediated ERK1/2 signaling increases the incidence of N9a haplogroup-related type II diabetes in Chinese populations (Fang et al., 2018). Moreover, osteoarthritis-associated haplogroup G cybrids had lower ROS production and cell viability than haplogroup B4 cybrids under hypoxia, although with higher complex I and III activity levels and ATP generation. The key mechanism was a shift in the metabolic profile from glycolysis to OXPHOS and the activation of osteoarthritis-related signaling pathways. Additionally, mtDNA haplogroup B5 conferred genetic susceptibility to AD through decreased mitochondrial function and elevated ROS in Han Chinese (Bi et al., 2015). Thus, we suggest that the mtDNA haplogroup M7 contributes to physical function-related phenotypes and disability via mitochondrial dysfunction to some extent, with decreased mitochondrial complex capacity and oxygen consumption. Notably, the elevated ROS levels in the M7 haplogroup may activate mitochondrial retrograde signaling pathways to further affect cellular function and disability. This potential mechanism needs further research. The strengths of our study include the population-based approach, the reasonably large sample size of extreme-longevity individuals, and the similar genetic environment, which ensures the homogeneity of our study population. This made it feasible to collect disability phenotypes based on extreme longevity (≥ 95 years old). However, because of the small number of M7 haplogroup participants, the survival analysis presented only a tendency toward lower survival in M7 haplogroup participants than in non-M7 haplogroup participants; this association needs to be validated in other independent samples. Further molecular mechanisms also need to be studied based on our preliminary cellular functional results. In addition, it is difficult for these extremely aged participants to accomplish physical performance measurements (e.g., gait speed and hand grip strength) accurately. Therefore, we could not conduct association analyses between mtDNA haplogroups and these disability-related phenotypes to validate our population findings. CONCLUSION In summary, we first identified a positive association between the mitochondrial M7 haplogroup and disability in an aging population. Moreover, we demonstrated that the M7 haplogroup exhibits declined mitochondrial biogenesis and function and enhanced ROS production, which could contribute to weak physical performance, disability, and even mortality. Our findings provide a preliminary but important basis for further exploration of the molecular mechanisms of disability. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.
ETHICS STATEMENT The studies involving human participants were reviewed and approved by The Human Ethics Committee of the School of Life Sciences, Fudan University, Shanghai, China. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS SY and WD collected the questionnaires. DS and FW performed the experiments. SY and DS analyzed the data. YM provided plasma for the cellular model. XW, JW, and LJ designed the study. DS wrote the manuscript. XW and JW edited the manuscript. All authors were involved in final approval of the submitted and published versions. SUPPLEMENTARY MATERIAL The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fgene.2020.577795/full#supplementary-material Supplementary Figure 1 | ADL scores of the participants in this study. Four hundred and sixty-three participants were involved in this study: two hundred and eighteen participants were healthy controls, and two hundred and forty-five participants were disability cases.
Ecological and Human Health Risk Assessment of Heavy Metal Pollution in the Soil of the Ger District in Ulaanbaatar, Mongolia

The aim of the present study was to evaluate human health and potential ecological risks in the ger district of Ulaanbaatar city, Mongolia. To perform these risk assessments, soil samples were collected based on reference studies that investigated heavy element distribution in soil samples near the ger area in Ulaanbaatar city. In total, 42 soil samples were collected and 26 heavy metals were identified by inductively coupled plasma optical emission spectrometry (ICP-OES) and inductively coupled plasma mass spectrometry (ICP-MS) methods. The measurement results were compared with the reference data in order to validate the soil contamination level. Although there was a large difference between the measurement results of the present and reference data, the general tendency was similar. Soil contamination was assessed by pollution indices such as the geo-accumulation index and the enrichment factor. Mo and As were the most enriched elements compared with the other elements. The carcinogenic and noncarcinogenic risks to children exceeded the permissible limits, and for adults, only 12 out of 42 sampling points exceeded the permissible limit for noncarcinogenic effects. According to the results of the ecological risk assessment, Zn and Pb showed moderate to considerable contamination indices and high toxicity values for the ecological risk of a single element. Cr and As presented very high ecological risk compared with the other measured heavy metals.

Introduction

Coal is the primary source of energy in Ulaanbaatar, Mongolia, and accounts for approximately 90% of the energy sector in Mongolia [1]. Ulaanbaatar has a population of 1.5 million, and more than half of the households (193,529) were located in a ger district in 2019 [2]. As mentioned by Davaabal B. et al. [3], ger households use approximately 1.1-1.3 million tons of raw coal a year. The present study proceeded in three steps. First, soil samples were recollected from highly-contaminated points based on the literature data. Second, the concentration of heavy metals in the retaken soil samples was identified and compared with the literature data. Third, the identified concentrations in the present study were further used to assess both human health and ecological risks.

Study Area

Mongolia is located in East Asia and is bordered by China and Russia, as illustrated in Figure 1. Ulaanbaatar is the capital city of Mongolia. The study area was located in Ulaanbaatar, and the geographic coordinates of the research area ranged from 47°54′28.04″ N to 47°55′45.6″ N latitude and from 106°34′06.27″ E to 107°00′07.4″ E longitude.

Selection of Sampling Locations and Sampling

Although a number of studies have been performed to identify the concentration of heavy metals in soil samples, as mentioned in the introduction, three main studies were selected in the present study, as most of the soil samples in these three studies [6,14,15] were collected from the ger area of Ulaanbaatar city and showed a high concentration of heavy metals. In these studies, there were 119 soil samples in total, as illustrated in Figure 2.
Based on these data, it was decided to recollect soil samples from 42 points that showed a high concentration and exceeded the Mongolian National Standard, especially for As. The soil samples were recollected at a depth of 10-30 cm using a plastic spatula and stored in self-locking polyethylene bags; the samples were collected between May and June 2019. The locations of the soil samples were recorded using a handheld Garmin Global Positioning System. The soil samples were dried at room temperature (20 °C) for three days at the National University of Mongolia. After the soil samples were dried, they were transported to Japan. The soil samples were stored for a week in a desiccator with airtight plastic bags and used for chemical analysis in the Kanazawa University laboratory.

Laboratory Experiment at Kanazawa University

The soil samples were dried in the National University of Mongolia laboratory after the removal of stones and fragments. The dried samples were sieved through a 0.25-mm sieve and crushed to powder. The analytical procedure was divided into five consecutive steps. In the first step, solid samples of 0.05 g each were digested completely in 3 mL of 60% HNO3 with 3 mL of 48% hydrofluoric acid (HF) and heated at 120 °C for 48 h. In the second step, 3 mL of 30% hydrochloric acid (HCl) was added and heated at 120 °C for 24 h until dry. In the third step, 10 mL of 0.6% HNO3 was used for extraction and mixed for 24 h on the mix rotor [17]. The obtained extraction solutions were filtered into the I-boy container through a 0.20-µm cellulose membrane filter in the fourth step. In the final step, the sample was diluted 50 times using a 0.6% HNO3 solution for the ICP-MS and ICP-OES measurements. The procedure matrix is summarized in Table 1. The BCR (Community Bureau of Reference) modified extraction procedure was used for the analysis of heavy metals in soil [18].
Table 1. Analytical procedure for the soil samples:

Step 1 - Digestion: 3 mL of 60% HNO3 + 3 mL of 48% HF, 120 °C, 48 h
Step 2 - Digestion: 3 mL of 30% HCl, 120 °C, 24 h (until dry)
Step 3 - Extraction: 10 mL of 0.6% HNO3, mix rotor, 24 h
Step 4 - Filtration: all soil samples were filtered through a 0.20-µm cellulose membrane filter
Step 5 - Dilution: 50 times with 0.6% HNO3 for the ICP-MS and ICP-OES measurements

The experiments were performed using inductively coupled plasma optical emission spectrometry (ICP-OES, Varian 710-ES) and inductively coupled plasma mass spectrometry (ICP-MS, X-Series) at the Department of Global Environmental Science and Engineering at the Graduate School of Natural Science and Technology of Kanazawa University, Japan. The calibration curve was prepared with known concentrations of each element based on multi- and single-element standard solutions. Low-level heavy metal concentrations (20 elements: Cr, Co, Cu, As, Se, Cd, Pb, Mo, Zn, Al, V, Kr, Rb, Sr, Ag, Cs, Ba, Bi, Th, and U) were analyzed using the high-sensitivity ICP-MS. Common metal concentrations (six elements: Ca, Fe, K, Mg, Na, and Mn) were analyzed using ICP-OES. The detection limit depends on the heavy metal and is around 50-100 ng/kg for the low-level elements. Moreover, it should be noted that these elements have been listed on the Priority Pollutant list by the USEPA due to their potentially toxic characteristics. They have also attracted increasing attention worldwide [19,20]. In addition, the concentrations of heavy metals in the soil were compared with background values and the MNS for all of the samples.

Pollution Indices of Soil Contamination

To assess soil heavy metal contamination, pollution indices such as the enrichment factor (EF) and the geo-accumulation index (Igeo) were considered. The EF was used to assess the degree of soil contamination, while the Igeo was used to assess the potential anthropogenic impact against the background level of natural fluctuations [21,22]. Nine out of the 26 elements were evaluated with the pollution indices, as background reference data were available for them. The EF and Igeo are estimated by Equations (1) and (2):

EF = (Cn / Cref)sample / (Bn / Bref)background (1)

Igeo = log2 [Cn / (1.5 × Bn)] (2)

where Cn is the measured concentration of element n, Bn denotes the background concentration of that element, and Cref and Bref are the measured and background concentrations of a reference element. The background values were selected from [23]: Cr = 45, Co = 18, Cu = 25, As = 12, Cd = 1.0, Pb = 20, Mo = 1.9, Mn = 710, and Zn = 60 (mg/kg). The constant 1.5 was applied to account for possible changes in the background data due to lithological variations. EF and Igeo were categorized into six and seven classes, respectively, as listed in Table 2.
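For illustration, the two indices can be computed with the following minimal Python sketch, using the background values quoted above; the choice of Mn as the reference element for the EF and the sample concentrations are assumptions for illustration, not values stated in this paper.

```python
import math

# Background concentrations B_n (mg/kg) quoted in the text, from [23].
BACKGROUND = {"Cr": 45, "Co": 18, "Cu": 25, "As": 12, "Cd": 1.0,
              "Pb": 20, "Mo": 1.9, "Mn": 710, "Zn": 60}

def igeo(c_n: float, b_n: float) -> float:
    """Geo-accumulation index: Igeo = log2(C_n / (1.5 * B_n))."""
    return math.log2(c_n / (1.5 * b_n))

def enrichment_factor(c_n: float, c_ref: float, b_n: float, b_ref: float) -> float:
    """EF = (C_n / C_ref)_sample / (B_n / B_ref)_background."""
    return (c_n / c_ref) / (b_n / b_ref)

# Illustrative values only: As at 526 mg/kg (as reported for sample No 6),
# with a hypothetical Mn concentration of 800 mg/kg as the reference element.
print(round(igeo(526, BACKGROUND["As"]), 2))   # about 4.87 -> heavily contaminated
print(round(enrichment_factor(526, 800, BACKGROUND["As"], BACKGROUND["Mn"]), 1))
```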
Potential Human Health Risk Assessment

The potential health risks of the residents living in the ger area of Ulaanbaatar city were assessed using the USEPA method [24,25]. The residents living in the ger area of Ulaanbaatar city can be exposed to the contaminated soil through the ingestion, dermal, and inhalation pathways. The exposure pathways for every heavy metal in the soil can be expressed by the average daily intake (ADI) according to Equations (3)-(5):

ADIingestion = (C × IngR × EF × ED) / (BW × AT) × 10^-6 (3)
ADIdermal = (C × SA × AF × ABS × EF × ED) / (BW × AT) × 10^-6 (4)
ADIinhalation = (C × InhR × EF × ED) / (PEF × BW × AT) (5)

where C is the heavy metal concentration in the soil, IngR and InhR are the ingestion and inhalation rates, EF is the exposure frequency, ED is the exposure duration, BW is the body weight, AT is the averaging time, SA is the exposed skin area, AF is the soil adherence factor, ABS is the dermal absorption fraction, and PEF is the particle emission factor. The noncarcinogenic risk or hazard index (HI) is estimated from the ADI and the reference dose (RfD) [25], while the carcinogenic risk or cancer risk (CR) is estimated from the ADI and the slope factor (SF) [25]. Eight out of the 26 elements were assessed for noncarcinogenic risk, while only four elements were assessed for carcinogenic risk, as RfD and SF values were available for them. The RfD and SF values are listed in Table 3. The equations used for the noncarcinogenic and carcinogenic risks are shown in Equations (6)-(8):

HQ = ADI / RfD (6)
HI = Σ HQ (7)
CR = Σ (ADI × SF) (8)

The noncarcinogenic risk for a single heavy metal was determined as the hazard quotient (HQ). To find the total cancer risk, according to Equation (8), the estimated risks caused by every heavy metal for the three pathways were added. In the present study, the permissible limit for the HI was set as 1, while the permissible limit for the CR was set as 10^-4 [28]. In other words, if the calculated values exceed these permissible limits, the exposure is considered potentially harmful to human health. The parameters used in the health risk assessment are listed in Table 4.

Potential Ecological Risk Assessment

The potential ecological risk of heavy metals was assessed with Hakanson's model in the present study [34]. According to this model, the contamination index (CLi) can be calculated by Equation (9):

CLi = Ci / RCi (9)

where CLi is the contamination index of heavy metal i, Ci is the concentration of the heavy metal measured in the present study, and RCi is the literature (pre-industrial) concentration of the heavy metal in the soil sample. The contamination index enabled us to assess the soil contamination and potential ecological risk. This index is estimated as the ratio between the current measured and pre-industrial concentrations of the heavy metal in the soil sample. Six out of the 26 elements were used to assess the potential ecological risk to the environment, as the toxicity response factor was available for these heavy metals. The concentrations of heavy metals collected from the literature are listed in Table 5. The contamination index was categorized according to the contamination levels listed in Table 6 [35], ranging from low to very strong pollution. By using the information on the contamination index, the potential ecological risk index of a single element (eRPi) can be calculated by Equation (10):

eRPi = TRFi × CLi (10)

where eRPi is the ecological risk potential of the i-th element in the soil sample and TRFi is the toxicity response factor (TRF) of the heavy metal: Pb = 5, Cd = 30, As = 10, Cu = 5, Zn = 1 [36], and Cr = 2 [37]. Finally, the comprehensive potential ecological risk (ERP) was estimated by Equation (11):

ERP = Σi eRPi (11)

The relation between the potential ecological risk index of a single element (eRPi), the potential ecological risk (ERP), and the pollution level is listed in Table 7.
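As a minimal sketch of the aggregation in Equations (6)-(8), the following Python fragment chains hypothetical ADI values into HQ, HI, and CR; the ADI, RfD, and SF numbers are placeholders for illustration, not the values of Tables 3 and 4.

```python
# USEPA-style risk aggregation (Equations (6)-(8)); all numbers are placeholders.

def hazard_quotient(adi: float, rfd: float) -> float:
    """HQ = ADI / RfD for one element and one exposure pathway."""
    return adi / rfd

def hazard_index(hqs) -> float:
    """HI = sum of HQs; HI > 1 flags a noncarcinogenic risk."""
    return sum(hqs)

def cancer_risk(adi_sf_pairs) -> float:
    """CR = sum(ADI * SF); CR > 1e-4 exceeds the permissible limit."""
    return sum(adi * sf for adi, sf in adi_sf_pairs)

# Hypothetical child exposure to As through the three pathways (mg/kg/day):
adi = {"ingestion": 1.2e-4, "dermal": 3.0e-4, "inhalation": 2.0e-6}
RFD_AS = 3.0e-4  # placeholder RfD for As; the paper's values are in Table 3
SF_AS = 1.5      # placeholder oral slope factor for As

hi = hazard_index(hazard_quotient(a, RFD_AS) for a in adi.values())
cr = cancer_risk([(adi["ingestion"], SF_AS)])
print(f"HI = {hi:.2f} (limit 1), CR = {cr:.1e} (limit 1e-4)")
```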
Measurement Results of Heavy Metal Concentrations by the ICP-OES and ICP-MS Methods

The measurement results of the heavy metal concentrations in the soil samples obtained using the ICP-OES and ICP-MS methods are described in this subsection. As mentioned in Section 2.4, the experiment was performed at Kanazawa University. In total, 26 heavy metals in the soil samples were identified by the ICP-OES and ICP-MS methods. The mean concentrations of the elements were ranked in decreasing order, and the measurement results are listed in Table 8 together with the Mongolian National Standard (MNS). Zn did not exceed the MNS limit, except that the Zn concentration in sample No 18 (a waste point in Chingeltei district) was 384 mg/kg, which exceeds the MNS limit. The concentration of Mo was determined to be higher than the MNS at all of the soil sampling sites. In particular, samples No 6 (waste area of the glass factory in Nalaikh district) and No 22 (near a ravine in Chingeltei district) were determined to be 219 mg/kg and 334 mg/kg, respectively. The concentration of As was determined to be higher than the MNS limit in all of the soil samples. In particular, sample No 6 was determined to have the highest concentration of As (526 mg/kg), which may be due to the waste area of the glass factory in Nalaikh district. The concentrations of the other elements were determined to be lower than the MNS.

Comparison between Present and Literature Data

In the present study, 42 soil samples were recollected based on the literature data, as mentioned in Section 2.2. The collected samples were analyzed using the ICP-OES and ICP-MS methods, and 26 heavy metals were identified in the present study. As the locations of the soil samples in the present study and the literature were the same, the concentrations from both studies were compared to assess the difference in the contamination level. The literature studies identified a limited number of elements; therefore, only the available elements were compared with the data of the present study. The detailed explanations for each element are given in this section. It should be highlighted that the measurement methods used in the literature were atomic absorption spectrometry (AAS) and X-ray fluorescence (XRF). The measurement results of the present study were compared with the previous literature to assess the difference between the present and previous studies, although the compared results differ due to many parameters, such as the measurement conditions and differences in measurement technique.

Arsenic (As): The measurement result for arsenic (As) in the present study is illustrated in Figure 3a, and the measurement data were compared with the previous literature and the MNS values. In Figure 3a, the horizontal and vertical axes represent the concentrations measured by the present and literature studies, respectively. The red line represents the Mongolian National Standard; if a concentration value lies inside the red lines, the concentration is under the standard value. The measurement results revealed that all samples exceeded the MNS value in both the present study and the previous literature. In Figure 3a, the dotted black (20 mg/kg) and pink (40 mg/kg) lines represent the deviation from the centerline where the two measurement values are equal. The difference between the present and literature data for As was under 40 mg/kg, as illustrated in Figure 3a. The measured concentration of arsenic (As) in sample No 6 was extremely high compared with the other samples in both the previous literature and the present study. Sample No 6 was taken from the waste area of the old glass factory in the ger area of the Nalaikh district, Ulaanbaatar, Mongolia. Glass is made of a molten mixture containing heavy metals, including As. Therefore, the concentration of As in sample No 6 was measured as extremely high (526 mg/kg of As).
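A comparison plot of the kind described for Figure 3a can be reproduced along the following lines; this is an illustrative sketch only, and the concentration arrays and the MNS limit used here are invented placeholders rather than the paper's data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented As concentrations (mg/kg): present study vs. literature values.
present = np.array([35.0, 42.0, 58.0, 70.0, 526.0])
literature = np.array([30.0, 55.0, 60.0, 95.0, 500.0])
MNS_LIMIT = 6.0  # placeholder MNS limit for As; the actual limit is in Table 8

fig, ax = plt.subplots()
ax.scatter(present, literature)
xs = np.linspace(0, 1.1 * max(present.max(), literature.max()), 2)
ax.plot(xs, xs, "k--", label="equal concentrations")      # centerline
ax.plot(xs, xs + 40, "m:", label="+/- 40 mg/kg deviation") # deviation bands
ax.plot(xs, xs - 40, "m:")
ax.axvline(MNS_LIMIT, color="r", label="MNS limit")        # standard lines
ax.axhline(MNS_LIMIT, color="r")
ax.set_xlabel("Present study (mg/kg)")
ax.set_ylabel("Literature (mg/kg)")
ax.legend()
plt.show()
```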
Chromium (Cr): The measurement result for chromium (Cr) in the present study is illustrated in Figure 3b, and the measurement data were compared with the previous literature and the MNS values. According to the previous study, 41 out of 42 samples were lower than the MNS standard, except for sample No 13. Although the measurement results are scattered in Figure 3b, the Cr concentrations in the present study revealed that all measurement data were lower than the MNS standard.

Lead (Pb): The measurement result for lead (Pb) in the present study is shown in Figure 3c, and the measurement data were compared with the previous literature and the MNS values. According to the previous study, 41 out of 42 samples were lower than the MNS standard, except for sample No 35. On the other hand, the measurement results for Pb in the present study revealed that all measurement data were lower than the MNS standard.

Zinc (Zn): The measurement result for zinc (Zn) in the present study is shown in Figure 3d, and the measurement data were compared with the previous literature and the MNS values. According to the previous study, 41 out of 42 samples were lower than the MNS standard, except for sample No 7. On the other hand, the measurement results for Zn in the present study revealed that all measurement data were lower than the MNS standard, except for sample No 18. Excess Zn can irritate the nose and throat upon inhalation and cause wheezing and coughing. Additionally, it can affect the male reproductive system and decrease sperm count.

Copper (Cu): The measurement results for copper (Cu) in the present study are shown in Figure 3e, and the measurement data were compared with the previous literature and the MNS values. According to the previous study, 41 out of 42 samples were lower than the MNS standard, except for sample No 3. On the other hand, the measurement results for Cu in the present study revealed that all measurement data were lower than the MNS standard.

Cadmium (Cd): Although Cd was identified in the present study, there were only measurement data for samples No 38, No 41, and No 42 in the previous literature. Therefore, the comparison was made at only those points. The measurement data of the previous and present studies did not exceed the MNS value, and there were no significant differences.

Result of Pollution Indices of Soil Contamination

The estimated EF and Igeo values computed from the concentrations of heavy metals are shown in Figures 4 and 5, respectively. In Figures 4 and 5, the x-axis represents the sample number, while the y-axis represents the heavy metals identified in the present study.
As shown in Figure 4, there was no enrichment for the elements Cr, Co, Cu, Zn, Cd, Pb, and Mn. The EF value for 41 out of 42 samples was estimated to be lower than 2, and the highest EF value was estimated as 8.5 for As at sample No 6. Mo was the most enriched element in comparison with the other elements, as illustrated in Figure 4. The highest and lowest EF values for Mo were estimated as 22.4 at sample No 6 and 0.8 at sample No 31, respectively. Samples No 6 and No 22 were categorized as severely enriched for Mo. Mo has a potential anthropogenic contamination source and is enriched in coal combustion residues [38]. As shown in Figure 5, the Igeo values were estimated as moderately contaminated for most of the sample points for Mo (see the range of the color bar next to the image). The maximum value of Igeo was estimated as 2 at sample No 22 for Mo. The measurement results revealed that there were no soil samples contaminated with Co and Mn. The highest value of Igeo for As was estimated at sample No 6. The Cr, Pb, Cd, Zn, and Cu elements were assessed to range from uncontaminated to moderately contaminated.

Noncarcinogenic Risk Assessment

The noncarcinogenic hazards and the hazard quotients of all elements could not be evaluated for some exposure pathways, namely the inhalation and dermal pathways for Co and Mo, because the noncarcinogenic RfD values for these pathways were unavailable. Tables 9 and 10 show the quantitative HI values for each element and pathway for children and adults, respectively.
The children were at risk of noncarcinogenic effects, especially through the dermal pathway, which posed the greatest noncarcinogenic risks, followed by the ingestion pathway; inhalation posed the lowest risk. In the case of children, the three different exposure pathways resulted in the following sequence for the HI of all the metals studied: Co > Mo > Zn > Cu > Cd > Cr > Pb > As. The dermal pathway yielded HQ and HI values greater than 1. In the case of adults, the same sequence was determined. However, 12 out of 42 sample sites were assessed to be higher than 1, while the other sample sites were assessed as posing no risk. In particular, Cr, Pb, and As were assessed as posing the greatest risk for children. It is accepted that there is no safe level of lead exposure, particularly for children. Pb can lead to brain swelling, kidney disease, cardiovascular problems, nervous system damage, and even death [25]. Therefore, all of the sample sites were assessed as posing chronic exposure risks for the mentioned elements, especially for children. The results for the noncarcinogenic risk or hazard index (HI) for children and adults are shown in Figure 6a,b. As illustrated in Figure 6a, the HI values for children were estimated to be higher than the threshold, as mentioned in Section 2.5. The dotted red line in Figure 6 represents the threshold or maximum acceptable value. The HI values for children were estimated to be higher than the threshold in all soil samples, with an extremely high HI value estimated at sample No 6, as shown in Figure 6a. On the other hand, the HI values were estimated to be lower than the threshold for adults, except for samples No 6, No 16, and No 32.
At sample No 6, the highest HI value for adults was estimated, as it was for children.

Carcinogenic Risk Assessment

The slope factor (SF) is used to estimate the carcinogenic risk, as mentioned in Section 2.5. The carcinogenic risk was estimated for Cr, Cd, Pb, and As in the present study. The children were at risk of carcinogenic effects, especially through the dermal pathway, which posed the greatest carcinogenic risks, followed by the ingestion pathway, the same pattern as for the noncarcinogenic risk. Exposure to Cr through the dermal pathway was determined to be the highest carcinogenic risk factor in both children and adults. Moreover, exposure to As through dermal contact was estimated to exceed the permissible limit for both adults and children. This means that residents are continuously affected by repeated doses of As. Arsenic causes skin cancer via chronic oral ingestion, manifested as either squamous or basal cell carcinomas. Some evidence suggests that oral ingestion of arsenic may also contribute to lung cancer as well as cancers of the bladder, kidney, liver, and colon. Epidemiological studies indicate that there is an increased respiratory cancer risk from occupational exposure to chromium. Cr can cause stomach and intestinal ulcers, anemia, and stomach cancer; frequent inhalation can cause asthma, wheezing, and lung cancer. The carcinogenic assessment determined that As and Cr could contribute to the estimated carcinogenic risk imposed on the ger district. In the case of both adults and children, all the elements posing carcinogenic risks in the ger district were determined to exceed the permissible limit at all the soil sampling sites. In particular, the dermal pathway was determined to be the exposure pathway most likely to affect human health in the study areas. According to the results of the health risk assessment, both adults and children in the study area are highly susceptible to exposure to environmental contaminants due to their living conditions and lifestyles. Major sources of chromium released into the soil in the ger area are the disposal of commercial products that contain chromium as well as coal ash. Ger residents have a greater chance of exposure because they live near waste sites that contain coal ash. The calculated risks for each element are listed in Tables 11 and 12 for children and adults, respectively. The result for the carcinogenic risk or cancer risk (CR) is illustrated for both children and adults in Figure 7a,b. The dotted red line in Figure 7 represents the threshold or maximum acceptable value. The cancer risk values for both children and adults were estimated to be higher than the threshold in all soil samples, and an extremely high value was estimated at sample No 6, as shown in Figure 7a,b. As and Cr are the major contributors to carcinogenic risk, as aforementioned. The ger residents are exposed to these elements by touching the soil and digging or playing in the soil. Children may eat and breathe the dust of soil that contains As and Cr while playing. Dust can also be brought into the ger from outside. Moreover, drinking water contamination by natural sources of arsenic and chromium is another possibility [39,40].
Result of Ecological Risk Assessment

As described in Section 2.6, the potential ecological risk caused by heavy metals was estimated by Hakanson's model. In this section, the results for each parameter included in the ecological risk assessment are described.

Results of Contamination Index

The contamination index of heavy metals for each sample was calculated, and the results are shown in Figure 8. The estimated minimum, maximum, and mean values of the contamination index are shown for each heavy metal. The highest contamination indices were calculated at sample No 18 and sample No 6 for Zn and As, respectively. There were three samples where the contamination index for Pb was estimated as very high risk, while the contamination index for Zn was estimated as very high risk at samples No 22, No 27, and No 35. The degree of contamination of the heavy metals showed the following sequence: As > Zn > Pb > Cu > Cr > Cd. The results of the contamination index for each heavy metal are shown in Table 13 and Figure 8.

Potential Ecological Risk Index of a Single Element (eRPi)

The ecological risk is presented to evaluate the adverse ecological effects occurring as a result of exposure to soil contamination stressors. The ecological risk results are shown in Table 14 and Figure 9. Cd and Cu were estimated to be lower than 40, and the Zn values were between 54 and 383 in the high toxicity range. The highest value for As was obtained at sample No 6. The ecological risk values for Zn showed moderate and considerable levels with a higher toxicity coefficient. Cr and As presented very high ecological risk compared to the other measured heavy metals. Figure 10 illustrates the distribution of the potential ecological risk (ERP).
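The chain from contamination index to single-element risk to the comprehensive ERP (Equations (9)-(11)) can be sketched minimally as follows; the TRF values are those quoted in the methods, while the measured/reference concentration pairs are invented placeholders.

```python
# Hakanson-style ecological risk chain (Equations (9)-(11)).
TRF = {"Pb": 5, "Cd": 30, "As": 10, "Cu": 5, "Zn": 1, "Cr": 2}  # from the text

def contamination_index(measured: float, reference: float) -> float:
    """CL_i = C_i / RC_i (measured over literature/pre-industrial level)."""
    return measured / reference

def single_element_risk(element: str, cl: float) -> float:
    """eRP_i = TRF_i * CL_i."""
    return TRF[element] * cl

# Invented (measured, reference) pairs in mg/kg for one hypothetical sample:
sample = {"As": (526.0, 12.0), "Zn": (384.0, 60.0), "Pb": (45.0, 20.0)}

erp = 0.0  # comprehensive potential ecological risk, ERP = sum of eRP_i
for element, (c_i, rc_i) in sample.items():
    erp_i = single_element_risk(element, contamination_index(c_i, rc_i))
    erp += erp_i
    print(f"{element}: eRP_i = {erp_i:.1f}")
print(f"ERP = {erp:.1f}")
```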
Conclusions

When coal is burnt in the stove of a ger dwelling, a large amount of ash is produced. A small part of the coal ash is dispersed in the air (fly ash), and most stays in the stove [41]. After the burning process is finished, the remaining ash is collected in a bucket and mostly thrown into the open street on bare soil.
Surprisingly, hot ashes can help reduce surface slipperiness and provide cover on the open streets in the winter period (November to March) [3]. This is the reason why coal ash and its heavy metals fly in the air as well as accumulate in the soil and dissolve in water. In the present study, potential human health and ecological risk assessments were performed based on the concentrations of heavy metals in the soil samples collected from the ger area of Ulaanbaatar city, Mongolia. In total, 26 heavy metals were identified by the ICP-OES and ICP-MS methods. The measurement results were compared with reference data in order to validate the soil contamination level. Although there was a large difference between the measurement results of the present study and the reference data, the general tendency was similar. For instance, the measured As concentrations in both the present and reference studies exceeded the Mongolian National Standard at all locations where samples were taken. Except for As, the concentrations of the other elements were under the Mongolian National Standard. The concentration of As in sample No 6 could be measured as extremely high because this sample was collected from the waste area of the old glass factory in the Nalaikh district; glass is made of a molten mixture containing heavy metals, including As [42]. The measurement results of the present and reference studies were consistent at sample No 6. Moreover, the concentration of Mo was determined to be higher than the MNS for all soil samples in the present study. In particular, samples No 6 and No 22 were determined to be 219 mg/kg and 334 mg/kg, respectively. Therefore, it is necessary to reduce the high concentrations of Mo and As using remediation methods. Soil pollution was assessed by pollution indices, namely the enrichment factor (EF) and the geo-accumulation index (Igeo). The As and Mo contamination was attributed to anthropogenic sources. The other elements were assessed as showing no enrichment and no contamination. The carcinogenic risk was estimated to exceed the permissible limit for both adults and children at all of the sample sites. The total noncarcinogenic risk exceeded the permissible limit (i.e., exceeded 1) for children at all of the sample sites. For adults, only 12 sample sites were estimated to exceed the permissible limit, while the other sample sites were estimated to pose no health risk. Pb, Cr, and As can pose serious concerns regarding the potential occurrence of health hazards. The degree of heavy metal contamination increased in the following order: Cd < Cr < Cu < Pb < Zn < As. The ecological risk values for Zn and Pb showed moderate and considerable levels with a higher toxicity coefficient. Cr and As were in the range of very high ecological risk compared to the other measured heavy metals. This study found that people residing in the ger district of Ulaanbaatar are at the greatest risk of exposure to heavy metals through the dermal pathway due to contaminated media near waste points, ravines, streets, and auto services in the ger district. Additionally, there is a possible ingestion pathway: the domestic cattle of neighboring nomads usually come to seek food from open dump areas, which may be one source of exposure through ingestion [43]. If the milk and meat of these cattle are sold to consumers, it would be necessary for the professional inspection agency of Ulaanbaatar city to control the products from these cattle.
Therefore, it is necessary to conduct a risk assessment of the drinking water in ger districts, because the targeted elements Cr, Mo, Pb, and As can potentially affect the groundwater in these districts. The results of our study are expected to assist in the future monitoring of pollution caused by heavy metals as well as in the development of environmental standards in Ulaanbaatar, Mongolia. These results will also support the implementation of public policies aimed at ensuring the sustainability of development activities in ger districts. On the other hand, the background value of arsenic in Ulaanbaatar is two times higher than the limit stated in the Mongolian National Standard, and this should be taken into account. Monitoring the concentrations of heavy metals in the soil surrounding the ger district is essential for controlling soil pollution and protecting the residents from the risks posed by heavy metal contamination.
Changes, limitations, and prospects of adult height in GH treatment for Japanese GHD patients

Abstract: For the treatment of pituitary dwarfism (called pituitary short stature in 1987 and renamed growth hormone deficiency [GHD] in 1993), pituitary-derived human growth hormone (phGH) was approved in 1975, and recombinant hGH (rhGH) was approved in 1988. Adult height in patients with isolated GH deficiency (IGHD) improved by 2000. However, this improvement was mainly due to the increase in height SDS at treatment initiation. Although the mean adult height in patients with idiopathic GHD has been reported to be approximately -1.0 SD or higher in Europe and the United States, the mean adult height of patients with idiopathic GHD in Japan has not improved as much as that in Europe and the United States after 2000. The possible reasons are: lower therapeutic doses than those in Europe and the United States; changes in background factors, such as the reduction in severe GHD; differences in the response to GH between Caucasians and Japanese; and no increase in height at puberty onset, because delayed puberty was normalized by GH treatment. In the future, long-acting GH is expected to improve adult height in GHD patients in Japan.

Initiation of GH Treatment for GHD Patients in Japan

The first human growth hormone (hGH) formally approved for the treatment of pituitary dwarfism in Japan was Corpormon®, a pituitary-derived human growth hormone (phGH) imported from Kabi, Sweden, in 1975. As there was no pharmaceutical company that manufactured phGH in Japan, four types of phGH were subsequently imported and sold for the treatment of pituitary dwarfism. As all clinical trials used a therapeutic dose of 0.5 IU/kg/wk, the approved dose was 0.5 IU/kg/wk, divided into two to three intramuscular injections. Self-injection was not approved at that time; therefore, patients had to visit the hospital for injections. Self-injection was first approved in 1981. "The Diagnostic Guidance for Pituitary Dwarfism" prepared in 1974 by the Study Group on Pituitary Insufficiency (later the Study Group on Pituitary Dysfunction) of the Ministry of Health and Welfare (MHW) defined patients with pituitary dwarfism by the criteria "primary symptom: height SDS ≤ -3.0 SD; laboratory findings: all peak GH levels ≤ 5 ng/mL in two or more GH stimulation tests". The import volume of phGH was small; thus, to distribute it fairly, specialists nationwide organized the "Study Group of Pituitary Dwarfism Treatment" in 1974, where the indications were determined. When the Foundation for Growth Science (FGS) was established in 1977, indication judgement was transferred to the FGS. Although the importation of phGH increased, there were more patients than could be treated, with 200 to 300 patients waiting each year, until recombinant methionyl hGH (mhGH) was approved in 1986. During this period, half the dose of phGH for one patient was administered to two patients, and concomitant anabolic hormones were frequently administered to compensate for the shortage. Under the Grant-in-Aid Program for Chronic Diseases in Childhood by the Maternal and Child Health Division of the MHW, patient copayment for health insurance treatment has been fully subsidized since 1975 (1).

1) Changes in the diagnostic criteria

The Study Group on Hypothalamic and Pituitary Disorders changed the diagnostic term to "pituitary short stature" in 1984 and to "growth hormone deficiency (GHD)" in 1993.
The diagnostic criteria were changed to "primary symptom: height SDS ≤ -2.5 SD; laboratory findings: peak GH level ≤ 7 ng/mL" in 1984, to "primary symptom: height SDS ≤ -2.0 SD" in 1987, and to "laboratory findings: peak GH level ≤ 10 ng/mL" in 1993, with unlimited imports of rhGH and the harmonization of international diagnostic criteria. However, the reference peak GH level for the diagnosis of GHD had poor clinical evidence. In addition, the criterion has been set as "below the reference peak GH level in two or more GH stimulation tests" since 1984. Therefore, even if a GH level above the reference peak is observed in one stimulation test, patients can be treated as having GHD if the level is below the reference peak GH in two other GH stimulation tests. Although this situation is defined as mild GHD in Japan, globally, mild GHD is not accepted as GHD but as idiopathic short stature (ISS). The severity classification was introduced in the updated "Diagnostic Guidelines" by the Study Group on Hypothalamic and Pituitary Disorders in 1993. A maximum peak GH level of 5 ng/mL or less in all GH stimulation tests was considered the complete type, and the others were considered the incomplete type; these were rephrased as severe/moderate in 1999 (1). With the standardization of the measured values of GH in 2004 (2), a maximum peak GH level of 3 ng/mL or less was classified as severe. In 2007, a maximum peak GH level of 3-6 ng/mL was classified as moderate, and patients other than those with severe and moderate GHD were classified as having mild GHD. In 1975, the target population for the treatment of GHD comprised only those with severe GHD, according to the current classification, but it has been expanded to moderate and mild cases since 1984.

2) Changes in GH measurement

The diagnosis of GHD is based on the peak GH level in the GH stimulation test. Blood GH levels were initially measured using radioimmunoassay (RIA). However, the sensitivity of the RIA measurement was poor, and the initial reference peak GH level of 5 ng/mL was close to the measurement sensitivity of RIA. Later on, several measurement methods were developed, including immunoradiometric assay (IRMA), immunoenzymometric assay (IEMA), and chemiluminescent enzyme immunoassay (CLEIA). Each measurement kit used its own phGH standard. As the immunological potency of phGH differed among purification batches, the measured values differed by up to approximately a factor of two depending on the kit. Until the measured values were adjusted using a correction formula for each kit by the GH and its Related Factors Study Committee of the FGS in 1991 (3), diagnoses varied depending on the assay kits. The difference in the immunological potency of phGH standards existed until 2004, when the rhGH standard was introduced into GH measurement kits. Simultaneously, differences in the measured values among the kits were minimized because of the homogeneity of the rhGH standard, and the correction formula for each kit was abolished. However, because of the difference between the immunological potencies of phGH and rhGH, the reference peak GH level was changed from 10 ng/mL to 6 ng/mL. While preparing the correction formula for each kit, the standard GH value was adjusted to the mean of the measured values. Therefore, the reference peak GH level also depended on the mean of the measured values, and the true reference peak GH level for diagnosis could no longer be traced.
3) Change in severity of GHD

According to Hibi et al. (4), in the early phGH period, breech delivery and birth asphyxia were considered to be causes of GHD, and many of these patients had severe GHD. The Registration System Committee of the FGS examined 28,842 enrolled GHD patients (19,410 boys, 9,432 girls) from 1986 to 1995 (5). The patients were divided into three groups according to the maximum peak GH level in the GH stimulation test: ≤ 5 ng/mL (severe), > 5 to ≤ 10 ng/mL (moderate), and > 10 ng/mL (mild). Fig. 1 shows the percentage of each severity classification of idiopathic GHD for each year. The percentage of patients with severe GHD gradually decreased from 32.2% in 1986 to 8.2% in 1995. The percentage of patients with moderate GHD slightly decreased from 55.6% to 40.6%, whereas the percentage of patients with mild GHD markedly increased from 12.2% to 51.2%. In 1997, the subsidy under the Grant-in-Aid Program for Chronic Diseases in Childhood became applicable only to moderate/severe cases with a height SDS of -2.5 SD or lower, and the criteria for IGF-I were also established more strictly. Simultaneously, the number of applications registered with the FGS dropped sharply because the FGS's treatment indication decision, which had previously been required to determine the subsidy under the Grant-in-Aid Program for Chronic Diseases in Childhood, was no longer necessary. The number of patients who received GH treatment also decreased. The number of GHD cases registered in the FGS decreased from approximately 4,000 in 1997 to approximately 2,000 in 1998 and approximately 1,000 in 1999. The percentage of patients with mild idiopathic GHD registered in the FGS in 1996 to 1997, 1998 to 2004, and 2005 to 2015 decreased to 39.5%, 31.2%, and 13.6%, respectively (6). However, as the local governments' medical subsidy programs for children have been enhanced since 2001, the treatment of mild GHD has increased, but the actual number of GHD patients after 1997 cannot be determined. Since 2000, the percentages of patients with multiple pituitary hormone deficiency (MPHD) and severe GHD have decreased. The percentage of patients with MPHD was 19.4% in the report by Hibi et al. in 1989 (4), but it decreased to 13.3% with phGH treatment and 6.2% with rhGH treatment in the report by Tanaka et al. in 2001 (7), 5.6% in the report by Ogawa et al. in 2012 (8), and 2.8% in the report by Tanaka et al. in 2018 (9). The incidence of severe GHD was 22.8% in the report by Fujieda et al. (10) in 2010, but it decreased to 12.4% in the report by Ogawa et al. (8) in 2012 and 9.3% in the report by Tanaka et al. (9) in 2018. Fig. 2 shows the changes in the mode of delivery in patients with idiopathic GHD during this period (5). There was no marked change in the percentage of cephalic deliveries, but the percentage of breech deliveries decreased from 11.5% in 1986 to 6.0% in 1995. The percentage of deliveries by cesarean section increased from 0.8% to 8.9%. Therefore, the decrease in patients with severe GHD may be related to the decrease in breech deliveries; however, because the percentage of breech deliveries was almost constant after 1990, the subsequent decrease in patients with severe GHD cannot be explained by this alone. When the percentage of patients with severe GHD was examined annually by comparing breech and cephalic deliveries, the percentage of GHD patients whose disease was severe among those born by breech delivery and cephalic delivery in 1985 was 60.4% and 28.4%, respectively (Fig. 3);
however, these percentages decreased to 9.6% and 8.0%, respectively, in 1995 (5). The incidence of severe GHD decreased in both breech and cephalic deliveries, suggesting that the management of delivery in obstetrics has improved.

1) Adult height in the phGH period

During the phGH period, patients with GHD were unable to receive hGH without an attending physician submitting a treatment application and reporting to the FGS. Therefore, all patients' treatment results were recorded in the FGS, making it the largest database on hGH treatment in Japan. Based on analyses of this database, many articles have been published that contributed to the development of GH treatment in Japan. Reports of adult height after the first GH treatment in Japan were also the results of FGS database analyses (4). Hibi and Tanaka (4) reported the adult heights of 108 patients with isolated growth hormone deficiency (IGHD) registered in the FGS, who had been treated with phGH between 1975 and 1986, and those of 26 patients with MPHD complicated by gonadal dysfunction who had been treated at the National Children's Hospital. The mean adult height of the patients treated with phGH from 1975 to 1986 was 151.8 cm, and age at the onset of puberty was positively correlated with age at the start of GH treatment in IGHD (boys: r = 0.67, p < 0.01; girls: r = 0.67, p < 0.01). With regard to patient background, overall, 56% of the patients were born by breech delivery, and 44% of the patients had birth asphyxia. In the IGHD group (n = 108), 48% of the patients were born by breech delivery, and 35% had birth asphyxia. In the MPHD group (n = 26), 88% of the patients were born by breech delivery, and 73% had birth asphyxia. Both rates were significantly higher. Hibi confirmed the "birth injury theory", explaining that disorders of the pituitary gland caused by breech delivery led to MPHD (11). Pituitary hypoplasia (12) and transection of the pituitary stalk (13,14) were observed in patients with MPHD who were born by breech delivery, supporting this theory. In clinical practice, the number of patients with typical MPHD decreased as the mode of delivery changed from breech delivery to cesarean section. The adult heights of phGH-treated patients who reached their adult heights from 1986 to 1989 were reported by Tanaka et al. (7).

2) Adult height in the early rhGH period

A series of reports from the United States and the United Kingdom in 1985 showed that patients treated with phGH developed Creutzfeldt-Jakob disease (15-17). At that time, as the practical use of mhGH and natural-sequence recombinant hGH (rhGH) began, there was a switch from phGH to mhGH and rhGH worldwide. In Japan, Somatonorm®, an mhGH, was approved for use in 1986. Genotropin®, an rhGH, was approved in 1988, followed by Norditropin® and Humatrope® in 1989, Saizen® in 1992, and Growject® in 1993. The dosage and administration of rhGH was 0.5 IU/kg/wk, injected subcutaneously in six to seven divided doses per week. The period from 1988, when the unit of the rhGH dose was IU/kg/wk, to 2000, when the unit was changed to mg/kg/wk, was defined as the early rhGH period. Tanaka et al. (7) analyzed the adult heights of patients registered in the FGS and treated with hGH since 1986. The patients were divided into two groups: Group P2, in which phGH treatment was started and adult height was reached from 1986 to 1989, and Group R, in which rhGH treatment was started after 1989.
Fig. 4 (a) and (b) show the mean adult heights of patients with IGHD treated with GH monotherapy and of patients with MPHD, by sex, from the reports by Hibi et al. in 1989 (4) and Tanaka et al. in 2001 (7). Group P1 (early phGH period) consisted of patients with IGHD and MPHD, as reported by Hibi (4). Group P2 (late phGH period) consisted of patients with IGHD and MPHD who were initiated on phGH treatment, and Group R (early rhGH period) consisted of patients with IGHD and MPHD who were initiated on rhGH treatment, as reported by Tanaka (7). MPHD is associated with hypogonadism, and because bone age progresses poorly before replacement therapy with sex hormones or gonadotropins, the epiphyseal plate remains open, during which time hGH can be administered to increase height. Adult height in patients with MPHD is therefore positively related to the age at the start of replacement therapy. The mean ages at the start of replacement therapy for MPHD were 19.4, 16.9, and 14.9 yr in boys in Groups P1, P2, and R, respectively, and 18.5, 14.5, and 13.9 yr in girls, respectively. The age at the start of replacement therapy decreased with time, resulting in lower adult height. In the early phGH period, the age at the start of replacement therapy was very high because adult height was emphasized; later, psychosocial issues due to delayed puberty were emphasized instead. Consequently, the age at the start of replacement therapy decreased with time, and adult height decreased concomitantly. In contrast, the average adult height of IGHD patients has been increasing over time. Intramuscular injection of phGH was performed 2-3 times a week, whereas subcutaneous injection of rhGH was performed 6-7 times a week; even at the same total weekly dose, the therapeutic effect was greater when the injections were more frequent (18,19). However, the greatest contributor to the improvement in adult height during this period was the higher height SDS at the start of treatment. Fig. 5 shows the mean improvement in height SDS from the start of GH treatment to adult height. The early phGH period showed the greatest improvement: on average, an improvement of 1.9 SD was observed in girls. In contrast, an improvement of only approximately 1 SD was observed in the late phGH and early rhGH periods. Fig. 6 shows the mean height SDS at the start of treatment: in the early phGH period it was in the -4 SD range, in the late phGH period in the -3 SD range, and in the rhGH period in the -2 SD range. In other words, early initiation of treatment, before short stature becomes severe, is a major factor in improving adult height. However, improvement in adult height was also limited by the height restriction in the FGS termination criteria during the phGH period, under which GH treatment was to be terminated when height reached 160 cm for boys and 150 cm for girls; this height limitation was eliminated in 1992.

3) Adult height in the late rhGH period

In 2000, the dose unit was changed from biological activity in IU (international units) to mg by weight, with rhGH potency defined as 3 IU/mg. Under this definition, the 0.5 IU/kg/wk dose corresponds to 0.167 mg/kg/wk for GHD. The Japanese Society for Pediatric Endocrinology (JSPE) requested that the MHW increase the dose at the time the unit was changed, because the therapeutic dose in Japan was lower than those in Europe and the United States (20), and 0.175 mg/kg/wk was approved. Doses up to 0.3 mg/kg/wk were approved in the United States.
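These unit conversions are easy to verify. The short sketch below checks the figures quoted above, assuming only the potency definition of 3 IU/mg and simple week-to-day arithmetic; the function names are illustrative, not from any cited source.

```python
# Checking the rhGH dose conversions quoted in the text
# (potency definition of 3 IU/mg adopted in 2000).

POTENCY_IU_PER_MG = 3.0  # 1 mg of rhGH defined as 3 IU

def weekly_iu_to_mg(dose_iu_per_kg_wk: float) -> float:
    """Convert a weekly dose from IU/kg/wk to mg/kg/wk."""
    return dose_iu_per_kg_wk / POTENCY_IU_PER_MG

def weekly_mg_to_daily_ug(dose_mg_per_kg_wk: float) -> float:
    """Convert a weekly dose in mg/kg/wk to a daily dose in ug/kg/d."""
    return dose_mg_per_kg_wk / 7.0 * 1000.0

print(round(weekly_iu_to_mg(0.5), 3))         # 0.167 mg/kg/wk, as cited
print(weekly_mg_to_daily_ug(0.175))           # 25.0 ug/kg/d, the approved Japanese dose
print(round(weekly_mg_to_daily_ug(0.30), 1))  # 42.9, i.e. the ~43 ug/kg/d US upper dose
```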
When the WHO Expert Committee on Biological Standardization assigned multiple laboratories to measure the biological activity of 1 mg of WHO 88/624, the international standard, the average result was 3.39 IU/mg (range 3.13-3.69) (21). The change in unit from IU to mg therefore suggests that the actual biological activity of rhGH was higher than the converted value implies. After rhGH was approved, the pharmaceutical companies selling rhGH began building databases through post-marketing surveys and analyzing GH treatment. Data on adult height are frequently reported in analyses such as the International Cooperative Growth Study/Kabi International Growth Study (ICGS/KIGS) by Sumitomo Pharma, Pharmacia and Pfizer, which sold Genotropin®, and the Genetics and Neuroendocrinology of Short Stature International Study (GeNeSIS) by Eli Lilly, which sold Humatrope® (22)(23)(24)(25). A greater therapeutic effect was reported in patients with severe GHD than in patients with moderate/mild GHD, in terms of both short-term growth rate (24,26,27) and improvement in adult height and height SDS (8,10,22,24). Table 1 shows a comparison of adult height according to GHD severity. Age at the start of treatment tended to be lower in patients with severe GHD, although the difference was not necessarily significant among the reports. Height SDS at the start of treatment was lower in patients with severe GHD, and their adult height was greater in both men and women. The improvement in height SDS from the start of GH treatment to attainment of adult height was significantly greater in patients with severe GHD in all the reports. There was no difference in therapeutic effect between patients with moderate and mild GHD in terms of short-term growth rate (24,26) or adult height (8,10), indicating that patients with mild GHD responded to treatment similarly to patients with moderate GHD. Table 2 shows age and height SDS at the start of rhGH monotherapy for idiopathic GHD, together with adult height and the improvement in height SDS (8,9,(23)(24)(25)28). Compared with the early rhGH period (7), age at the start of treatment was lower, and height SDS at the start of treatment tended to be higher. Table 3 shows reports of adult height in patients with idiopathic GHD after rhGH treatment in Europe and the United States (29)(30)(31)(32)(33)(34). According to many reports, adult height SDS reached -1.0 SD in Europe and the United States. The differences between the adult heights of Japanese patients with GHD and those of patients in Europe and the United States after GH treatment were investigated (Table 2). One obvious factor was the difference in therapeutic dose: in many countries, patients were treated with a higher dose than in Japan. As evident in the report by Radetti et al. (29), treatment with higher doses resulted in greater improvement in height SDS from the start of treatment to adult height, and in significantly greater adult height (Table 3). However, the same report by Radetti et al. (29) showed that low-dose GH treatment produced a situation similar to that in Japan, and the reports by Maghnie et al. (30) and Rachmiel et al. (32) also showed that adult height in GHD patients was approximately -1.0 SD. Early diagnosis and treatment have been recommended in Japan; as a result, the age at the start of treatment has decreased and height SDS at the start of treatment has increased. However, GH treatment is expensive in Japan, and the subsidy system has a criterion of -2.5 SD or less.
Therefore, the mean height SDS at the start of treatment rarely exceeded -2.5 SD (Table 2). In Europe and the United States, by contrast, there have been several reports of a mean height SDS of -2.5 to -2.0 SD at the start of treatment (Table 3).

Comparison of Adult Heights between Japanese and Americans or Europeans in the Late rhGH Period

Furthermore, in the report by Rachmiel (32), at 0.18 mg/kg/wk, the improvement in height SDS from the start of GH treatment to adult height was +1.7 SD in men and +2.1 SD in women, an improvement of at least 1.5 SD. In contrast, in Japan there have been no reports showing an improvement in height SDS of 1.5 SD or more, except for the report by Tanaka et al. in 1999 (23), in which patients with severe GHD accounted for 25%. This suggests that there are racial differences in the response to GH treatment.

1) Racial differences in the response to GH treatment

Prompted by the observations above, racial differences in the response to GH treatment were investigated. Tanaka (35), based on the KIGS analysis, compared the improvement in growth rate and height SDS in the first year, adult height SDS, and improvement in height SDS up to adult height between Japanese and Caucasian patients (Japanese: 56 boys, 60 girls; Caucasian: 142 boys, 96 girls) whose age, height, height SDS, and GH dose at the start of treatment did not differ. The mean growth rate in the first year was 7.3 cm and 6.9 cm for Japanese boys and girls, respectively, and 7.9 cm and 8.2 cm for Caucasian boys and girls, respectively, a significantly higher growth rate in Caucasians. A significant difference was also observed in the improvement in height SDS. The mean adult height SDS was -1.80 SD for Japanese boys and -2.09 SD for Japanese girls, significantly lower than that of Caucasian patients (-1.39 SD and -1.39 SD, respectively); there was also a significant difference in the improvement in height SDS up to adult height. These results indicate that there are racial differences in the response to GH treatment, and that Japanese patients are less responsive than Caucasians.

2) Problems with the hGH therapeutic dose

GH treatment for GHD is referred to as replacement therapy, i.e., therapy that compensates for the deficiency with physiological amounts of the hormone. Exogenously administered hGH inhibits endogenous GH secretion via negative feedback; even at a dose of 0.175 mg/kg/wk, endogenous hGH secretion is mostly inhibited (36), indicating that exogenously administered hGH alone sustains growth during treatment. The clinical trial in GHD patients in Japan was conducted at only a single dose of 0.5 IU/kg/wk (0.167 mg/kg/wk); therefore, only this therapeutic dose was allowed. Catch-up growth was observed even at this dose, and the treatment was approved on the basis of 1-2 yr of data. When the unit was changed in 2000, the current dose of 0.175 mg/kg/wk (25 μg/kg/d) was approved upon request from the JSPE, which argued that the therapeutic dose in Japan was lower than that in foreign countries. With respect to physiological GH secretion in children, Martha et al. (37) examined hGH concentrations in blood collected every 20 min for 24 h in healthy boys and reported that mean daily hGH secretion was 610 ± 65 μg (21 ± 2.0 μg/kg) in 11 prepubertal boys, 740 ± 110 μg (19 ± 3.1 μg/kg) in 12 early-pubertal boys, and 1810 ± 250 μg (35 ± 5.0 μg/kg) in 16 late-pubertal boys. Although a single daily injection of GH is not equivalent to physiological secretion, the approved dose is comparable to the physiological amount of GH secreted.
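The comparison drawn in the next paragraph between the approved dose and these secretion rates can be made explicit with simple arithmetic. The sketch below uses only the figures quoted above; the variable names are illustrative.

```python
# Comparing the approved Japanese therapeutic dose of rhGH with the
# physiological GH secretion rates reported by Martha et al.
# (figures from the text; names are illustrative).

therapeutic_dose_ug_kg_d = 0.175 / 7 * 1000  # 0.175 mg/kg/wk -> 25 ug/kg/d

secretion_ug_kg_d = {
    "prepubertal": 21.0,
    "early-pubertal": 19.0,
    "late-pubertal": 35.0,
}

for stage, rate in secretion_ug_kg_d.items():
    print(f"{stage}: dose is {therapeutic_dose_ug_kg_d / rate:.2f}x secretion")
# prepubertal: 1.19x, early-pubertal: 1.32x, late-pubertal: 0.71x
```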
The therapeutic dose of 0.175 mg/kg/wk (25 μg/kg/d) used in Japan is slightly higher than the secretion rate during prepuberty but lower than the secretion rate during late puberty; it may be approximately equal to the secretion rate during mid-puberty. The therapeutic dose of 0.30 mg/kg/wk (≈ 43 μg/kg/d) approved in the United States is purely a pharmacological dose. However, is replacement therapy appropriate for GHD treatment? The clinical symptom of GHD is a decrease in the growth rate, and the improvement expected from replacement therapy with a physiological amount of GH is a normal growth rate. At the early stage of treatment, however, catch-up growth continues for approximately 2 yr, even with a physiological amount of GH. The bones seem to be more reactive, and catch-up growth presumably occurs because GH secretion was low before treatment. After 3 yr, the growth rate is approximately the same as that of healthy children. This is known as the waning phenomenon, but the author considers it to be the expected therapeutic effect of the physiological amount administered as replacement therapy. Thus, the improvement in height SDS is only approximately 1 SD in the first 2-3 yr; the onset of puberty is also affected, as described later; and adult height improves by only approximately 1 SD. This is the current situation of GHD treatment in Japan. When the therapeutic doses of rhGH approved worldwide for pediatric GHD are compared in terms of Humatrope®, the lowest single dose is used in Japan, as shown in Table 4, whereas a range of therapeutic doses has been approved in other countries, with doses up to 0.30 mg/kg/wk allowed in the United States and Taiwan. The reason for the low therapeutic dose in Japan is the attitude of Japanese pharmaceutical companies, which prevented them from properly conducting dose-finding studies.

3) Problems with the timing of pubertal onset

Another limitation is that hGH treatment may accelerate the onset of puberty. Pubertal development in GHD patients is slower than that in healthy children (38)(39)(40). Adult height in GHD patients is strongly and positively correlated with height at the onset of puberty (4,41). To increase adult height, it is therefore necessary to increase height at the onset of puberty as much as possible. If GH treatment is started early, patients catch up to normal height early because they respond well to GH (42). If GH did not affect the onset of puberty, that is, if puberty started at a similar age regardless of whether GH treatment was started early or late, then the earlier the treatment was started, the greater the height at the onset of puberty, and thus the greater the adult height should be. However, recent reports on patients who started treatment at < 10 yr (8,9,25) indicate that the mean adult height in patients with idiopathic GHD after rhGH monotherapy had not reached -1.0 SD, the mean improvement in height SDS had not reached 1.5 SD, and the improvement in adult height was poor (Table 2). Tanaka et al. (43) examined age at the onset of puberty in 83 boys and 51 girls who started GH treatment for IGHD at ages sufficiently younger than the mean age at puberty onset (11.7 yr for boys and 11.4 yr for girls), dividing them into two groups by age at the start of treatment (younger or older than 10 yr for boys and 9 yr for girls). In both boys and girls, the groups that started treatment earlier entered puberty significantly earlier, and there was a significant positive correlation between age at GH initiation and age at puberty onset.
These results suggest that GH treatment may normalize otherwise late-onset puberty. As a result, puberty began at almost the same height regardless of whether GH treatment was started early or late. This finding suggests that adult height, which is strongly correlated with height at the onset of puberty, will also be almost the same, and therefore that early treatment initiation does not by itself lead to an improvement in adult height.

Patients' and Parents' Expectations from GH Treatment

Statistically, because normal height is defined as -2.0 SD or more, a normal adult height is 159.1 cm or more for boys and 147.6 cm or more for girls. However, according to a questionnaire survey on preferred adult height, patients with short stature desire a minimum height of 160 cm for boys and 150 cm for girls; about half of the boys with GHD undergoing GH treatment reported a desired height of 170 cm, whereas the median desired height for girls was 160 cm, with almost all girls desiring an average adult height (44). These figures suggest high expectations from GH treatment. As a clinician treating GHD with GH, the author finds that it may be practically difficult to achieve the desired height, but the personal aim is to increase adult height as much as possible through treatment. The author hopes to achieve an adult height of at least 150 cm (-1.53 SD) for girls and 165 cm (-1.00 SD) or more for boys, because boys' parents are highly conscious of their sons' heights, and the percentage of children with short stature who visit the outpatient clinic for short stature is higher among boys (45). However, as previously mentioned, most Japanese patients with GHD are currently moderate or mild cases, which are less responsive than severe GHD; only one dose, the lowest in the world, has been approved; owing to the high cost of treatment and the criteria of the subsidy system, treatment cannot be started at an early stage of short stature; and the response to hGH is poorer in this population. These are real disadvantages.

Treatments that Increase Growth During Puberty

If height at the onset of puberty cannot be increased despite early initiation of GH treatment, improvement in adult height can instead be expected by increasing pubertal growth. One approach is to increase the GH dose during puberty. Stanhope et al. (46) conducted a prospective randomized trial in 32 children with GHD to determine whether an increase in the GH dose during puberty would improve adult height. At the onset of puberty, either an unchanged dose of 15 IU/m²/wk (0.15 mg/kg/wk) or a doubled dose of 30 IU/m²/wk (0.3 mg/kg/wk) was randomly assigned to each patient. The doubled dose of GH caused no significant change in height velocity during puberty; it was concluded that the higher dose shortened the duration of puberty without increasing adult height. Albertsson-Wikland et al. (47) likewise reported that the mean height SDS increase during puberty in boys with GHD was 0.7 SDS in both the group that received 0.7 IU/kg/wk (0.23 mg/kg/wk) and the group that received 1.4 IU/kg/wk (0.47 mg/kg/wk) as a once-daily injection; doubling the GH dose during puberty did not increase the height SDS gained during puberty. Other approaches include combination therapy with gonadal suppression (48)(49)(50)(51)(52) and combination therapy with anabolic hormones (9,25,53), whose effects have been confirmed. Progression of bone age during puberty depends on sex hormones, and fusion of the epiphyseal plate in both boys and girls has been clinically shown to be due to estrogen (54,55).
These methods aim to improve adult height by suppressing sex hormones to delay bone age progression and epiphyseal fusion, thereby extending the duration of pubertal growth. The first combination therapy for gonadal suppression used cyproterone acetate (48); subsequently, leuprorelin acetate (Leuplin®), a gonadotropin-releasing hormone (GnRH) analog, was used. Suppression of sex hormones delays bone age progression but also decreases the growth rate; it is therefore important to maintain the growth rate with an adequate rhGH dose. However, it is difficult to maintain a sufficient growth rate after the peak growth rate has passed, especially in girls, and concomitant use of rhGH and GnRH analogs for 3-4 yr or longer is required to confirm a therapeutic effect (51,52). GnRH analogs also suppress the maturation of secondary sexual characteristics. Anabolic hormones that are not affected by aromatases are not metabolized to estrogens and therefore do not advance bone age. They inhibit gonadotropins, as sex hormones do, through negative feedback on the central nervous system; consequently, testosterone levels decrease and bone age progresses slowly in boys (53,56). Anabolic hormones (methenolone acetate [Primobolan®], currently available in Japan) have growth-promoting and secondary sexual characteristic-promoting effects like sex hormones; thus, it is easy to maintain a sufficient growth rate and sexual maturation when they are used in combination with rhGH. However, because gonadotropins are suppressed, the increase in testicular volume is suppressed during treatment. In a report from the Tanaka Growth Clinic, the mean pubertal growth in boys was 25.4 cm with rhGH monotherapy and 31.5 cm with concomitant anabolic hormone therapy (53). Owing to the virilizing effect of anabolic hormones, it is difficult to use them in girls. These treatment reports were not derived from prospective controlled studies, and the treatments are not covered by health insurance.

Prospects for Improving Adult Height

In Japan, the following conditions are desirable for future GH treatment: (1) a therapeutic dose that enables sufficient catch-up from an early stage; (2) a dose-increase range that allows sustained catch-up; (3) good adherence to treatment; (4) a high height at the onset of puberty and a satisfactory adult height; and (5) a low-cost therapeutic dose. In recent years, long-acting hGH (LAGH) has been developed and approved in Japan for the treatment of GHD in adults (57) and children (58). Patients and parents prefer once-weekly injections over daily injections (59), and reducing the injection frequency improves patient adherence. In pediatric trial results with weekly injections, the mean growth rate after 1 yr was greater than that with daily injections of 0.175 mg/kg/wk, depending on the therapeutic dose of LAGH. There were no significant adverse events other than the pain caused by the injection (58,60). Although no long-term therapeutic results have yet been reported, a significant effect on adult height is expected. Given the high expectations of children with GHD and their parents (56), a future transition from daily rhGH to LAGH can be expected if no adverse events occur during long-term use.
2 Landscape Within The Framework Of Environmental Assessment At Project And Planning Levels

Assessing the forecasted effects of climate change on landscape is important for both strategic and project-level assessments.

Environmental Impact Assessment - Concept, Implications And Wider Context

Environmental Impact Assessment (EIA) can be defined as a systematic process in which the potential environmental impacts of a planned activity are considered. More precisely, it is a process for evaluating the likely environmental impacts of a proposed project or development, taking into account inter-related socio-economic, cultural and human-health impacts, both beneficial and adverse, prior to a decision being made on whether or not the proposed project or development should be approved (CBD, 2017). A number of features can be extracted from these definitions which help to further outline the nature and aims of EIA. EIA is anticipatory, in that it needs to look into the potential consequences of project developments at the earlier stages of decision-making, prior to a decision being made (Partidário, 1993). This makes EIA a decision-making support tool, rather than a decision-making tool. Following notions of positivism and scientific rationality, it is assumed that EIA can support and assist in making better decisions if the process is informed by objective data evaluated according to a systematic and structured procedure (Weston, 2004). As acknowledged by Weston (2004, p.315), "the language of rationalism and EIA is indistinguishable", and EIA's process mirrors that of rational planning processes (Lawrence, 2000; Elling, 2009). This entails collecting information about the affected environment, and using that information effectively so that the planned objectives can be met (Elling, 2009; Weston, 2010). This in turn emphasises another feature: EIA is objective. The rigour with which it is supposed to be conducted, and the evidence base on which evaluations are made, have given EIA the status of a scientific tool that aims to enhance knowledge about the environmental effects of proposals (Owens and Cowell, 2002; Bartlett and Kurian, 1999) through the collection of both qualitative and quantitative data. However, as some have argued, the scientific and objective nature of EIA can also be used to legitimise planned developments or decisions that have already been made (Bühr, 2009; Wood, 2003), or to mitigate negative effects rather than lead to the abandonment of certain proposals (Jay et al., 2007). The objectivity of EIA, and the rigour with which the process should be conducted, require that the data collected and the evaluations undertaken be summarised in a report that describes the significance of likely impacts on the environment and is open to public scrutiny. This enhances the transparency of the process and makes EIA a participatory environmental management tool, which relies on, and recognises, consultation and public participation as an established step to carry out and a way of bringing communities into the process (Elling, 2009).
Early and continuous communication between developers, statutory consultees, interest groups and members of the public with an interest in, or who might be affected by, a proposed development can enhance the evidence base of an EIA by providing advice and information, and can assist in the evaluation of potential impacts by providing local knowledge, or values and perceptions that help to identify features valued by communities (Shepherd and Bowler, 1997). The importance of involving the public in environmental policy- and decision-making is widely acknowledged, as reflected in the legal EIA requirements of numerous countries and in international conventions such as the Aarhus Convention, which recognises EIA as instrumental in providing access to justice, and in empowering public rights to information and greater democracy in decision-making (Gonzalez et al., 2008; Creighton, 2005). However, as noted by Shepherd and Bowler (1997), public participation can make EIA costly and time-consuming, sometimes resulting in public involvement and consultation being conducted as a mere procedural exercise, still today (Morgan, 2012). EIA is also considered an advocacy tool for (the protection of) the environment, as one of its aims, or "proximate aims" as noted by Jay et al. (2007), is to identify environmental impacts and take them into consideration in decision-making (Cashmore et al., 2004). While it cannot be assumed that the resulting decision will be more environmentally friendly, it is believed that by systematically assessing environmental information, a process of learning will take place and attitudes towards the environment will improve (Jha-Thakur et al., 2009; Jay et al., 2007; Cashmore et al., 2004; Weston, 2010). However, as EIA does not require decision-makers to give any particular weighting to the environmental information taken into account, political considerations or weightings often prevail (Jay et al., 2007), making EIA's claim to be an advocacy tool for the environment rather weak (Benson, 2003; Owens and Cowell, 2002; Wood, 2003). More recently, EIA has been perceived as a tool that can help design and plan more sustainable forms of development (Glasson et al., 2005). Its approach to the environment, which includes socio-economic, cultural and human-health impacts; its focus on impacts, including cumulative ones; and its participative nature, requiring local views and knowledge, can help facilitate both intragenerational and intergenerational equity (Lee and George, 2000; Bruhn-Tisk and Eklund, 2002). However, the extent to which this is actually happening is debatable, with the environmental focus in EIA still dominating the way in which this decision-making support tool is perceived (Cashmore et al., 2004; Jay et al., 2007). EIA is mandatory and formalised. Since it was first instituted in the United States via the National Environmental Policy Act of 1969 (NEPA), EIA has been introduced into legislative frameworks around the world (Wood, 2003), with many countries, international organisations and banks developing their own EIA systems (Lee and George, 2000), incorporating formal procedures into either planning or other areas of environmental decision-making.
The formalisation of EIA has progressed consistently over the years, and EIA is now "recognised in international conventions, protocols and agreements, including the Convention on Transboundary Environmental Impact Assessment; the Convention on Wetlands of International Importance; the Convention on Access to Information, Public Participation in Decision-making and Access to Justice in Environmental Matters; the United Nations Framework Convention on Climate Change; the United Nations Convention on the Law of the Sea; the Protocol on Environmental Protection to the Antarctic Treaty." (Morgan, 2012, p.6). As noted by Morgan (2012), a search carried out in 2011 found that 191 of the 193 members of the United Nations had either EIA legislation or references to EIA, or to an equivalent process, in their legislation. EIA's strong legislative basis has therefore contributed to making it one of the most universally and formally recognised and practiced assessment tools for achieving environmental protection and solving environmental problems (Jay et al., 2007; Morgan, 2012). The main features and aims of EIA presented so far, including the EIA report or Environmental Statement summarising the findings of the assessment, can be applicable to other forms of impact assessment and to different levels of decision-making. Yet in most jurisdictions, EIA is commonly understood to apply to the project level, which refers to concrete development projects. Within the EU context, Article 1(2) of the EIA Directive defines "project" as: "the execution of construction works or of other installations or schemes, other interventions in the natural surroundings and landscape including those involving the extraction of mineral resources." Further, the Directive specifies the project categories for which developments ought to be subjected to an EIA, which are listed in Annexes I and II (see Directives 2011/92/EU and 2014/52/EU). However, a review of the effectiveness of the application of the EIA Directive in EU member states revealed that in practice it can be difficult to establish whether an individual project fits a project category and should be subjected to an EIA or not. A number of court cases have prompted the EU to issue further guidance to assist with the interpretation of the definitions of project categories in Annexes I and II, and to provide key principles clarifying the purpose of the Directive as derived from the case law of the Court (EC, 2015). To a certain extent, the academic literature has also attempted to grasp what a project is, and how it compares to a policy, plan or programme. As indicated by Wood and Dejeddour (1992), the meanings of these different levels of impact assessment application vary considerably, with some countries calling "policies" "plans" and other countries referring to "plans" as "policies". Within the context of an iterative forward planning process, which starts with the formulation of policies at the upper level, followed by plans, programmes and projects, they consider a policy "as the inspiration and guidance for action, a plan as a set of co-ordinated and timed objectives for implementing the policy, and a program as a set of projects in a particular area" (ibid., p.8). Within this framework, "project" therefore refers to the definition of actual developments.
Following this introduction, the next two sections explore EIA in more detail, looking particularly at the consideration of landscape in EIA and at the procedural steps for carrying out landscape impact assessment in EIA.

Application Of Landscape Impact Assessment Within EIA

Since EIA first came to the fore in the 1970s, other forms of impact assessment have been introduced, often in response to different needs (Petts, 1999a) or, as argued by Morgan (2012), to address weaknesses arising from EIA practice. Under the umbrella term of EIA, specific forms of impact assessment are becoming increasingly established. Among many others, these include: Social Impact Assessment (SIA), which evaluates the impacts of a proposed development on humans and on the ways in which "people and communities interact with their socio-cultural, economic and biophysical surroundings" (IAIA, 2017); Health Impact Assessment (HIA), which the World Health Organization defines as "a means of assessing the health impacts of policies, plans and projects in diverse economic sectors using quantitative, qualitative and participatory techniques" (WHO, 2017); and Strategic Environmental Assessment (SEA), which evaluates the impacts of a proposed policy on the environment. Its strategic nature is what makes SEA distinct from EIA, which, as previously suggested, focuses on assessing the environmental impacts of development proposals at the project level. SEA and the consideration of landscape in SEA will be explored in more detail later in this chapter. There are also forms of impact assessment that focus on specific environmental receptors, for example Water Impact Assessment (WIA), Ecological Impact Assessment, and Landscape and Visual Impact Assessment. The latter is explored in more detail below. Landscape has long been considered in EIA and/or in land use planning. As summarised by Knight (2009), assessments conducted up to the 1980s generally focussed on the designation of areas of landscape quality, prompting developers to pursue the non-designated areas. In the 1980s, landscape assessments focused on identifying what makes one landscape distinct from another, setting the foundations for the concept of landscape character, now central to landscape assessments, and a change of "emphasis from landscape as 'scenery' to landscape as 'environment'" (ibid., p.123). While conventional EIA practice has often been based on the assumption "that landscape issues are passive mitigation, to be added after project design", there is also growing recognition that a more positive approach to EIA is needed, that is, one that considers landscape and visual effects as essential to project design, in which impacts should be avoided rather than "simply" mitigated (Hankinson, 1999, p.347). Ratifications of the European Landscape Convention are expected to enhance the consideration of landscape in EIA, at the very least by providing a definition of landscape (Antonson, 2011). However, some scholars have raised criticisms about the way in which landscape is considered in EIA. Wood (2008), for example, raises questions about the use of, and lack of consistency in, expert judgements in EIA when determining the significance of landscape impacts, describing it as "an opaque or black box exercise" (p.25).
However, it is also worth noting that landscape considerations are probably the most subjective of the impacts typically considered in an EIA, which presents added challenges as well as the need for qualitative approaches (Morris and Thérivel, 2009; Knight, 2009). Hankinson (1999) emphasises common technical problems, which include excessive reliance on computer-generated outputs, problems relating to access and timescale, and resistance to accepting that not all changes to landscape are negative. Further, Bond et al. (2004) note how in EU practice there has been limited consideration of cultural heritage in EIA, often restricted to the built heritage and excluding cultural values, including those associated with landscape. Reflecting on the consideration of landscape in the Swedish EIA process, Antonson (2011) concludes that knowledge of landscape, in the terms of the European Landscape Convention, appears to be limited in EIA practice and among the participants in the EIA process, including EIA professionals. The consideration of landscape in EIA is required in a number of legal systems. For instance, the European Union EIA Directives require that the impacts of a project on the population and human health, on material assets, cultural heritage and the landscape, and on the interrelationship between all these aspects, should be identified, described and assessed (art. 3, Directive 2014/52/EU, EC, 2014). This therefore includes the consideration of both direct and indirect effects of a project on physical and human features, as well as the consideration of effects on landscape, including inherent changes in landscape character, regardless of whether visual effects take place (Hankinson, 1999). Fulfilling and complying with this requirement is a core part of the EIA process for member states of the European Union and for many developed countries. In addition to EIA legislation, the practice of project-level landscape EIA is also supported by legislation that usually relates to landscape quality designations. For instance, internationally, the European Landscape Convention provides a framework for legislation addressing landscape issues in the ratifying countries. Nationally, in the UK for example, the 1949 National Parks and Access to the Countryside Act designates National Parks as well as Areas of Outstanding Natural Beauty (AONB) in England and Wales, with equivalent legislation in Scotland (the National Parks (Scotland) Act). Other relevant legislation can be associated with planning rather than landscape designations, e.g. greenbelts in the UK. Neither the EU Directives nor international or national legislation prescribes a methodology for how landscape effects should be assessed in EIA (Wood, 2008). International guidelines are also not available, hindering the development of good practice particularly in developing countries (Tahsildar and Flannery, 2012). Numerous guidelines have, however, been produced, with the joint effort of the UK's Landscape Institute (LI) and Institute of Environmental Management and Assessment (IEMA) being one of the most well-known and cited, possibly justified by the extent to which project-level landscape impact assessment is widely practiced in the UK (ibid.).
The Guidelines for Landscape and Visual Impact Assessment (GLVIA3), published in 2013, is in its third edition; the latest edition is set within the context of the European Landscape Convention and includes an increased emphasis on green infrastructure, ecosystem services, and developments in landscape character assessment, seascape character assessment and historic landscape characterisation. As stated in the guidelines, landscape and visual impact assessment (LVIA) can be conducted either formally as part of an EIA, or informally as part of a development proposal or planning application, though the core approach is similar in both cases (LI and IEMA, 2013). In the first case, the LVIA is carried out as a separate theme and covered in the Environmental Statement or Report. In the second case, the approach is more informal and flexible, though the key stages of an LVIA still apply. A flow chart representing the EIA and LVIA process is provided in Figure 2.1. The participative nature of EIA, and the European Landscape Convention's strong emphasis on seeking opportunities for public participation (Antonson, 2011), strengthen the definition of landscape as a cultural and social construct, which includes the consideration of aesthetic and perceptual factors as well as natural, social and cultural factors (Knight, 2009). The requirement to include in the assessment the effects of a potential development proposal on the interrelationship between people and place means that landscape cannot be a matter for experts only. Individual and community experiences of, and relationships to, landscapes are also important, and should feed into EIA processes. According to Antonson (2011), the public's values and views should be weighed on a par with expert views. Landscape value may be recognised by experts through landscape designations, via either planning or environmental legislation, but landscape may also be valued by people for its tranquillity or wilderness, for its cultural associations, for conservation reasons or for other perceptual aspects, without an official designation needing to be in place. The public also holds local landscape knowledge which could be beneficial to both project design and the EIA process itself, further supporting Antonson's view. Other interest groups who should be involved in the process are the regulatory/competent authority, for example the local planning authority and its landscape officers; statutory consultees, i.e. those organisations that must be consulted according to the law; and non-statutory consultees, i.e. other interest groups, which might include conservation bodies or residents, who should be consulted because they might either have an interest in, or be affected by, the potential development proposal.

Landscape Impact Assessment Procedure Within Environmental Impact Assessment in Selected European Countries

As mentioned in the previous section, under the overarching tool of EIA, different forms of impact assessment have developed, such as landscape impact assessment undertaken within a conventional EIA process. As such, the process follows the well-known steps of EIA (see Fig. 2.1) and adopts similar methodologies and terminologies. Within the context of the EU Directives, following Morris and Thérivel (2009) and Hankinson (1999), these procedural steps normally include the following. Screening: this is a very early and essential step in an EIA procedure, which aims to determine whether an EIA is required or not.
It entails a preliminary assessment which normally seeks to answer two questions: (a) whether the proposed development will impact the environment, including the consideration of landscape change and visual impacts; and (b) whether the potential impacts are likely to be significant. If the answer to the second question is yes, then an EIA is required and the proposed development must be formally subjected to one; if the answer is no, then an EIA is not required. Within the EU, the EIA Directives identify the projects for which an EIA is mandatory, and those for which an EIA might be required through case-by-case decisions based on three criteria: (1) the characteristics of a project; (2) the location of a project and the environmental sensitivities of the area (including landscape rarities and areas of particular historical, cultural or archaeological value, or designated to be of interest under legislation); and (3) the characteristics of the potential impacts, determined in relation to the first two criteria (a simplified sketch of this screening logic is given after the description of the baseline studies below). Scoping: the aim of this step is to identify the key receptors, impacts and project alternatives to consider, the methodologies to apply, and who should take part in the consultation process. Normally conducted at the early stages of project design, the findings of this step are summarised in a scoping report, made available to all participants in the EIA process. In relation to landscape, this step determines whether there is a need for a landscape and/or visual impact assessment or not. If one is required, then its scope needs to be defined. Issues likely to contribute to defining the scope of the landscape and visual assessment include: (a) a description of the proposed site and of its surrounding landscape; (b) a description of the proposed development; (c) an initial draft of the issues to cover in the baseline studies; (d) the alternatives considered; (e) the impacts pre-determined during the screening process; (f) the proposed assessment methodology; and (g) mitigation measures. Baseline studies: these concern the description and evaluation of the baseline conditions of the area likely to be impacted by a proposed project. They constitute the evidence base of the assessment, and include socio-economic, environmental and any other relevant information concerning the likely impact area, some of which might already be available and some of which might need to be collected through site visits or fieldwork. The findings of this step should not only outline the limitations of the baseline study conducted, for example in relation to data accessibility or accuracy, but also provide an initial assessment of the value of key receptors and their sensitivity to impacts. In relation to landscape, different methods can be used to collect the baseline data in support of the landscape and/or visual assessment. These might include a landscape character assessment (see Box 2.1); desktop studies based on available and accessible published data (e.g. geology and soil maps, ordnance surveys, aerial photographs, existing policy, plans and legislation which set out designations of different types and land uses/covers/forms, but also literature, paintings or historical data to determine associations of cultural value); and field studies, including site and landscape surveys, with photographic records, sketches and survey sheets.
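As flagged in the screening step above, its decision reduces, at its core, to two yes/no questions. The sketch below is a deliberately simplified illustration of that logic only; the function and parameter names are assumptions, not taken from the EIA Directives, and real screening weighs the three case-by-case criteria through professional judgement.

```python
# Simplified illustration of the two-question EIA screening logic
# (names and structure are illustrative, not taken from the Directives).

def eia_required(impacts_environment: bool,
                 impacts_likely_significant: bool) -> bool:
    """Return True if the proposed development should be subjected to an EIA."""
    # (a) Does the proposal impact the environment at all,
    #     including landscape change and visual impacts?
    if not impacts_environment:
        return False
    # (b) Are those impacts likely to be significant? Judging this is where
    #     the case-by-case criteria (project characteristics, location
    #     sensitivity, impact characteristics) come in.
    return impacts_likely_significant

print(eia_required(True, True))   # True -> formal EIA required
print(eia_required(True, False))  # False -> no EIA required
```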
Visual assessments in particular often make use of computer-aided systems to explore the significance of the impact of a proposed development from different viewpoints and to generate a zone of theoretical visibility (i.e. a definition of the area with potential visual implications). Impact prediction and assessment: this is the step in which the potential impacts of a proposed project on the environment are taken into account, including those on landscape. The impacts may be of different types: direct or primary; indirect or secondary; and cumulative and synergistic, thus resulting from impact interactions. In addition, impacts can be positive or beneficial; negative or adverse; short, medium or long term; reversible or irreversible; and permanent or temporary. The severity of impacts is defined in terms of magnitude, and both qualitative and quantitative methods can be used to establish impact magnitudes. The approach set out in the UK's guidelines for LVIA (LI and IEMA, 2002) is summarised in Table 2.1. During this step, the significance of impacts is also determined, which is an assessment of an impact's magnitude in relation to the value, sensitivity or recoverability of the (environmental) receptor impacted, as determined during the baseline studies step (an illustrative sketch of how magnitude and sensitivity can be combined is given after the mitigation discussion below). In relation to landscape, the assessment stage will focus on establishing the potential impacts of a development activity on: (a) the site's and the local landscape's character; (b) the extent to which the landscape is able to cope with the implementation of the proposed development and the changes resulting from it; and (c) the local communities and existing developments on the site and in the surrounding area. In relation to visual assessment, this step explores: (a) theoretical and potential visibility; (b) the views impacted and the viewers affected; (c) the degree of visual intrusion or obstruction; (d) the distance of the views; and (e) the impacts on the character and quality of the view. This process is represented in Figure 2.2. By way of example, a high visual impact is defined as an evident change in landscape components, character and quality of landscape (Knight, 2009, p.135-136, originally in LI and IEMA, 2002). Mitigation: specifically, the identification of those measures that can help avoid, reduce, remedy or even compensate for the significant adverse effects of predicted negative impacts resulting from a proposed project, including those of the main alternatives considered. Different measures will be needed to mitigate the effects of adverse impacts on different environmental components and receptors. When identifying mitigation measures, it is good practice to apply the precautionary principle, meaning that measures should be in place even in the absence of strong evidence confirming that a negative impact will occur. In addition to mitigating impacts, good practice also recommends that opportunities for enhancing the environment, including improvements in environmental conditions or features, should be sought, emphasising the advocacy role of EIA previously discussed. As argued by Hankinson (1999), most landscape impacts could be avoided or reduced in significance by amending the project design; this would entail that the consideration of landscape be an integral part of project design through an initial landscape assessment, resulting in a more positive approach to project design. This is in contrast to treating mitigation as a problem to solve or moderate (ibid.) through an EIA-based approach conducted after the project design stage, yet before the approval stage.
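In practice, the combination of impact magnitude and receptor sensitivity into a significance rating is often expressed as a matrix. The sketch below illustrates the general idea only: the category labels and cell values are illustrative assumptions, not the matrix of Table 2.1 or of any particular guidance document.

```python
# Illustrative significance matrix combining impact magnitude with receptor
# sensitivity (labels and cells are assumptions for illustration; actual
# LVIA matrices vary between guidance documents and practitioners).

SIGNIFICANCE = {
    ("high", "high"): "major",         ("high", "medium"): "major/moderate",
    ("high", "low"): "moderate",       ("medium", "high"): "major/moderate",
    ("medium", "medium"): "moderate",  ("medium", "low"): "minor",
    ("low", "high"): "moderate",       ("low", "medium"): "minor",
    ("low", "low"): "negligible",
}

def impact_significance(magnitude: str, sensitivity: str) -> str:
    """Look up a significance rating from magnitude and receptor sensitivity."""
    return SIGNIFICANCE[(magnitude.lower(), sensitivity.lower())]

# Example: a high-magnitude change affecting a receptor of medium sensitivity,
# e.g. a locally valued but undesignated landscape.
print(impact_significance("high", "medium"))  # "major/moderate"
```

Making the matrix explicit renders the judgement auditable, which speaks directly to Wood's (2008) "black box" criticism noted earlier, even though the judgements that populate the matrix remain expert ones.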
Monitoring: within the context of landscape EIA, the monitoring of landscape and visual impacts has had limited practice to date. This is because, unlike other receptors or thematic issues such as noise, water or air quality, landscape quality cannot easily be monitored on a quantitative basis. Where practiced, monitoring has typically consisted of a process of quality control, ensuring that development proposals have been implemented as approved and that the impacts have not been more significant than reported in the Environmental Report or Statement. Local residents may practice informal monitoring, particularly if the visual impacts associated with the implementation of a project are greater than expected or than what they had initially deemed acceptable. Though not originally required by the EU Directive, the amended EIA Directive (2014/52/EU), which entered into force on 15 May 2014 to simplify the rules for assessing the potential effects of projects on the environment, now formally requires developers to take the necessary measures to avoid, prevent or reduce the occurrence of adverse effects. The monitoring procedure is to be established by individual EU member states. The deadline for transposing the rules introduced by the latest amendment to the EIA Directive, including the monitoring requirements, was May 16, 2017. Environmental Statement: this is not a procedural step as such, but an output of the EIA process. It is a report, or Environmental Report, which summarises the EIA findings and proposals. It should include a non-technical summary so that the report can be fully understood by non-experts and subjected to public scrutiny. Where a landscape and visual impact assessment is conducted as part of, and within, an EIA, the findings of the LVIA normally appear as either separate or combined sections of the Environmental Statement or Report. If the LVIA was conducted as a standalone exercise, the findings are normally presented as a separate report in support of a planning or project application. Whether produced as part of an Environmental Statement or as a standalone report, the findings of the LVIA should be presented in a manner that facilitates the widest dissemination, legibility and accessibility, making cross-referencing of files, documents and tables easy to understand and follow. In most EU member states, landscape-based EIA tends to follow the procedural approach outlined by the European EIA Directives, with the wording of national EIA legislation often mirroring that of the EU Directives. What distinguishes European practices is the way in which landscape is understood in more conceptual terms, that is, as a result of different countries' planning traditions and cultural approaches to landscape, which go beyond procedural aspects and predate the EIA Directives and the European Landscape Convention, as briefly illustrated in the following examples. The UK, for example, has a long tradition of taking landscape considerations into account that goes back to Victorian times, with the creation of botanical gardens, urban parks, architectural gardens and landscapes. Since then, the way in which landscape is understood in the UK has evolved to reflect "the relationship between people and place, providing the setting for our day-to-day lives" (Knight, 2009, p.121).
In more detail, in England, the UK's Department for Environment, Food and Rural Affairs (DEFRA) and Natural England emphasise the unique combination of elements and features that determine the way in which landscape is perceived, experienced and valued by people. These elements and features include "topographic features, flora and fauna, land use, sights, sounds, touch and smells, cultural associations, history and memories" (Natural England and Defra, 2014). In Scotland, Scottish Natural Heritage (SNH) defines landscape as "more than just the view". They go on to suggest that it is about how people relate "to places and to nature - what they value about it, and how they respond to changes in the landscape" (SNH, 2015). The timeless and unique features or characters of a landscape, aptly expressed over centuries through the works of poets, writers and painters (Tudor, 2014), have been particularly appreciated in the UK and have resulted in the development of numerous studies exploring what gives a landscape its unique character. In England, these studies started in the 1980s; they set the foundations for the "countryside character programme" of the early 1990s and evolved into a best-practice guide, namely "Landscape Character Assessment: Guidance for England and Scotland" (2002). This approach is now widely adopted across the UK's nations, and further afield (e.g. Keun-Ho and Pauleit, 2007; Jellema et al., 2008). It is further explained in Box 2.1. Italy is another EU country with a long-lasting tradition of addressing landscape, though from a different starting point than other countries. Just as in other countries, "landscape" as a term which can encompass different meanings is still evolving. Instruments for "controlling" or preserving the integrity of landscapes were introduced in Italy as early as 1909 and 1939, with laws on natural beauty aimed in particular at safeguarding the national landscape heritage, and laws aimed at protecting elements of landscape that present artistic, historical, archaeological or ethnographic value, including villas, parks and gardens. The development of regional landscape plans was then made compulsory in 1985. The need to protect the natural beauty of landscapes is also strongly reflected in the Italian constitution (Ventura, 2008), and still today, though a more modern and sophisticated understanding of landscape that appreciates its complexities exists, a protectionist approach aimed at safeguarding built and natural landscapes in the public and national interest remains. With the 2000 European Landscape Convention, landscape plans in Italy have further evolved to account for issues of identity and for public perceptions of landscapes (De Montis, 2016), with the principles of the European Landscape Convention transposed into the country's code for cultural and landscape assets (Codice dei Beni Culturali e del Paesaggio) (Legislative Decree n. 42, January 22nd, 2004) and urban codes (Codici Urbani). Today, the urban codes portray landscape as an expression of territorial identity, resulting from natural and man-made actions and interactions. The codes go on to legislate that it is those aspects and characters constituting material and visible representations of cultural value that should be protected as expressions of national identity.
The aim of landscape protection should be to recognise, safeguard and, where necessary, recover the cultural values that landscapes express, through a process of valorisation both of areas that need requalifying because they are degraded or compromised, and of new areas of landscape value, which should be sought and established in a coherent and integrated way (Legislative Decree n. 42, January 22, 2004).

Box 2.1: Landscape Character Assessment in the UK.

It is widely acknowledged that landscapes vary. They are more than just a visual image, which could be perceived by different people in different ways. Landscapes are history; they are the result of different physical and socio-economic factors, including their geology, soils, topography, land cover, hydrology, nutrient cycles, carbon fluxes, climate, customary laws, economic activities and cultural developments (Selman, 2012; Tudor, 2014). It is the interrelationship between each of these factors that determines a landscape's character, making it distinct from any other landscape. Accordingly, a Landscape Character Assessment (LCA) is "the process of identifying and describing variation in the character of the landscape. It seeks to identify and explain the unique combination of elements and features (characteristics) that make landscapes distinctive" (Tudor, 2014, p.8), and a means of monitoring and managing changes in landscapes, providing the basis for informed value judgements and decision-making. Consequently, it is essential that communities of place, of practice and of interest are all involved and engaged in an LCA process. As a process, LCA is increasingly used to inform the planning of natural, rural and urban areas and, more recently, its scope of application is extending to coastal and marine areas, with the development of Seascape Character Assessments (SCA) (Natural England and Defra, 2014). Both LCA and SCA consist of a four-stage process (Natural England and Defra, 2014; Tudor, 2014; The Countryside Commission and Scottish Natural Heritage, 2002):

1. Define the purpose and scope of the assessment: the area it will cover, the scale at which it will be carried out, the level of detail, the resources required (including skill sets), stakeholder engagement, etc.
2. Conduct a desk study: collect and review relevant background documents, spatial data and other forms of information, for example by speaking to stakeholders and communities involved with the landscape.
3. Conduct a field survey: test the findings of the previous stage and draft areas of common character, developing an understanding of the landscape's aesthetic, perceptual and experiential qualities.
4. Classification and description: classify, map and describe the landscape's character areas, types and characteristics. This stage will have been informed by the previous two stages and by stakeholder engagement exercises.

Once completed, the LCA will provide a document detailing the character of a landscape and an annotated map showing the character areas or types. It will most likely be complemented by photos, illustrations, diagrams and other survey data collected, enhancing its value as a decision-making support tool that can provide robust evidence for the baseline studies step of an EIA, linked to place and to the characteristics that contribute to creating a sense of place.
As suggested by the Italian example, signing and ratifying the European Landscape Convention is helping to align European countries' understandings of landscape, with many countries opting to follow the ELC's definition in the absence of national legislation. In Sweden, for example, the definition of landscape is increasingly shifting towards an understanding that appreciates landscape also as a social construct and recognises the need for public participation. The official definition of landscape in Swedish road planning is now one that encompasses "both natural and human features, experience, identity and character" (Antonson, 2011, p.195), with landscape "no longer a matter solely for experts" (ibid.). This process of bringing existing legislation and guidelines in line with the ELC is still ongoing, and is bound to make everyday practice of EIA difficult, due to inconsistencies in terminologies and references to different concepts or understandings of landscape. Only when the ratification processes are complete across EU Member States will the implications for EIA practice become clearer. It might well be that the final outcome of this process results in the alignment of definitions of landscape and of principles for landscape protection and management informing EIA practices across ratifying countries, in the same way that the EU EIA Directives standardised (to a certain extent) EIA procedures. Landscape Impact Assessment And Land Use Planning Within The SEA Process Land Use Planning (Due to the very large extent of this topic, the authors do not consider land use planning instruments within the context of other planning frameworks and their supporting instruments, such as strategic planning, spatial planning, communicative planning or rational planning; this subchapter focuses on land use planning in relation to both landscape planning and landscape impact assessment.) Land use planning can be defined as an interdisciplinary and comprehensive approach aimed at balancing regional development and the physical coordination of space, based on an overall strategy. It gives geographical expression to the economic, social, cultural and ecological policies of society (Council of Europe, European Regional/Spatial/Land Use Planning Charter, CoE, 1983). Or, put more simply, land use planning can be defined as the management and development of space to create places that meet the needs of society, of the economy and of the environment in the quest for sustainable development. It relies on methods that are largely used in the public sector to influence the future distribution and rational organisation of development activities (EU Compendium of Spatial/Land Use Planning Systems and Policies, CEC, 1997). In many countries land use planning represents a continuous and systematic activity which covers complex issues of spatial development at the zonal, local, regional and national levels throughout various procedural stages, including inventories, analyses, planning, decision-making and monitoring. It follows organisational rules and provides for the physical and temporal co-ordination of buildings and of other activities influencing the development of the area. Furthermore, it is intended to be inclusive and informed by the public.
As such, land use planning can be considered an instrument for sustainable development, as it not only ensures that spatial conditions are met, but also aims to ensure access to social and technical infrastructure and a good quality of environment, and to guarantee the prioritisation of social goals, based on the views of the wider public. Land use planning is normally practiced following the principles of subsidiarity and planning sovereignty of the basic spatial planning units, which tend to be municipalities or local planning authorities. This requires co-ordination of various interests among different decision-making tiers (e.g. between the municipality and the region), but also between economic sectors (water management, agriculture, transport and others), public services (health care, social welfare, education, trade), the private sector (including for-profit and non-profit business), and individual citizens. Land use planning creates conditions for effective public and private investments, influencing public spending to ensure equal access to education, social and technical infrastructure, employment opportunities and suitable housing as a basic precondition of social equity. The actual planning tool of land use planning consists of planning documentation, usually represented by national spatial development strategies, regional plans, local plans and zoning plans. The value system of a society is projected into the legally defined priorities and objectives embedded in planning documentation, which is then subjected to approval. Approval of the plan is normally a decision made by the competent planning authority and has legal effect. According to the principle of subsidiarity, planning approval, including its objectives, is conditional on the plan's accordance and compliance with objectives, rights or principles guaranteed by the state. Often, national governments act as guarantor of the public interest, and can overrule the planning sovereignty of municipalities and other planning subjects. The practice of land use planning varies considerably, but usually includes the following four stages (Wood, 1992): a) formulation of goals and objectives, b) survey, prediction and analysis, c) generation and evaluation of alternative plans, and d) decision, implementation, monitoring. Land use planning is in many countries closely coordinated with landscape planning. Linkages between land use planning and landscape planning differ from country to country, as illustrated in Table 2.2. [Table 2.2 excerpt: at the regional level, for example, a landscape framework plan (regional environmental quality standards; regional environmental focal points of conservation, development and redevelopment) sits alongside the regional plan (regional economic and social objectives, ...).] The range of possible approaches goes from landscape planning as an optimising method of spatial arrangement of the landscape based on respect for landscape ecological conditions; to landscape planning approaches that focus mainly on landscape character and landscape scenery; to landscape planning as a tool for the protection of cultural heritage; or to landscape planning reflecting predominantly nature protection efforts. Landscape planning usually provides aims and principles for nature conservation and landscape management for land use planning procedures. Landscape plans identify measures for mitigation and/or compensation of significant adverse effects on nature and landscape of those actions proposed in land use plans. Haaren et al.
(2008) pointed out that a close coordination of land use planning and landscape planning can only be utilised if landscape planning is drawn up at different levels and on different scales, just like the overall land use planning system or other sectoral planning of a spatial nature. Land Use Planning And SEA Strategic environmental assessment (SEA) is an environmental management tool that refers to the environmental assessment of plans, policies, programmes or legislation. SEA is deemed an essential mechanism for decision-making at the highest levels and contributes to sustainable development. Thérivel et al. (1992, in Thérivel and Partidário, 1996, p.4) defined SEA as "the formalised, systematic and comprehensive process of evaluating the environmental effects of a policy, plan or programme and its alternatives, including the preparation of a written report on the findings of that evaluation, and using the findings in publicly accountable decision-making". SEA follows the concept of EIA as a procedure of identification, prediction, assessment and mitigation of relevant effects on the environment. According to Lee and Walsh (1992) and Wood and Dejeddour (1992), SEA was first developed as a response to the limitations of EIA, as EIA was being applied too late in the process, and alternatives and impacts of the proposed development were not being adequately taken into account and assessed. As a process, SEA is directly linked to decision-making and is an integral part of the development of all policies, plans and programmes, with policies setting the framework for plans, which in turn set the framework for programmes, which finally set the framework for project-level development and decisions (Thérivel et al., 1992, in Jones et al., 2005). This is commonly known as a "tiered forward planning process"; it can apply to all levels of decision-making (from national, to regional, and local), and to land use planning and sectoral actions (Wathern, 1992). Evolution of SEA concepts, systems and approaches in land use planning As indicated in the previous section, the formal introduction of environmental assessment of land use plans took place in the United States, with the adoption of the National Environmental Policy Act (NEPA) in 1969. NEPA did not differentiate between SEA and EIA, nor did it use the term "strategic environmental assessment" explicitly. It did, however, introduce the term EIA as an impact assessment tool for "any major public decisions on new regulations, plans, programmes or projects" (Jones et al., 2005; Partidário, 2004), encompassing project-level decisions as well as the more strategic decisions, such as those undertaken in land use planning, but without explicitly making a distinction between project- and strategic-level assessments. The widening of the scope of environmental assessment from EIA to more strategic assessments is reflected in practice, particularly in the methodological approach to the assessment of land use plans, known as "Programmatic EIA" and EIS (Environmental Impact Statement), also referred to as regional, cumulative or generic EIS. Following the USA, environmental assessment was then introduced in Canada (1973), Australia (1974), West Germany (1975) and France (1976), though none of these systems offered a systematic approach to SEA. Within the EU the development of environmental assessment in land use planning was influenced by individual European countries' initiatives and practices.
It was not until the second half of the 1980s that environmental assessment practice in planning expanded (Wood and Dejeddour, 1992), with the creation of well-established systems in California, Western Australia, New Zealand, Canada, South Africa, and many European countries, such as the Netherlands, Italy, Germany, Finland and the UK (Fischer, 2007). The term "Strategic Environmental Assessment" was first used by Wood and Dejeddour (1989) in a study commissioned by the European Commission (EC), which then led to the formal introduction of SEA as a new EIA tool for policies, plans and programmes (the later European Directive reduced the scope of SEA to plans and programmes only). In comparison to EIA, SEA was intended to be more flexible, less quantifiable and more suitable to the reality and nature of land use planning. Two different approaches to SEA emerged: an "EIA-based" approach, which applies the EIA procedure and rationale to strategic documents, as the only fundamental difference between the two tools is the level of application (Thérivel and Partidário, 1996); and a "plan-based" approach, designed to respond to the comprehensive and multiple purposes, forward-looking, and uncertain nature of spatial/land use planning (e.g. Lee and Walsh, 1992; Thérivel et al., 1992; Wood and Dejeddour, 1992; Sadler and Verheem, 1996; Thérivel and Partidário, 1996; Partidário, 2000; Partidário, 2004). Other terminologies associated with the plan-based approach were coined, and include Regional EIA, Strategic Environmental Assessment Analysis, Environmental Appraisal of Development Plans, Sustainability Appraisal of Regional Planning, Strategic EIA, and Programmatic Environmental Assessment (Partidário, 2004). The EIA-based model is mostly applied in the USA, the Netherlands, Italy, South Africa, California and Germany; Canada, New Zealand, the UK and the Scandinavian countries have adopted the strategic plan-based approach (Verheem, 1992). The most dynamic expansion of SEA applied to land use planning occurred in Europe in the 1990s, when the EU's 5th Environmental Action Programme was approved in response to the perceived failure of existing regulatory measures to achieve the European Community's environmental standards. Draft versions of SEA frameworks were therefore developed, with mostly accession countries (including Poland, Hungary, Czech Republic, Slovakia, Slovenia, Estonia, Latvia, Lithuania) taking part in numerous pilot SEA projects of regional development programmes, often as a condition for accessing structural funds resources. Since then, SEA has been applied in various countries, with differences in sectoral areas of application, in the range of information collected, in public participation requirements, and in the way in which SEA findings are taken into account in policy-, plan- and programme-making and approval processes, resulting from different countries' legislative and planning frameworks (Lee and Walsh, 1992; Sadler and Verheem, 1996; Thérivel and Partidário, 1996; CEC, 1998; Elling, 2000; Kleinschmidt and Wagner, 2000; Platzer, 2000; ICON, 2001). These differences also extend to different countries having different decision-making cultures and traditions, particularly in terms of the way in which environmental issues are taken into account. But these differences suggest that, for better and more effective decision-making, SEA should always be tailored to context-specific planning needs.
Non-EU countries have also introduced formal requirements for land use planning SEA, including China, South Korea, Norway, and the NIS countries (Russia, Belarus, Ukraine, Kazakhstan, Turkmenistan, Armenia, Georgia, Moldova, Azerbaijan, Kyrgyzstan, Tajikistan, Uzbekistan). The NIS countries in particular have SEA elements that are based on the State Environmental Review (SER) system, established in the USSR in the mid-1980s together with the so-called OVOS (assessment of environmental impacts) requirements. Only Ukraine shows a high compatibility with the EU approach outlined in the SEA Directive (Cherp, 2001; Klees et al., 2002). The driving forces behind the development of SEA in NIS countries came mainly from international banks (World Bank) and international initiatives, such as the Sofia Initiative, which aimed to demonstrate the benefits of applying SEA to business development, the community and the environment. One of the important milestones in the development of SEA within Europe and beyond is the adoption of the European Directive 2001/42/EC of June 27, 2001, on the assessment of the effects of certain plans and programmes on the environment (the so-called SEA Directive). The Directive does not use the term SEA explicitly; it instead refers to the environmental assessment of all kinds of land use plans establishing a framework for future development consent of projects. The Directive also requires SEA for plans subjected to assessment under the Habitats Directive, though it excludes minor modifications to existing plans and programmes and small-area plans not having significant environmental effects. Further, the SEA Directive recognises the concept of tiering and establishes procedural steps that mirror those outlined in the EIA Directive(s), such as scoping, the consideration of alternatives, consultation and public participation requirements (including transboundary consultation), environmental report preparation, the consideration of assessment results in decision-making, and monitoring and follow-up requirements (Jones et al., 2005). Similarly to EIA, the SEA Directive also requires the development of a report of sufficient quality, including a "statement summarising how environmental considerations have been integrated into the plan and how the environmental report and the results from public consultations have been taken into account" (EC, 2001). According to Dalal-Clayton and Sadler (2005), the SEA Directive is probably the best known SEA framework law, and together with the SEA Protocol and the Espoo Convention (UN ECE, 2003) it influenced not only EU countries but also stands as a "reference point" for countries in Asia, Africa and South America. The biggest influence of the SEA Protocol has probably been in the UNECE countries. The implementation of the SEA Directive was accompanied by complications, illogicalities and duplications, as many EU countries had pre-existing SEA approaches and experiences, and, as already stated, different planning systems. According to Partidário (2004), the approved version of the SEA Directive eliminated the efforts and expectations for a more planning- and policy-oriented evaluation tool, that is, a truly strategic instrument for EU member states. Instead, the Directive clearly represents a highly structured and technically oriented EIA-based model, as it mostly follows the procedural nature and layout of the EIA Directive.
The Directive was also not very strict or prescriptive in telling individual member states how SEA should be introduced, that is, whether as an amendment to EIA legislation, via separate SEA legislation, or within planning legislation (ibid.). However, Annex 1 of the Directive did "strictly" list the information that should be considered and elaborated upon in the required Environmental report. EU member states were obliged to implement the Directive by the end of July 2004. In many countries this meant modifying existing legislation and preparing accompanying guidelines. With the exception of Portugal, Greece and Luxembourg, the Directive was implemented in all EU countries by June 2006 (Fischer, 2007). The second implementation report on Directive 2001/42/EC noted that the Directive does not lay down any measurable environmental standards. It is rather a process directive, which establishes certain steps that Member States must follow when identifying and assessing environmental effects. The report further stated that all EU member states had transposed the SEA Directive into their legal and administrative structures and arrangements (for example through specific national legislation or integration into existing provisions). Since 2007, more than half of EU member states have amended their national legislation transposing the SEA Directive to ensure that their national provisions fully comply with the Directive and to resolve cases of incorrect application. The transposition of the Directive occurred in different ways in different countries, setting the legal foundations for different types of SEA systems. The nature of the legal requirements used for the transposition of the Directive varies, from ministerial decisions to official regulations at the national, regional and local level, depending on the degree of centralisation/decentralisation of land use planning in different countries. Following Fischer (2007), these include: • Explicit SEA-specific framework laws: UK, Denmark, Spain, Ireland, Malta, Cyprus, Finland and Hungary (the latter two not in combination with land use planning), • Amendments to existing EIA regulations: Belgium, Estonia, Latvia, Czech Republic, • Amendments to existing EIA regulations in combination with amendments to land use planning legislation: Slovakia, Poland and Germany, • Amendments to an Environment Code: The Netherlands, Slovenia, Italy, • Amendments to an Environment Code in combination with amendments to land use planning legislation: Sweden, Lithuania, France, • Amendments to land use planning legislation: Austria. Several countries prepared their own specific guidelines for land use planning SEA, e.g. the UK, Sweden, Finland, Denmark, Poland, Ireland and Hungary. Thanks to the activities of development banks (World Bank), international aid organisations (UNDP, OECD) and donor agencies, vast experience with SEA has developed in more than 30 developing countries (Dalal-Clayton and Sadler, 2005). SEA practice in land use planning is now well-established, with the literature populated by practice reviews and numerous case studies covering different sectoral applications, emphasising different procedural aspects or requirements, or, more simply, practice in different regions across the globe. Evaluations of European SEA practice have also been conducted by the EU, with the first evaluation reports focusing on the Directive's formal requirements (Lee and Hughes, 1995), and more recent evaluations on SEA quality and effectiveness. The evaluations conducted by Jones et al.
(2005), for example, differentiate between so-called process input and output criteria. While the former are represented by evaluations of legal and institutional arrangements, SEA procedures and methods, the latter refer to the evaluation of SEA against the goals set, or SEA contributions to good land use planning practices. The most recent evaluation of the effectiveness of the SEA Directive was adopted by the Commission in 2017, following the previous report published in 2009. The 2017 report examined the application of the SEA Directive across EU Member States using five criteria: effectiveness, efficiency, relevance, coherence and EU-added value (https://ec.europa.eu/info/law/better-regulation/initiatives/ares-2017-3481432_en#initiative-details). SEA And Land Use Planning - Rationale And Potential Benefits SEA can assist land use planning in many ways. Sustainable development is the common objective of both land use planning and SEA, and both are instrumental in achieving it (Partidário, 2000). As noted by Wood (1992), land use planning is the area to which environmental assessment is most commonly applied. SEA can deliver environmental improvements and raise environmental awareness in land use planning, and it can also help reduce the negative and enhance the positive environmental impacts associated with the implementation of spatially relevant plans (Jones et al., 2005). Other reasons for applying SEA to land use planning are (Brown and Thérivel, 2000; Sadler, 2001a; Sadler, 2001b; Owens and Cowell, 2002; Thérivel, 2004, in Jones et al., 2005; Partidário, 2004): • SEA can evaluate the consistency and compatibility between aims, strategies and policies of a particular plan, stressing potential linkages, while identifying potential conflicts and interactions, • SEA can improve the environmental quality of planning policies, • SEA can raise awareness of environmental impacts, • SEA can inform stakeholders of the environmental impacts of strategic decisions, • SEA can help to avoid delays in plan implementation by highlighting how environmental issues have been taken into account during decision making, • SEA can identify issues to be monitored during the implementation of plans, • SEA can improve the green image of planning authorities, • SEA can facilitate the earlier consideration of environmental impacts, the examination of a wider range of potential alternatives, the generation of mitigation measures and the potential to address a wider range of impacts, • SEA has the potential to streamline the EIA process by focusing on the most significant project issues. Often, planning practitioners claim that land use plans already meet many of SEA's requirements. This can partly be true, as many pieces of national and European environmental and nature conservation legislation overlap, leading to confusion in planning and approval procedures and to a waste of time and money (Hoppenstedt, 2003). As noted by Wood (1992), in many countries land use planning systems already included a number of elements relevant to SEA within their respective plan-making processes, prior to the introduction of the Directive.
These include, for example, the statutory recognition of environmental goals within the broad plan-making context, planning documentation already containing baseline analysis, indications of future prospects and alternatives, policies for environmental improvement, public participation procedures, as well as consequent revision of the plan during subsequent stages of the planning process. Planning practitioners also claim that when conducting land use planning SEA, conflicts between environmental protection and sectoral or developmental interests can emerge, but cannot be solved within the SEA process. While SEA can enhance the transparency and comprehensiveness of decision-making and make these conflicts explicit, they ultimately require political solutions. The systematic, documented and evidence-based nature of SEA should help inform decision-making, even if the decision made is a political one. Application Of Landscape Impact Assessment Within SEA And Land Use Planning Following LI and IEMA (2013), the principles for landscape impact assessment practice determined at project-level EIA can be applied to the plan (programme, policy) level, and therefore to SEA. An advantage of conducting landscape impact assessment in SEA is the consideration of cumulative effects of potential development proposals at very early stages of land use planning. There are several approaches to landscape impact assessment in SEA, which depend on the planning traditions and frameworks of individual countries. The approach described by LI and IEMA (2013) and SNH (2007) is based on the identification of landscape change and of the forces underpinning that change. When conducting an LIA in SEA, a land use plan (programme or strategy) is evaluated against criteria relating to: • the conservation and enhancement of a landscape's character and scenic value, • the protection and enhancement of the landscape everywhere and particularly in designated areas, • the protection and enhancement of diversity and of a landscape's local distinctiveness, • the improvement of the quantity and quality of publicly accessible open space, • the restoration of landscapes degraded as a consequence of past industrial activity. In SEA it is not possible to assess landscape change with the same level of detail required in an EIA. At the strategic level, the scope of SEA is limited to identifying potential broad changes in landscape characteristics such as landform, land use and land cover, the relationship between landform and land use, field pattern and boundaries, buildings and structures in the landscape, settlement patterns, as well as landscape visual quality (SNH, 2007). However, similarly to EIA, Landscape Character Assessment (LCA) can be embedded within the SEA process, as it provides a baseline against which change can be assessed and monitored. Landscape capacity and sensitivity studies are also influential in informing baseline studies of landscape impact assessment within SEA. Other approaches used in landscape-based SEA are associated with different forms of landscape planning conducted as part of a land use planning process (or a separate landscape planning procedure that is consistent and compliant with the land use planning procedure). According to Haaren et al. (2008) and others (Hoppenstedt, 2003; Schmidt et al., 2005), landscape planning belongs to a set of instruments that supports the effective consideration of landscape in SEA.
In these cases, landscape planning can significantly contribute to the application of SEA-based landscape impact assessment by providing guidance on the current status and future landscape development of a particular spatial area. The requirements of the SEA Directive and landscape planning documentation overlap to a certain extent. In many countries, landscape planning is dependent on objectives from, and embedded within, other environmental and/or sectoral planning (e.g. water management, agriculture, air pollution, supply and disposal). In addition to setting objectives for nature conservation, landscape planning also acts as a framework for assessing all relevant environmental objectives to establish a more consistent and coherent system of objectives. Another task of landscape planning is to develop scenarios for site identification (for example residential development or soil degradation) and to take them into consideration. Hanusch and Fischer (2011) reviewed possible linkages and benefits between SEA and landscape planning instruments in Germany, Canada, Ireland and Sweden. The analysis focussed on objectives, contents, methods and procedures; their findings are that: • landscape plans and SEA act as advocate instruments for the environment, • SEA and landscape planning aim at integrating considerations on the environment, nature, biodiversity and landscape into decision-making and planning, • there are many overlaps between the contents of an SEA environmental report, such as the collection of environmental baseline data, the outline of environmental objectives and the assessment of likely significant effects, and the baseline data included in landscape plans. As such, landscape plans can function as a comprehensive information source for SEA, • landscape planning can contribute to impact analysis and evaluation as well as alternatives assessment and compensation measures, • there is a range of procedural linkages between SEA and landscape planning, for example the timing of planning procedures, alternatives, public participation and monitoring. Procedural Steps In Landscape Impact Assessment Within Strategic Environmental Assessment Of Land Use Plans Similarly to EIA, landscape impact assessment can be conducted as part of Strategic Environmental Assessment. The SEA and EIA procedures are very similar, but there are some differences (EC, 2017): • SEA requires environmental authorities to be consulted at the screening stage, • SEA requires an assessment of reasonable alternatives (under the EIA the developer chooses the alternatives to be studied), • the SEA Directive obliges EU Member States to ensure that environmental reports are of a sufficient quality. Following the SEA Directive, the process of landscape impact assessment within the SEA process includes the following procedural stages (Jones et al., 2005; SNH, 2007; CCW, 2007): Screening: it aims to consider whether SEA is required or not. To answer this question, it is helpful to look at the purposes of a land use plan and at expected impacts. Within the EU, Articles 3(4) and 3(5) of the SEA Directive establish the process of determining whether plans are likely to have significant environmental effects and thus require an SEA. Member States have to take into account the significance criteria set out in Annex II and presented here in Box 2.2. Box 2.2: Criteria for determining the likely significance of effects referred to in Article 3(5).
1. The characteristics of plans and programmes, having regard, in particular, to the degree to which the plan or programme sets a framework for projects and other activities, either with regard to the location, nature, size and operating conditions or by allocating resources [the remaining Annex II criteria are not reproduced here]. This stage also aims at examining the goals and objectives of the plan and its purpose against landscape criteria, considering several questions: a) are the environmental problems in the plan area related directly or indirectly to its landscape? If so, does the plan make a significant contribution to resolving those problems or does it significantly exacerbate them? b) what is the magnitude and spatial extent of effects on the landscape, including the geographical area likely to be affected? c) what is the magnitude and spatial extent of effects on people's enjoyment of the landscape, including the number of people likely to be affected in the context of their sensitivity to change in the landscape? d) what is the value of the landscape likely to be affected and its vulnerability to change due to its special natural characteristics or cultural heritage (e.g. wildness)? e) what are the effects on areas or landscapes which have a recognised local, regional, national, EC or international protection status? f) what is the probability/likelihood or risk of these effects on the landscape occurring and being significant if they occurred? The involvement of the public and other stakeholders is an integral part of the screening step. Scoping/baseline studies/objectives and targets: Scoping is the stage of the SEA process that determines the content and extent of the matters to be covered in the SEA report to be submitted to a competent authority. Considerations about whether a plan meets the requirements of relevant policies, landscape protection objectives, international targets, etc. are included. Defining landscape objectives, indicators and checklists is a critical element within the scoping stage. It is the setting of the environmental 'objectives' and subsequent 'tests' against which the emerging plan will be assessed. The environmental objectives are usually adopted from international, EU and national policy frameworks, and complemented by objectives tailored to more local landscape policy frameworks. Alternatives to a proposed land use plan should be identified and assessed in terms of their costs, benefits and landscape impacts. Key landscape issues should also be identified. During this stage a series of landscape impact assessment objectives/criteria are developed against which the plan's performance is predicted. Very often targets and indicators based on landscape (environmental) criteria can be used for monitoring the implementation of a plan. Data about the present state of landscape conditions are gathered and analysed. Impact prediction/assessment: Impact prediction is based on landscape impact assessment objectives and criteria. Predictions should be made with the help of baseline landscape data. Impact prediction very often involves both subjective and objective assessment. Mitigation measures are part of this stage. Annex II of the SEA Directive recommends assessing impacts in terms of a number of criteria listed in Box 2.2, which can then result in a more detailed classification of impacts according to pre-defined criteria. Table 2.4 provides an example of such a classification. The intensity of impacts is one of the most frequently used indicators, and is often expressed numerically.
Some other examples of landscape impact assessment are provided in Tables 2.5-2.8. Table 2.5 in particular shows an example of a hierarchy of intensity of effects on the landscape, and possible combinations of intensity of impacts with the nature of impact on the landscape. Table 2.6 is an example showing the impact on landscape scenery based on the sum (the mathematical sum of intensity values) of the likely impacts on landscape image, scenery and characteristic landscape appearance. The impact is expressed by the degree of impact. The resulting expected impact on the landscape scenery therefore represents either a rate of visual collision of new objects with the landscape's current appearance, or a measure of positive contribution to the landscape's scenery. The visibility of individual objects in a landscape panorama and the extent to which they contribute to visual perceptions of the territory are important aspects to consider when assessing impacts on scenery, as illustrated in Table 2.7. Examples of levels of significance of potential impacts on the landscape image are illustrated in Table 2.8. [Table 2.4 excerpt: classification criteria include impact frequency (regular or irregular; continual); impact spatial area (local, regional, national, global); impact intensity (ranging from very important negative impact, through important and low-importance negative impact and no impact, to low-importance, important and very important positive impact); and impact degree (individual, synergic, cumulative).] [Table 2.5: Numerical expression of an "intensity of the impact" and possible combinations of intensity of impact with the nature of the impact on the landscape. Source: Pauditšová (2014).] [Table 2.6 excerpt (Pauditšová, 2014): degrees of impact range from "essential to critical" (interval 〈-9; -8): extreme degradation of the landscape scene at a regional scale, visible from long distances and from all observation points), through "very important" (-5: strongly visible degradation of the landscape scene), up to positive degrees in which the object(s) constitute a major dominating element that radically changes the landscape panorama in a positive way, contributing to its increased attractiveness and very well visible in landscape panoramas (except when, in exceptional weather conditions, visibility is minimal).] [Table 2.7 excerpt: example visibility descriptor: object(s) representing the action (plan) are well visible, clearly visually distinguishable from other landscape features, and form a major positive landmark that radically changes the landscape image; prominent and visible from many observation points, to a distance of over 20 km, positively enlivening the landscape mosaic.] A minimal numerical sketch of this intensity-summation approach follows this paragraph. Environmental report: According to the SEA Directive, a publicly available SEA report should be prepared to document the main findings of landscape impact assessment within SEA, together with a non-technical summary. The report should be available for public inspection as part of the land use planning documentation. The minimum requirements for SEA report content include a description of plan proposals and their alternatives; a description of the baseline environment; the significant environmental impacts of plan proposals and alternatives; the timescale of predicted impacts; mitigation measures; and comments on assessment problems and uncertainties. As indicated in the previous section, landscape planning can contribute to impact analysis and evaluation as well as alternatives assessment and compensation measures (see Box 2.3).
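The summation approach of Tables 2.5 and 2.6 can be made concrete with a short sketch. This is a minimal illustration, not the published method: the three component names, the assumed per-component range of -3 to +3, and all band boundaries other than the 〈-9; -8) interval quoted above are assumptions introduced here for demonstration.

```python
def scenery_impact(image: int, scenery: int, appearance: int) -> str:
    """Sum per-component intensity scores (negative = adverse, assumed
    range -3..+3 each) and map the total to a qualitative degree of
    impact, in the spirit of Tables 2.5 and 2.6."""
    total = image + scenery + appearance
    if -9 <= total < -8:   # the <-9; -8) interval quoted from the source
        return "essential to critical degradation"
    if total <= -5:        # assumed band
        return "very important, strongly visible degradation"
    if total < 0:          # assumed band
        return "moderate adverse effect"
    if total == 0:
        return "no effect"
    return "positive contribution to landscape scenery"  # assumed band

# Worst-case adverse scores on all three components sum to -9.
print(scenery_impact(image=-3, scenery=-3, appearance=-3))
```

The point of such a scheme is that the final degree of impact is reproducible from the component scores, so two assessors disagreeing about the result can trace the disagreement back to a specific component judgement.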
Box 2.3: Corresponding contents of an SEA report and a regional landscape plan.
SEA report: a) an outline of the contents and main objectives of the plan or programme and its relationship with other plans and programmes; b) the relevant aspects of the current state of the environment and the likely evolution thereof without implementation of the plan or programme; c) the environmental characteristics of areas likely to be significantly affected.
Regional landscape plan: a) an outline of environmentally relevant objectives (e.g. priority areas and reservation areas, as well as spatially relevant projects of the regional land use plan); b) landscape analysis regarding aspects of soil, water, air/climate, fauna/flora, natural scenery and cultural assets, with a prognosis of the likely evolution of the state of the environment without implementation of the plan; c) assessment of the sensitivity of an area on the basis of the landscape analysis and as a condition for spatial development and project alternatives.
Monitoring: Monitoring allows the results of the environmental assessment to be compared with the outcomes of the implementation of plans and programmes, in particular the significant environmental effects. The SEA Directive does not prescribe the exact arrangements for monitoring the significant environmental effects, the frequency of the monitoring, its methodology, or the bodies in charge of monitoring. Monitoring can be based on standard monitoring indicators, sometimes set in national legislation, or be conducted on a case-by-case basis. Environmental monitoring arrangements set up under other Directives, such as the Water Framework Directive, the Habitats Directive and the Industrial Emissions Directive, can be helpful at this stage. Landscape Impact Assessment In The Context Of Environmental Health And Climate Change Landscape impact assessment in the context of environmental health should be part of the assessment of environmental impacts of policies, plans, or projects. The concept of environmental health emerged from the terms human health and public health. The public health of the population is determined not only by the quality of health care services, but also by economic, social, psychological and environmental factors. As already noted, there is a link between landscape impact assessment and environmental health. The World Health Organization, when defining environmental health, takes as a basis the quality of life of an individual: "It is the individual perception of one's position in life, in the context of culture and of the value system, in which the individual lives. Quality of life expresses the relationship of individuals to their own objectives, expected values and interests. It includes, in a comprehensive manner, the somatic health of an individual, mental state, level of independence from the surroundings, social relationships, an individual's faith, and that all in relation to the main characteristics of the environment." (The WHOQOL Group, 1995). According to the International Association for Impact Assessment, the assessment of impacts on health is part of environmental impact assessment, which includes impacts on landscape. It defines Health Impact Assessment (HIA) as a combination of procedures, methods and tools by which a policy, plan, programme or project in different economic sectors may be assessed as to its potential effects on the health of a population, using quantitative, qualitative and participatory techniques (IAIA, 2006).
The knowledge that the policies and strategies of various sectors can have serious impacts on health and on the occurrence or prevention of diseases has led to a more integrated approach to the consideration of health in the countries of the European Union. The aim of HIA is to improve the understanding of the potential impact of a policy, programme or project on health and to present adequate information to managing entities and to the people affected by the given programme or project (activity). The result should be, on the one hand, to adapt the proposed policy, programme or project in order to reduce or minimise the likely negative effects and, on the other hand, where possible, to increase the positive effects (Halzlová and Drastichová, 2014). From such a perspective, the evaluation of expected impacts on the landscape is a substantial step in both spatial policy making and planning, and requires standardisation of the assessment procedure. Despite international efforts, the assessment of impacts on the health of the population has remained a matter of national interest. HIA is voluntary at the European level, though EU member states can set their own requirements (see the note on Slovakia and the UK below). In the processes of EIA and SEA, landscape represents a separate item, whether within the territorial characteristics or in the stage of identifying the predicted impacts. In other respects, however, the landscape mirrors a space in which all the processes of the individual components of the environment are under way. For this reason, cumulative effects on the landscape need to be emphasised when assessing the impact of projects (plans) on the landscape in terms of climate change impacts. In addition, climate phenomena have intersectoral impacts, so the effects on the individual parts of the landscape overlap. The landscape is a frame in which the impacts can be brought together and then determined in detail at the component level. On this basis, and as reflected in the national legislation of many European countries, it makes sense for landscape impact assessment to encompass the risks arising from climate change. The Commission proposal for a revised EIA Directive adopted on October 26, 2012, for example, includes an appeal for integrating climate change and biodiversity into environmental impact assessments. The idea of assessing scenarios of biodiversity development within the context of a changing climate directly supports the idea of landscape assessment as a space where all processes take place and impacts are assessed. The EIA Directive shows not only how climate change is clearly referenced in the legislation, but also that it should be given more weight in light of the Directive's preventive intent or 'spirit'. It also raises the benefits and challenges of integrating climate change into EIA. The EIA Directive contains a number of principles that provide the basis for considering climate change in EIA, even though it does not refer to either term explicitly. In line with Article 191 of the Treaty on the Functioning of the European Union (The Treaty on the Functioning of the European Union, 2010, p. 47), the Directive clearly sets out to prevent damage to the environment rather than merely counteract it. The EIA Directive has a wide scope and a broad purpose (Guidance on Integrating Climate Change and Biodiversity into Environmental Impact Assessment, 2013) and therefore needs to be interpreted as such.
The 2012 Commission proposal for the revised EIA Directive (Proposal for a Directive of the European Parliament and of the Council amending Directive 2011/92/EU) strengthened the provisions related to climate change and biodiversity. [Note: for example, since 2014 Slovakia has strengthened its legal system in matters of impact assessment on public health with the Decree of the Ministry of Health of SR no. 233/2014 Coll. In the UK, planning practice guidance clearly states that planning has an important role in promoting the health and wellbeing of communities, and the importance of this role is emphasised by the number of links between planning and health in the National Planning Policy Framework.] Assessing the risk of climate change, as well as the resilience and vulnerability of a project or plan to climate change, is important and should take into account: a) the specific geographic area (local impacts) in which the proposed project is to be implemented (e.g. whether the area is susceptible to erosion, landslides, earthquakes, etc.); b) the specific climatic events that have taken place in the past (e.g. extreme precipitation/storms, wind, extreme temperatures, as well as temperature changes); c) the characteristics of the specific project or plan. Both climate change and landscape involve complex systems and interact with people. Since we cannot fully understand all aspects of complex systems at the point at which we make decisions, we need to be able to use what we have (e.g. available studies, reports, databases and other sources of information). After identifying the specifics of the territory, it is necessary: • to evaluate the current state of the risks and to assess their future state, that is, what can be expected in the future regarding climate change, how the proposed project or plan will respond to climatic changes, and which risks can be expected (type, intensity, frequency, and possibly worst- and best-case scenarios), • to identify and assess possible adaptation measures, such as how well a project or plan adapts to the implications of climate change (e.g. by developing an emergency plan of what to do in a climate event, or whether particular considerations need to be made in the construction phases or in the choice of materials used), • to identify how the operation and maintenance of the project, plan or programme adapts to climate change/risk, and whether specific requirements should be proposed. The aim of this procedure is to reduce the risks and to integrate the adaptation plan into the development of a project/plan subjected to environmental assessment. Climate change and landscape issues should be included in EIA and SEA processes during both the screening and scoping stages. The issues and impacts relevant to a particular EIA or SEA will depend on the specific circumstances and context of each project/plan (e.g. location, characteristics of the environment, etc.). Three steps are particularly important (McGuinn et al., 2013; a minimal screening sketch follows this list): • to identify key issues early on, with input from relevant authorities and stakeholders, • to determine whether the project (plan) may significantly change greenhouse gas emissions, and if so, to define the scope of the necessary greenhouse gas assessments (climate mitigation concerns), • to be clear about the climate change scenarios used in the EIA or SEA, so that the key climate change adaptation concerns can be identified, as well as how they interact with other issues considered within an EIA or SEA.
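The screening steps above lend themselves to a simple, documented scoring routine. The sketch below is purely illustrative: the 1-5 scales, the intensity-times-frequency rule, the significance threshold, and the example risks are assumptions for demonstration, not requirements of the EIA/SEA Directives or of McGuinn et al. (2013).

```python
def screen_risk(risk_type: str, intensity: int, frequency: int,
                adaptation_in_design: bool) -> dict:
    """Score one climate-related risk on assumed 1-5 scales and flag
    whether it warrants further assessment at the scoping stage."""
    score = intensity * frequency          # assumed rule; worst case = 25
    significant = score >= 10 and not adaptation_in_design  # assumed threshold
    return {"risk": risk_type, "score": score, "assess_further": significant}

# Hypothetical risks for a project in a flood-prone valley.
for risk in [("extreme precipitation", 4, 3, False),
             ("heat waves", 2, 2, True)]:
    print(screen_risk(*risk))
```

Even a crude register like this supports the evolving-baseline idea discussed below: the same risks can be rescored as new climate scenarios or monitoring data become available during the plan's lifespan.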
Involving relevant authorities and stakeholders at an early stage of the assessment process will make it possible to capture the most important issues and establish a consistent approach for assessing impacts and for formulating solutions, or better, recommendations. Following McGuinn et al. (2013), making use of the knowledge and opinions of environmental authorities and stakeholders can help to highlight potential areas of contention and areas for improvement in a timely and effective way. Furthermore, it can provide information on relevant forthcoming projects, policies and legislative or regulatory reforms, and other types of assessments that should be considered when analysing evolving baseline trends; and finally, it can help collect suggestions for building climate change mitigation and adaptation measures and/or landscape quality (ecological quality, visual quality etc.) enhancement schemes into the proposed project or plan from the very beginning. When addressing climate change adaptation concerns as part of EIA and SEA, climate data and scenarios must be taken into account. A clear description of the climate change scenarios facilitates discussion on whether the expected climatic factors should be considered in the project (plan) design. Also, it is important to review any existing adaptation strategies, risk management plans and other national or sub-regional studies on the effects of climate change, as well as proposed responses and available information on expected climate-related effects relevant to a project or strategic plan. Figure 2.4 shows the steps of the EIA and SEA processes with a set of questions related to specific climate change topics. Addressing climate change in EIA/SEA makes it easier to comply with the EIA/SEA Directives and relevant national laws. Member States are also likely to have a suite of legislative instruments relevant to climate change and landscape protection (e.g. planning policies that avoid developing flood-prone areas). Europe's infrastructure needs to be adapted to better cope with natural phenomena caused by climate change and with their negative impacts on the landscape. This means considering that the parameters identified at a project's inception may no longer be valid at the end of its potentially long lifespan. This idea is important for a shift in thinking, from the traditional assessment of environmental impact to taking possible long-term risks into account. Plans and projects need to be assessed against an evolving environmental baseline. SEA and EIA should show an understanding of how the changing baseline can affect a plan or project and how they may respond over time. The EIA and SEA processes are particularly important since they can help set the context for the identification of potential climate change impacts (including disaster risks in the landscape).
Maize Genome Structure Variation: Interplay between Retrotransposon Polymorphisms and Genic Recombination Although maize (Zea mays) retrotransposons are recombinationally inert, the highly polymorphic structure of maize haplotypes raises questions regarding the local effect of intergenic retrotransposons on recombination. To examine this effect, we compared recombination in the same genetic interval with and without a large retrotransposon cluster. We used three different bz1 locus haplotypes, McC, B73, and W22, in the same genetic background. We analyzed recombination between the bz1 and stc1 markers in heterozygotes that differ by the presence and absence of a 26-kb intergenic retrotransposon cluster. To facilitate the genetic screen, we used Ds and Ac markers that allowed us to identify recombinants by their seed pigmentation. We sequenced 239 recombination junctions and assigned them to a single nucleotide polymorphism-delimited interval in the region. The genetic distance between the markers was twofold smaller in the presence of the retrotransposon cluster. The reduction was seen in bz1 and stc1, but no recombination occurred in the highly polymorphic intergenic region of either heterozygote. Recombination within genes shuffled flanking retrotransposon clusters, creating new chimeric haplotypes and either contracting or expanding the physical distance between markers. Our findings imply that haplotype structure will profoundly affect the correlation between genetic and physical distance for the same interval in maize. INTRODUCTION Maize (Zea mays) has a highly polymorphic genome structure (Song and Messing, 2003; Wang and Dooner, 2006). Retrotransposons, which constitute the bulk of the genome, differ among lines in their makeup and location relative to genes. Consequently, the pattern of interspersion of genes and retrotransposons varies from line to line, defining sharply distinct haplotypes. The extent of sequence variation in the bz1 genomic region is remarkable. In a recent vertical comparison of eight bz1 locus haplotypes, any two haplotypes shared between 25 and 84% of their sequences (Wang and Dooner, 2006). Haplotypic variation is common in the genome (Song and Messing, 2003; Brunner et al., 2005; Yao and Schnable, 2005) and could lead to huge differences in estimates of genetic distance in different backgrounds. This does not happen because the variable retrotransposon component of the genome is recombinationally inert (Yao et al., 2002). Yet, twofold to threefold variations in estimates of map distances for single genetic intervals have been reported in several maize mapping populations (Beavis and Grant, 1991; Fatmi et al., 1993; Williams et al., 1995). Much of this variation is probably due to trans-acting modifiers, such as the recently demonstrated quantitative trait loci that affect global recombination frequencies in recombinant inbred lines (Esch et al., 2007), but some of it may be due to cis-acting factors. Cis-acting factors were demonstrated in a study that examined recombination rates across the a1-sh2 genetic interval in three heterozygotes containing the same maize haplotype and different teosinte-derived haplotypes in a common maize background (Yao and Schnable, 2005).
This region measures 130 kb in maize inbred UE85, and although most of the intervening teosinte DNA between a1 and sh2 was not sequenced, several large insertion/deletion (indel) polymorphisms relative to maize, including two LTR retrotransposons, one MITE transposon, and one hAT transposon, were uncovered in the sequenced region. The analysis identified up to threefold differences in recombination rates and statistical differences in the distribution of recombination junctions across subintervals among haplotypes. Although levels of sequence polymorphism correlated negatively with rates of recombination in the sequenced region, they did not fully account for the observed results. The authors proposed that other types of cis factors, such as region-specific chromatin structure, may affect the rate and distribution of recombination across the a1-sh2 interval. The polymorphic chromosomal organization of maize, due mainly to intergenic retrotransposon variation, prompted us to ask the specific question: to what extent does heterozygosity for large retrotransposon insertions, which occurs in every mapping population, affect recombination in the adjacent genes? The highly methylated retrotransposon clusters are probably heterochromatic, as are similar blocks in the knobs of maize (Ananiev et al., 1998) and Arabidopsis thaliana (Lippman et al., 2004), and most likely affect recombination. Genes next to retrotransposon clusters may be less recombinogenic, because the more condensed chromatin state of the retrotransposon cluster may interfere with the access of the recombination machinery to the adjacent euchromatic regions. In order to investigate this, we compared recombination in the same genetic interval in the presence and absence of a large retrotransposon cluster. Earlier work found that the 1.5-kb bz1-stc1 intergenic segment in the McC bz1 locus haplotype (Fu et al., 2001) was replaced by a 26-kb retrotransposon block in the B73 haplotype (Figures 1A and 1B). We have identified a bz1 haplotype, W22, that resembles McC in its retrotransposon-gene junctions but differs from McC in many single nucleotide polymorphisms (SNPs) and indel polymorphisms. The availability of these three distinct haplotypes has enabled us to examine the effect of retrotransposon heterozygosity on recombination in the adjacent bz1 and stc1 genes. Potential trans effects in such an experiment were eliminated by first introgressing all haplotypes into a common inbred background. The confinement of recombination to the genic space in maize protects the genome from the massive disruptive rearrangements that would otherwise occur if the dispersed repetitive retrotransposons (SanMiguel and Bennetzen, 1998) recombined ectopically. However, the orderly exchange of different intergenic retrotransposon clusters by recombination between alleles should occur regularly in populations, leading to nondisruptive genomic changes that amplify the variability created by the explosive increase in retrotransposons in the recent maize ancestry. We confirm here that recombination in genes shuffles heterozygous retrotransposons flanking them, creating new chimeric haplotypes and expanding or contracting chromosomal segments. Structure of the W22 bz1 Haplotype Because of extensive polymorphisms in the content of retrotransposons, helitrons, and other transposons, the size of the bz1 region can vary by more than threefold among maize lines (Wang and Dooner, 2006).
NotI fragments containing the bz1 region of W22 are not very different in size from those of McC, yet the Bz1-McC and Bz1-W22 alleles are known to differ in >1% of their sequences (Ralston et al., 1988), so W22 is an excellent candidate for a contrasting bz1 locus haplotype lacking the retrotransposon cluster in the bz1-stc1 intergenic region. To fully characterize the W22 bz1 haplotype, the entire 238-kb Bz1-W22 genomic region was cloned as two adjacent NotI fragments in the pNOBAC1 vector (Fu and Dooner, 2000).

Figure 1 legend: Each haplotype is identified by the name of the line from which it was extracted, followed by the size of the cloned NotI (N) fragment in parentheses. The eight genes (bz1, stc1, rpl35A, tac6058, hypro1, znf, tac7077, and uce2) are shown as pentagons pointing in the direction of transcription, with exons in peach and introns in yellow. The same symbols are used for gene fragments carried by helitrons HelA and HelB, which are represented as bidirectional arrows below the line in McC and W22. The vacant site for HelA in B73 is marked with a short vertical stroke. Dashed lines represent deletions. Retrotransposons are indicated by closed triangles of different colors. DNA transposons are indicated by open triangles of red and orange. Small insertions are indicated in light blue and numbered as by Wang and Dooner (2006). Only the genes have been drawn to scale.

Sequencing confirmed that the bz1-stc1 intergenic segment of Bz1-W22 lacked retrotransposons. The structure of the distal 122-kb NotI BAC, which contains the genetic interval analyzed in this study, is presented in Figure 1C. The overall structure of the W22 bz1 haplotype is remarkably similar to that of McC (Figure 1). The two haplotypes share all of the boundaries between Helitron or retrotransposon insertions and intergenic regions, as well as several hAT, CACTA, and small DNA insertions in either intergenic regions or introns. Surprisingly, the lowest sequence variation between the two haplotypes occurs in the large intergenic region between hypro1 and znf. The percentage divergence in that 67-kb stretch is 0.06%: the two haplotypes share the same four MITEs, the same two Helitrons (although HelA in W22 has experienced a 700-bp deletion of the hypro3 gene fragment), the same Doppia DNA element, and the same 53-kb three-level retrotransposon nest distal to HelB. The 5' and 3' LTRs of each of the three retrotransposons differ in both haplotypes, allowing dating of the insertions (Ma and Bennetzen, 2004) to a time period 0.4 to 1.2 million years ago, from youngest (top) to oldest (bottom) (see Supplemental Table 1 online). Yet, five of the six LTRs in the nest are identical in sequence between W22 and McC, suggesting that this chromosomal segment in the two haplotypes derives from a very recent common ancestor. The Huck1b retroelement between the znf and tac7077 genes has diverged in W22 relative to McC by the gain of a fractured 1.9-kb Opie-like element, consisting only of the 5' LTR and the primer binding site, and of a 9.5-kb Prem2 element, estimated from its LTR sequence identity to have inserted only 60,000 years ago. These two insertions account for the 11-kb difference in size of the NotI fragment in the two lines. Using LTR sequence information from both haplotypes, Huck1b is estimated to have inserted between 1 and 1.2 million years ago, so it is about as old as Huck1a in the triple-level nest.
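The insertion dating mentioned above rests on the divergence between a retroelement's two LTRs, which are identical at insertion time. As a rough illustration of the approach of Ma and Bennetzen (2004), here is a minimal Python sketch; the substitution rate and the example numbers are assumptions, not values taken from this paper:

```python
# Hypothetical sketch: dating an LTR retrotransposon insertion from the
# divergence between its 5' and 3' LTRs (identical at insertion time).
# The substitution rate below is an assumed value for grasses.

def insertion_age_years(num_differences: int, ltr_length_bp: int,
                        subs_per_site_per_year: float = 1.3e-8) -> float:
    """Age = K / (2r): K is the per-site divergence between the two LTRs,
    r the substitution rate; the factor 2 counts both diverging copies."""
    k = num_differences / ltr_length_bp
    return k / (2 * subs_per_site_per_year)

# Made-up example: 13 differences over a 1,000-bp LTR gives K = 0.013
# and an age of ~0.5 million years.
print(f"{insertion_age_years(13, 1000) / 1e6:.2f} Myr")
```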
A comparison of the Huck1b 5' and 3' LTRs in W22 versus McC indicates that they diverged from each other between 0.4 and 0.5 million years ago, pointing to an older common ancestry for this chromosomal segment of the two haplotypes. The subsequent acquisition of retroelement sequences by Huck1b in only one of the haplotypes supports this inference. As discussed below, retrotransposon clusters are shuffled by recombination between the genes that flank them: possibly, two nonconcurrent recombination events in znf and hypro1 in the history of these haplotypes led to the replacement of the hypro1-znf intergenic region of one haplotype by that of the other. Lastly, the 1-kb hAT element in the last intron of uce2, which is present in both haplotypes, contains a second, unrelated 0.5-kb hAT element only in W22.

Remarkably, however, the sequences of most genes and nonrepetitive intergenic segments are just as polymorphic between W22 and McC as they are among other lines (Wang and Dooner, 2006). For example, the transcribed bz1 segments differ by 1.6% in their coding sequences and by 3.2%, plus one MITE indel, in their noncoding sequences (intron and 3' untranslated region). The bz1-stc1 intergenic regions differ in six MITEs or other small insertions; excluding those insertions, they differ in 4.6% of their sequences. The transcribed stc1 segments differ by 1.5% in their coding sequences and by 2.2%, plus one MITE indel, in their noncoding sequences (introns and 3' untranslated region).

Recombination between bz1 and stc1

To examine the effect of retrotransposon heterozygosity on recombination in the adjacent genes, we compared recombination between bz1 and stc1 markers in McC/B73 and McC/W22 heterozygotes. The bz1-stc1 interval in these heterozygotes differs by the presence and absence, respectively, of a 26-kb retrotransposon cluster in the intergenic region. Each haplotype was first introduced into the common genetic background of the inbred W22 to minimize background differences (see Methods). The experimental setup is diagrammed in Figure 2. The 26-kb retrotransposon cluster in B73 is represented at a much smaller scale than the adjacent bz1 and stc1 genes, which are drawn approximately to scale. The cluster is made up of a 9.4-kb Xilon retrotransposon inserted into a 1.7-kb Mu1 element, a 0.7-kb Zeon solo LTR, and a 12.8-kb Tekay retrotransposon. The McC parental haplotype carries bz1-m2(D1), a bz1 allele containing a Ds element in the second exon, and stc1-m1(Ac6087), an stc1 allele with an Ac insertion in the first exon (Shen et al., 2000). The bz1-m2(D1) allele produces a spotted phenotype in the presence of Ac and a stable bronze phenotype in its absence. Flanking bz1-m2(D1) and stc1-m1(Ac6087) are the endosperm mutations sh1 and wx1, which serve as recombination markers in the experiment.

Figure 2 legend: The cartoon depicts the spotted (bz-m) and solid purple (Bz) parental phenotypes at left and the solid bronze (bz) recombinant phenotype at right. The sh1 flanking alleles condition either shrunken or plump seeds; the wx1 flanking alleles condition either waxy or nonwaxy seeds (shown here as staining light or dark with iodine for diagrammatic purposes only). Recombination anywhere between the Ds2(D1) element in bz1 and the Ac6087 element in stc1 gives rise to Sh bz recombinants, most of which will also carry wx. Recombination between Ac6087 and sh1 gives rise to Sh bz-m recombinants. The reciprocal recombinants of both classes are sh Bz. The stc1 and bz transcripts are indicated by the wavy arrows. A 26-kb retrotransposon cluster containing Tekay, a Zeon solo LTR, and a Mu1-Xilon nest separates stc1 from bz in the Bz-B73, but not the Bz-W22, haplotype. The small triangles represent indel polymorphisms (Ins3, Ins5, and Ins6 [Wang and Dooner, 2006]) used to assign Sh bz recombinants to one of four intervals within the larger Ds2(D1)-Ac6087 interval.
(Note that the marker order in Figure 2, with the 9S centromere to the left, is the same as in Figure 1 and previous publications [Wang and Dooner, 2006] but opposite to the more common way of presenting 9S markers with the centromere to the right.) The other parental haplotype, either B73 or W22, carries normal Bz1 and Stc1 alleles and produces a purple phenotype. Flanking Bz1 and Stc1 are the contrasting markers Sh1 and Wx1. The Stc1-W22 Bz1-W22 Wx1 heterozygotes were pollinated with a sh1-bz1-X2 wx1 stock, which carries a deletion of the sh1-bz1 region, including the entire bz1-uce2 interval depicted in Figure 1 (Mottinger, 1973; Shen et al., 2000). Use of this deletion allows the recovery of selections in a hemizygous condition, greatly simplifying their molecular analysis.

The above test crosses will produce crossover haplotypes conditioning a plump bronze (Sh bz) seed phenotype if recombination occurs between Ds and Ac, as illustrated by the heavy lines in Figure 2. Most of these exceptions will carry a Sh1 wx1 crossover arrangement of flanking markers. Other exchanges between sh1 and bz1 are also identifiable. Crossovers between Ac6087 and sh1 will produce plump, spotted (Sh bz-m) seed, and the reciprocal crossover class of both the Sh bz and Sh bz-m classes will be shrunken and purple (sh Bz). Thus, the sum of Sh bz and Sh bz-m crossovers should equal the number of sh Bz crossovers. Furthermore, the ratio of Sh bz to Sh bz-m kernels should not vary from family to family within a heterozygous haplotype genotype. This expectation provides an internal check that Ac did not transpose from stc1 in any of the McC parent plants. A χ² analysis (data not shown) revealed that the data were homogeneous for different families of the same genotype and could be combined. The pooled data are shown in Table 1. As can be seen, the reciprocal crossover classes Sh (bz + bz-m) and sh Bz occur in approximately equal numbers in both heterozygotes.

The length of the sh1-bz1 interval, estimated from the sum of the last two columns, is significantly greater in the McC/W22 heterozygote than in the McC/B73 heterozygote (2.52 versus 1.89 centimorgan [cM]). In particular, the frequency of the Sh bz class, which provides a rough estimate of the distance between the Ds element in bz1 and the Ac element in stc1, is about twice as high in McC/W22 (0.33%) as in McC/B73 (0.17%). This class includes crossovers of the type shown in Figure 2 as well as rare excisions of either Ds or Ac accompanied by coincidental exchanges in the sh1-bz1 interval. All but 3 of the 283 Sh bz selections from both heterozygotes in Table 1 were also wx, which is not surprising, as interference in the sh1-bz1-wx1 region is very high and double crossovers are rare (Dooner, 1986). The frequency of the Sh bz-m class, which provides an estimate of the distance between Ac and sh1, is not significantly different in the two heterozygotes, suggesting that potential local differences in recombination may average out over the longer stc1-sh1 interval.
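For readers unfamiliar with the map-distance arithmetic behind Table 1, a minimal sketch follows; the kernel counts are hypothetical, chosen only so that the quoted Sh bz frequencies (0.17% and 0.33%) are reproduced:

```python
# Minimal sketch of how the genetic distances in Table 1 could be derived:
# map distance (cM) ~ 100 * recombinants / total progeny scored.
# All counts below are hypothetical illustrations, not the paper's data.

def centimorgans(recombinants: int, total: int) -> float:
    return 100.0 * recombinants / total

mcc_b73 = centimorgans(90, 53000)    # hypothetical -> ~0.17 cM (0.17%)
mcc_w22 = centimorgans(175, 53000)   # hypothetical -> ~0.33 cM (0.33%)
print(f"McC/B73: {mcc_b73:.2f} cM, McC/W22: {mcc_w22:.2f} cM")
```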
The Sh bz selections were first genotyped by PCR for a series of five key indel polymorphisms in the region (shown as open triangles in Figure 2), which allowed us to assign them to the bz gene, the intergenic region, or different parts of the stc1 gene. Selections carrying markers exclusively from the McC haplotype were characterized for the presence of the Ds-bz1 junction, in order to eliminate Ds excisions, and of Ac target site duplication footprints, in order to eliminate Ac excisions. Of the successfully tested Sh bz selections, ~10% arose from coincidental excisions of Ac or Ds and exchanges in the sh1-bz1 interval (9 of 90 from McC/B73 and 17 of 175 from McC/W22). The vast majority arose from recombination between the Ds element in bz1 and the Ac element in stc1.

Based on the indel marker analysis, the fragments bearing the recombination junctions were PCR-amplified and sequenced in order to precisely locate each recombination junction relative to the nearest flanking polymorphisms, most of which are SNPs. The results from sequencing 239 junctions are shown graphically in Figure 3. In this figure, gene exons are colored peach, introns are colored yellow, and SNPs are shown as vertical lines. The 26-kb retrotransposon cluster in the B73 haplotype is drawn in maroon and at a much smaller scale than the adjacent 5.8-kb genic region. MITEs in noncoding gene sequences or the bz1-stc1 intergenic segment are represented as small blue triangles. As can be seen from the respective SNP densities, the bz1 and stc1 alleles of McC and B73 are somewhat more closely related than those of McC and W22. In both heterozygotes, most intervals are delimited by two SNPs. The smallest interval is a few base pairs in length; the largest one is a 1.8-kb stretch of uninterrupted homology at the 3' end of stc1 in the McC/B73 haplotype. The number of junctions falling within specific intervals is shown beneath the common McC haplotype in each heterozygote.

In order to assess the effect of the heterozygous retrotransposon cluster located between bz1 and stc1 on recombination in the adjacent genes, the Ds2(D1)-Ac6087 genetic interval was subdivided into five segments for analysis: bz1, the intergenic region, and three roughly equally sized stc1 segments. Under each of the five segments in Figure 3 are given the stretch of homology, in kilobases, the number of identified crossovers, and the genetic length, in centimorgan; the sum for all intervals is shown at right. The Ds2(D1)-Ac6087 genetic distance for each heterozygote is slightly less than would have been calculated from the Sh bz recombinant class of Table 1, which is uncorrected for Ac or Ds excisions and simultaneous exchanges in the sh1-bz1 region (0.30 versus 0.34 cM for McC/B73 and 0.59 versus 0.66 cM for McC/W22). The total length of the Ds2(D1)-Ac6087 interval is significantly smaller in the B73 heterozygote than in the W22 heterozygote (χ² = 25, 1 df, P < 0.001). An examination of the distribution of junctions in Figure 3 reveals that exchanges occur only in the bz1 and stc1 genes. No recombination at all occurs in the intergenic region in either heterozygote.

Figure 3 legend: As in Figure 1, exons are in peach and introns are in yellow; the stop codon for each gene is indicated by a red octagon. The retrotransposon cluster in B73 is in maroon and drawn at a smaller scale than the rest of the interval. Polymorphisms are represented as vertical lines (SNPs) or blue triangles (indels), numbered as indicated by Wang and Dooner (2006). The number of recombination junctions in each subinterval defined by these polymorphisms is shown beneath the common McC haplotype in each heterozygote, single-digit numbers in black and double-digit numbers in red. Significantly different estimates of genetic distances for a segment in the two heterozygotes are indicated in red. See text for additional details.
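A sketch of the kind of χ² contrast used above (e.g., χ² = 25 for the total interval length) is given below; the crossover and progeny counts are hypothetical, and the test shown is a standard two-by-two contingency comparison, which may differ in detail from the authors' calculation:

```python
# Hypothetical two-by-two contingency test comparing crossover counts in
# one segment between the two heterozygotes. Requires SciPy.
from scipy.stats import chi2_contingency

# rows: McC/B73, McC/W22; cols: crossovers in segment, other progeny
table = [[5, 53000 - 5],
         [22, 53000 - 22]]
chi2, p, df, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, df = {df}, P = {p:.4f}")
```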
This is not surprising in the McC/B73 heterozygote, given that its intergenic region contains the large retrotransposon insertion, but the absence of crossovers in the McC/W22 intergenic region suggests that recombination is inhibited by the high density of SNP and indel heterologies in the interval, an effect previously documented for recombination within bz1 (Dooner and Martinez-Ferez, 1997; Dooner, 2002) and a1 (Yao et al., 2002; Yandeau-Nelson et al., 2006).

Recombination was significantly higher in the bz1 interval and in two of the three stc1 intervals of the McC/W22 heterozygote. The genetic size of the bz1 interval is about four times larger in W22 than in B73 (χ² = 8.15, 1 df, P < 0.01). The size of the distal stc1 segment is twice as large (χ² = 6.05, 1 df, P < 0.01), and the size of the proximal stc1 segment is three times larger (χ² = 18.6, 1 df, P < 0.001). Only in the central stc1 segment did recombination apparently not differ. Although both heterozygotes differ in multiple SNPs, which are known to have an inhibitory effect on recombination, SNP density in the bz1 and stc1 genes is actually lower in the McC/B73 heterozygote than in the McC/W22 heterozygote. Thus, the overall lower recombination observed in the McC/B73 heterozygote is most likely due to the presence of the large retrotransposon cluster in the B73 bz1-stc1 intergenic region.

Recombination in the common gene space of heterozygous haplotypes has been proposed to shuffle retrotransposon blocks, creating new chimeric haplotypes (Wang and Dooner, 2006). In the McC/B73 heterozygote, recombination events within the stc1 gene should produce a recombinant arrangement of retrotransposon blocks, as shown in Figure 4A. Like McC, the recombinants should lack the 26-kb retrotransposon cluster located between bz1 and stc1 in B73, and like B73, they should lack the 53-kb retrotransposon nest and the Helitrons of McC. To verify this, we characterized the size of the stc1-hybridizing NotI band in several stc1 crossovers by CHEF gel DNA gel blot analysis. Representative data from two crossovers in segments 3 and 4 of stc1 (Figure 3) are shown in Figure 4B. These crossovers were known from the initial analysis of the recombination junctions to lack the 26-kb retrocluster from B73. As expected, the parental McC haplotype gives a 111-kb NotI fragment and the parental B73 haplotype gives a 73-kb NotI fragment, both in the W22-introgressed line used in the recombination experiment and in the original B73 inbred. The two different stc1 crossovers give a smaller, ~50-kb NotI fragment, the size expected from the recombinational loss of the large retroclusters of B73 and McC and the retention of the sole distal Grande1 retrotransposon of B73.

DISCUSSION

In this study, we investigated whether retrotransposon polymorphisms, which are widespread in maize (Wang and Dooner, 2006), affect recombination in neighboring genes.
We compared recombination between markers in the adjacent bz1 and stc1 genes in heterozygotes between haplotypes that differed by the presence or absence of a heterozygous retrotransposon cluster in the intergenic region and found that the genetic distance between the markers was twice as large in the absence of the cluster. To monitor recombination, we made use of Ac and Ds markers that affected seed pigmentation, allowing high-resolution analysis of the interval. In McC and W22, haplotypes that lack the retrotransposon cluster, the physical distance between the Ds2(D1) marker in bz1 and the Ac6087 marker in stc1 is ~6 kb, and the measured size of the genetic interval in an McC/W22 heterozygote is 0.59 cM. In the B73 haplotype, the physical distance is ~32 kb because of the insertion in the bz1-stc1 intergenic region of a retrotransposon cluster consisting of Tekay, a Zeon solo LTR, and a Xilon-Mu1 nest, and the measured size of the genetic interval in an McC/B73 heterozygote is 0.28 cM. The heterozygous structure within the genetic interval is more than five times the size of the interval itself, so it could reduce recombination by interfering with the normal pairing of the adjacent genic sequences. Furthermore, the retrotransposon cluster is heavily methylated and probably more condensed than the adjacent euchromatin.

Recombination is reduced fivefold in the 0.81-kb segment of bz1 located between Ds2(D1) and the cluster and twofold overall in the ~4-kb segment of stc1 located between Ac6087 and the cluster. However, the pattern of reduction within stc1 is somewhat unexpected. Recombination is reduced in the proximal (3') one-third and the distal (5') one-third, but not in the middle. This is not the pattern one would expect if the retrotransposon effect diminished with distance from the cluster. It is unlikely that the observed differences result from a small experimental sample, as 77 and 141 stc1 recombinants were characterized in McC/B73 and McC/W22 heterozygotes, respectively. Differences in the density of heterologies, which shows a negative correlation with intragenic recombination in maize (Dooner and Martinez-Ferez, 1997; Dooner, 2002; Yao et al., 2002; Yandeau-Nelson et al., 2006), could, in principle, account for some of the observed differences, but the bz1 and stc1 alleles of B73 are less polymorphic relative to McC in intervals 3 and 5 than are those of W22. One possible explanation for the higher recombination in the 5' stc1 segment of McC/W22 heterozygotes would be that this segment is normally more recombinogenic in W22 than in B73, perhaps because of sequence differences in the two Stc1 promoters, although the enhancement of recombination by promoter-adjacent sequences, as in yeast and mammals (Petes, 2001), has not been demonstrated in plants.

Figure 5 legend: bz1 is to the left and stc1 is to the right. The 6.7-kb interval has been divided into 67 100-bp segments. Segments that make up fractions of an interval have been assigned a number of crossovers in proportion to the fraction of the interval constituted by that segment. The intergenic region lies between intervals 9 and 24.
As is evident from a comparison of the three haplotypes in Figure 1, B73 and W22 differ in other large insertion polymorphisms besides the 26-kb retrotransposon cluster in the bz1-stc1 intergenic region, so the reduction in recombination observed between bz1 and stc1 could be partly the result of heterozygosity for other large insertions in the introgressed segment. However, recombination in the bz1-stc1 interval, which contains the retrotransposon cluster and constitutes about one-fourth of the bz1-sh1 distance, is reduced twofold, whereas recombination in the stc1-sh1 interval, which includes the polymorphic sequences to the right of stc1 in Figure 1, is not reduced significantly in McC/B73 relative to McC/W22 heterozygotes. These observations argue that the reduction in recombination reported here arises principally from the heterozygosity of the large retrotransposon cluster in the bz1-stc1 interval.

The distribution of recombination junctions in the Ds2(D1)-Ac6087 interval correlates negatively with the density of polymorphisms in both heterozygotes. In Figure 5, frequencies of crossovers and polymorphisms in the 6.7-kb interval are plotted as a moving average every 100 bp from the proximal to the distal end. The highly polymorphic intergenic region lies between intervals 9 and 24 and contains no junctions in either heterozygote. Almost all of the polymorphisms outside of this region are SNPs. As can be seen, there is a general inverse relationship between the frequencies of crossovers and heterologies across the entire interval. These data confirm and extend the earlier observations at bz1 and a1 cited above.

All of the recombination junctions fell within introns or exons of bz1 or stc1 (Figure 3). The bz1-stc1 intergenic region does not include the 5' end of either gene, the end often associated with high conversion and the initiation of recombination in yeast (Petes, 2001). However, analysis of recombination junctions in a 100-kb stretch extending upstream (distal) of stc1 that included several intergenic regions confirms that most crossovers fall within genes rather than in intergenic regions (L. He and H.K. Dooner, unpublished data). Similar observations have been made in the maize a1-sh2 region (Yao et al., 2002). In Arabidopsis, a much less polymorphic species than maize, most crossover junctions fall in intergenic regions (Mezard, 2006), so the distribution of recombination appears to differ in the two plants.

The methylated retrotransposons that are interspersed with genes in the maize genome are probably heterochromatic, as they are in the knobs of maize and Arabidopsis (Ananiev et al., 1998; Lippman et al., 2004), and can affect recombination in neighboring genes. The general recombinational inertness of heterochromatin in many organisms (Harper and Cande, 2000) and the reduction of recombination in euchromatic regions adjacent to heterochromatin in Drosophila (Baker, 1958; Westphal and Reuter, 2002) are well-known phenomena. The distribution of LTR retrotransposons in the euchromatic portions of Drosophila chromosomes varies among wild-type strains (Franchini et al., 2004), but the effect of this variability on recombination has not been studied. More relevant to this study is a recent finding in rye (Secale cereale) showing that a polymorphic interstitial heterochromatic sequence in chromosome 2R significantly suppressed recombination in the arm (Kagawa et al., 2002).
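The Figure 5 analysis described above (100-bp moving averages of crossover and polymorphism frequencies, followed by a correlation check) could be sketched as follows; the arrays are random placeholders, so the printed correlation is not meaningful, whereas the real data are reported to correlate negatively:

```python
# Sketch of a moving-average correlation between per-segment crossover
# and polymorphism frequencies along the 6.7-kb (67 x 100-bp) interval.
import numpy as np

rng = np.random.default_rng(0)
crossovers = rng.poisson(2, 67).astype(float)     # placeholder data
heterologies = rng.poisson(5, 67).astype(float)   # placeholder data

def moving_average(x, w=3):
    return np.convolve(x, np.ones(w) / w, mode="valid")

r = np.corrcoef(moving_average(crossovers), moving_average(heterologies))[0, 1]
print(f"Pearson r = {r:.2f}")  # negative with the real Figure 5 data
```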
What sets maize apart from other species studied to date is the extensive variation in the distribution of retrotransposons, hence of interspersed heterochromatin, from line to line. Our results suggest that this variability will affect the local distribution of recombination events across the genome. The implication of the finding reported here is that, for very closely linked markers, local variation in haplotype structure will have a strong influence on estimates of genetic distance. Intergenic retrotransposons add physical length to the interval and reduce recombination in adjacent genes, thus doubly affecting the ratio of genetic to physical length in some mapping populations relative to others. Heterozygosity for intergenic retrotransposons is probably extensive in maize mapping populations. For example, the B73 and Mo17 bz1 locus haplotypes differ from each other by the presence of the 26-kb bz1-stc1 intergenic retrocluster (Brunner et al., 2005), so the bz1-stc1 distance in a B73 × Mo17 map, such as the widely used IBM map (Lee et al., 2002; Sharopova et al., 2002), will be different from that in a W22/McC or W22/Mo17 map. When sequenced, the B73 genome will become the standard for map-based cloning efforts in maize. Using it as the reference for the physical map, one would compute a centimorgan:kilobase ratio of 0.0087 for the Ds2(D1)-Ac6087 interval in an McC/B73 heterozygote (0.28 cM:32 kb). By contrast, the centimorgan:kilobase ratio for the same interval in an McC/W22 heterozygote would be 10-fold higher (0.59 cM:6 kb, or 0.098). Over longer genetic distances, variations in centimorgan:kilobase ratios between different mapping populations may average out.

Recombination within genes can lead to either the loss or gain of intergenic retrotransposons in the recombinants. We showed here that recombination within stc1 in an McC/B73 heterozygote leads to the joint loss of flanking retrotransposons and to a calculated reduction of the distance between the bz1 and znf genes from 82 and 44 kb in McC and B73, respectively, to 18 kb in the recombinant. The recombinant now carries a bz1-znf gene island uninterrupted by retrotransposons. In the reciprocal product, which should be present in the unanalyzed sh Bz recombinant class, both retroclusters would be retained and the distance between bz1 and znf would expand to 126 kb. Again, taking B73 as the reference maize genome, the reciprocal products would have intraspecific expansion factors of 0.4 and 2.9, similar to those reported in an interspecific comparison of longer syntenic blocks from two maize (B73) homeologous regions with rice (Oryza sativa) as the standard (Bruggmann et al., 2006). Whether similar contraction:expansion ratios will occur when comparing megabase-sized distances within maize will have to await a fuller characterization of the genome of other maize lines.

METHODS

Plant Materials

All of the maize (Zea mays) stocks used in this study shared the common genetic background of the inbred W22. The bronze1 alleles and the aleurone phenotypes of the various stocks are described below. Except for the W22 stock carrying a Bz1-B73 allele, the derivation of the other stocks has been described previously (Ralston et al., 1988). bz1-m2(D1) (bronze in the absence of Ac, spotted in its presence) harbors a 3.3-kb Ds element at positions 755 to 762 in the second exon of Bz1-McC (McClintock, 1962; Dooner et al., 1986).
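The centimorgan:kilobase ratios and contraction/expansion factors quoted above involve only simple arithmetic; for concreteness:

```python
# Reproducing the ratios quoted in the text (values from the paper;
# arithmetic only).
cm_per_kb_b73 = 0.28 / 32   # McC/B73: 0.28 cM over ~32 kb -> ~0.0087
cm_per_kb_w22 = 0.59 / 6    # McC/W22: 0.59 cM over ~6 kb  -> ~0.098

# bz1-znf distances: 82 kb (McC), 44 kb (B73), 18 kb and 126 kb in the
# reciprocal recombinants; expansion factors take B73 as the reference.
contraction = 18 / 44       # ~0.4
expansion = 126 / 44        # ~2.9
print(cm_per_kb_b73, cm_per_kb_w22, contraction, expansion)
```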
sh1-bz1-X2 (shrunken, bronze): an x-ray-induced deletion of a large chromosomal fragment that includes the sh1 and bz1 loci (Mottinger, 1973). stc1-m1(Ac6087): an insertion mutation in the first exon of stc1-McC produced by the transposition of Ac from the nearby bz1 locus in McC (Shen et al., 2000). Bz1-W22 (purple): the normal allele of the color-converted version of the inbred W22 (Ralston et al., 1988). This stock traces back to the W22 R1-r:standard stock derived by Brink (1956) for his now classical studies of R1 locus paramutation. Bz1-B73 (purple): the normal allele of the inbred B73. It was introgressed as part of the Sh1 Bz1 segment of that inbred by repeated backcrosses to a W22 stock carrying the sh1-bz1-X2 deletion, thus precluding any recombination within the introgressed segment. The B73 parent was genotypically c1 Sh1 Bz1, and the recurrent W22 parent was C1 sh1-bz1-X2. Plump, purple kernels were selected in each generation, and after three backcrosses, a line carrying a C1 Sh1 Bz1 recombinant chromosome was identified and selfed twice to establish homozygotes. The line was genotyped for the bnl1401 and wx1 markers, located 13 and 25 cM proximal to bz1, respectively, and determined to carry W22 alleles at both of those loci. Thus, the Sh1 Bz1 introgressed fragment from B73 is small. Its distal crossover junction lies in the 4-cM c1-sh1 interval and its proximal junction lies in the 13-cM bz1-bnl1401 interval, so the size of the fragment ranges from a minimum of 2 cM to a maximum of 17 cM.

PCR and Sequencing

PCR was performed using Qiagen Taq polymerase (Qiagen). The PCR products were run on 1% agarose gels or 8% polyacrylamide gels, based on their size and the polymorphisms to be discriminated. For sequencing, PCR products were purified by isopropanol precipitation and 70% ethanol washing. The same PCR primers were used as sequencing primers to directly sequence purified PCR products using ABI BigDye Terminator V3.1 reagent (Applied Biosystems). DNA sequencing was performed in an ABI 3700 DNA analyzer.

BAC Isolation and Sequencing

NotI BAC clones of the bz genomic region of W22 were isolated as described previously (Fu and Dooner, 2000). The W22 BAC clones were sequenced by the shotgun sequencing strategy, assembled, analyzed, and annotated as described for other bz BAC clones from different maize inbreds and landraces (Wang and Dooner, 2006).

DNA Gel Blot Analysis

DNA was prepared from isolated nuclei of 4-week-old shoots and leaves. Agarose plugs containing ~10 µg of high molecular weight DNA from different haplotypes were digested to completion with NotI. The digested genomic DNAs were resolved on 1% agarose gels by pulsed-field gel electrophoresis (CHEF-DR II system; Bio-Rad). The gels were blotted to a Hybond+ nylon membrane (Amersham Pharmacia Biotech), and the membranes were hybridized with random primer-labeled 32P probes from stc1-McC. Conditions for hybridization, high-stringency washing, and exposure to x-ray film were standard.

Accession Number

The GenBank accession number for the 238-kb W22 bz region sequence is EU338354.

Supplemental Data

The following material is available in the online version of this article. Supplemental Table 1. Estimated Times of Insertion of LTR Retrotransposons in the McC and W22 Haplotypes.
2018-04-03T05:50:41.316Z
2008-02-01T00:00:00.000
{ "year": 2008, "sha1": "20a3f09a5278c12eb551bcb9d86570cdc7929fe7", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/plcell/article-pdf/20/2/249/37012260/plcell_v20_2_249.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "1b0a1ba9f682f7ee28c74433804f423757fccff1", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
202452018
pes2o/s2orc
v3-fos-license
Application Analysis of Oil Test Fracturing Technology in Deep Gas Wells

In the process of oil exploration, the general orientation and distribution of underground reservoirs must be defined through oil testing and fracturing operations, which also yield detailed information on crude oil reservoirs. Because deep gas is characterized by high temperature and high pressure, testing and fracturing deep gas wells is difficult. Based on an analysis of the characteristics of deep gas and the development needs of oil testing technology, this paper analyses oil testing and fracturing technology for deep gas wells from the perspectives of reservoir modification and fluid drainage.

Introduction

Oil testing and fracturing in deep formations must contend with high temperature, strong pressure, and low porosity, which make the operations difficult to perform. It is therefore important to study oil testing and fracturing technology for deep gas wells. Starting from the characteristics of deep gas reservoirs, this paper analyses deep gas well oil testing and fracturing technology, including reservoir modification, fluid drainage, and production testing.

Current Situation Analysis of Oil Testing Fracturing Technology

Because oil testing and fracturing are affected by factors such as well depth and surrounding faults, fracturing deep gas wells is technically demanding. Oil testing operations require analysis of both engineering conditions and geology. Fracturing methods are divided into oil-jacket, pitching, sealing, and flow-limiting approaches, and the fracturing mode should be selected according to wellbore condition and construction pressure. The closure pressure of deep gas wells is much higher than that of ordinary gas wells; therefore, to ensure sufficient pressure, proppants must be used scientifically to enhance resistance to compression.

At present, commonly used drainage technologies include jet pumps, hydraulic piston pumps, coiled-tubing liquid nitrogen, liquid nitrogen pump trucks, gas lift, suction, and self-blowout. Among these, liquid nitrogen gas lift drainage offers good safety, low pollution, low pressure, and high efficiency, and has therefore become a common means of draining deep gas wells. During fracturing, additives such as drainage aids, demulsifiers, and microcapsule gel breakers can be applied so that the fracturing fluid flows back in time, avoiding serious damage to the reservoir from residual fracturing fluid. Before fracturing, filtration can be used to remove impurities with large particle sizes and prevent plugging of the pore throat. After fracturing, the fracture should be closed by forced closure to minimize the residence time of the fracturing fluid, or liquid nitrogen drainage can be used, thereby reducing pollution. Starting from the reservoir characteristics of deep wells, a suitable casing depth is selected according to the static depth; if the casing exceeds the standard limit, a string combination is needed to increase the length.
In addition, the unique environment, temperature, pressure, and structure of deep gas wells affect the downhole string in several ways, including expansion, bending, and high temperature, which ultimately displace the string so that it no longer matches the pre-designed axial force. At the same time, deep gas wells contain large amounts of highly corrosive gases, such as hydrogen sulfide, which attack the string material and cause corrosion of the pipe string. String material and the related performance parameters are therefore key to successful oil testing and fracturing in deep gas wells. To address this problem, APR string testing technology can be adopted. This technology mainly consists of upper and lower outer barrels, upper and lower core shafts, a piston, an RD safety circulation valve, and connecting short joints and joints. It can effectively control annulus pressure, allows corrosion detection during acidization, permits repeated washing of the string through the circulation valve to prevent corrosion, and enables the relevant pressure parameters of the string to be optimized and controlled through pressure-control calculations.

Analysis of Oil Testing Fracturing Technology

When testing and fracturing deep gas wells, the pump pressure is often very high and the pump injection time is relatively long, which places high demands on the performance of the testing and fracturing equipment; the bearing capacity of the downhole string must also be relatively strong. Because of the characteristics of deep gas wells, it is difficult to discharge the residual fluid from the well, so technologies such as drainage aids, demulsifiers, and microencapsulated gel breakers must be applied to flow back the fracturing fluid in time, avoid residual contamination from the fracturing fluid, and improve the efficiency of the oil testing operation. Perforation parameters, including phase angle, average hole density, and negative pressure, require scientific and reasonable design. During oil test fracturing, construction progress should be improved, and casing injection through the open well can be used for fracturing to relieve pump pressure to a certain extent. The pressure on the fracturing equipment must be reduced, and drainage technologies such as liquid nitrogen gas lift and jet discharge should be adopted to discharge the fracturing fluid promptly. The technical difficulties in oil testing and fracturing of deep gas wells, including wellbore state, construction pressure, fracturing mode, proppant, and fracturing fluid, are the key points of reservoir modification technology. Each step directly affects the efficiency of oil testing and fracturing of deep gas wells, requiring continuous comprehensive analysis and rational design of the relevant technical parameters to enhance oil testing and fracturing efficiency.

Research and Development of Oil Test Fracturing Technology

To continuously improve the level of oil testing and fracturing technology, new technologies must be developed to improve the efficiency of oil exploration engineering. However, oil testing and fracturing of deep gas wells involve many technological links.
Multiple technologies and pieces of equipment must work together to improve the efficiency of oil testing and fracturing, and the performance of each must be improved. Alongside technical efficiency, attention must also be paid to environmental protection and equipment safety. Technology, environmental protection, and safety together constitute the key direction for research and development of oil test fracturing technology.

Conclusion

At present, oil testing and fracturing in deep gas wells remains technically difficult: wellbore condition, construction pressure, fracturing mode, proppant, and fracturing fluid are the key points of reservoir modification technology, and each step directly affects the efficiency of oil testing and fracturing. Focused research and development of oil testing and fracturing technology will enhance the technology itself and improve the benefits to enterprises and the industry.
2019-09-11T14:13:01.723Z
2019-08-09T00:00:00.000
{ "year": 2019, "sha1": "256fc014f5978e09dc0f09388a86b8b544632a8b", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/300/2/022104", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "82f095387f16ab24daa07acec0b32d0de3294cef", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Physics", "Geology" ] }
219947132
pes2o/s2orc
v3-fos-license
Analogies between SARS-CoV-2 infection dynamics and batch chemical reactor behavior

Highlights
• The batch reactor dynamics is used to predict SARS-CoV-2 spreading.
• The reactor model can phenomenologically explain the virus spreading.
• The model predicts the peak (day and entity) and the infection extinction.
• An Initial Value Problem for ODEs has been solved with an unknown initial condition.
• Algorithm robustness and convergence have been tested.

Introduction

Data analytics and trend extrapolation for future predictions are often demanded of simplified models (Buzzi-Ferraris and Manenti, 2011). Linear, cubic, or polynomial models are the most widespread because of their ease of implementation and the clarity of result interpretation (Ryan, 2008). In practice, an easy prediction is welcome when general macro trends are useful, as happens in the global economy, energy markets, and resource trading, to quote a few. On the other hand, accuracy and robustness in predictions become more and more relevant when the phenomena to be predicted deal with primary needs, like health, surgical treatments, or pandemic situations (Layne et al., 2020), and especially when such phenomena can be modeled in more robust and reliable ways (Joseph T. Wu et al., 2020a,b). Proposed models mainly deal with domestic and international infection diffusion based on the number of flight passengers (Chinazzi et al., 2020), person-to-person transmissions (Chan et al., 2020), and transmission chain tracking (Grubaugh et al., 2019). Some models also attribute a primary role to social interactions (Stevens, 2020), before and after governmental restrictions and considering how compliant the population is, but no one has searched for analogies with chemical and physical phenomena, where process dynamics modeling has a long and consolidated history (CAPE Working Party, 2020; EFCE, 2020; Gani et al., 2020).

The impressive analogies between the behavior of SARS-CoV-2 infection diffusion (Chinazzi et al., 2020; Joseph T. Wu et al., 2020a,b) and chemical reactor dynamics (Fogler, 2006; Froment et al., 2009; Levenspiel, 2004), batch reactors in particular, are the seminal idea behind developing a generalized mathematical model for infection spread dynamics. Relevant information, such as the infection peak (both its time and the active infected population), the end of the pandemic, and its origins, can then be predicted to develop appropriate countermeasures.

Analogies

Chemical reactors are modeled with reactions and balances. Reactions identify chemical transformations of molecules through kinetic mechanisms, and mass and energy balances identify the nature and morphology of the reactor where the transformations take place. Assuming that each person corresponds to a molecule of a specific chemical species, the susceptible, healthy population can be considered as the reactant (component A). It is worth saying that A is not the total population of a region or a country but, for the scope of the proposed model, the total population that will have been infected by the end of the infection. Before the infection, we have only the susceptible population inside the reacting system. The infection outbreak starts generating a new component B (reaction intermediate) representing the active infected people.
After picking up the infection, people have two feasible paths in terms of reaction transformation: recovering through reaction R2 (component C, cured, healthy immune people) or passing away through reaction R3 (component D, deceased people). The general kinetic mechanism for the infection therefore reads:

A → B (R1), B → C (R2), B → D (R3). (1)

Each reaction step is governed by a kinetic parametric relationship in the Arrhenius form (Bodke et al., 1999). Kinetic mechanism (1) takes place in a hypothetical reactor whose active volume for the reaction is the susceptible population (A). If the population can move around, the chemical reactor is considered continuous, since the chemical compounds can either enter or exit it during the operations. Inlet and outlet flowrates are properly simulated by transmission models that have already been proposed in the virology and transmission literature (Chinazzi et al., 2020; Grubaugh et al., 2019; Joseph T. Wu et al., 2020a,b). Moreover, when the starting population is fixed, as is the case during governmental lockdown, the chemical reactor mainly behaves as a batch process, where inlet and outlet flows are null. This latter case fits the current situation in many countries and regions appropriately, and the batch reactor balances can reasonably predict the infection behavior. At the end of the infection, recovered people are supposed to be immunized. If future evidence does not confirm any immunization (Gretchen, n.d.; Jiang, 2020), recovered people will have to be considered reintegrated into the healthy population, with the same original probability of being re-infected by SARS-CoV-2; the kinetic mechanism (1) would then be further simplified, with the recovered stream returning to the susceptible pool A.

The proposed model is intentionally the simplest one, to show the potential of applying chemical engineering principles to topics only apparently far away from them. Nevertheless, more sophisticated models will be conceived and implemented to progressively achieve more reliable predictions.

Model description

The batch-type reactor model has been implemented to dynamically characterize the evolution of each species involved in the kinetic scheme. Batch-type numerical simulations have already been proposed (Stevens, 2020); however, those relied on a stochastic system. In this work, the virus spreading is modelled as a batch (i.e., an intrinsically dynamic chemical reactor), providing a phenomenological interpretation of the data in order to monitor and predict the time evolution of the spreading process. According to the proposed chemical engineering model, people are represented as molecules, while the batch reactor stands for the country where the infection is spreading. Following the kinetic scheme, a portion of the whole population is susceptible to contagion (molecule A). The population is progressively infected (molecule B). At this point, B can follow one of two parallel reaction paths: either recovery from the virus (molecule C) or death from it (molecule D). Hence, molecule B is the intermediate product of the reaction mechanism and is expected to show a maximum peak during the infection evolution as well as to disappear completely at the infection extinction time. The kinetic constants of these reactions are described according to a modified Arrhenius-type law in which the temperature dependence is considered negligible due to the isothermal nature of the infection outbreak.
The general equation of the kinetic constant consists of a pre-exponential factor (k_i^0) and a time-dependent sigmoid correction. This correction is tuned by two parameters, a weight (a) and a time lag (b), to properly follow the dynamic evolution of the system. The rationale behind the choice of a sigmoid time correction lies in the shape of the sigmoidal function itself: a sluggish initial response, followed by exponential growth and a final approach to an asymptote (Stephanopoulos, 1984). These properties are shaped by the two tuning parameters: the weight (a) dilates or shortens the dynamic response in time, while the time lag (b) shifts the response in time, causing an anticipation or a delay. This approach is applied with the same values of a and b to each reaction kinetic constant. The choice of a kinetic constant of this form (equation (6)) yields the sigmoid-like profiles seen in existing infection models (Akhtar et al., 2019; Stojanović et al., 2019; Zeng et al., n.d.).

The chemical reaction rates are considered first-order in the reactant concentration (here, concentration corresponds to the number of individuals sharing a given state):

r_1 = k_1 A,  r_2 = k_2 B,  r_3 = k_3 B. (7)-(9)

Therefore, according to the kinetic scheme (1) and the chemical reaction rates (7)-(9), the component balances for a batch reactor (Fogler, 2006; Froment et al., 2009; Levenspiel, 2004) at isothermal conditions read:

dA/dt = -r_1,  dB/dt = r_1 - r_2 - r_3,  dC/dt = r_2,  dD/dt = r_3. (10)

At first, the second-order autocatalytic reaction A + B → 2B, representing the infection phenomenon related to contact, was taken into account as well. However, the same results are obtained and the computational time almost doubles. The associated reaction rate r_a = k_a·A·B is indeed orders of magnitude lower than that of reaction (3): the susceptible population is much larger than the currently infected population, and the presence of B in the reaction rate expression substantially lowers the final value of the reaction rate.

The stability of the ordinary differential equation (ODE) system (10) has been analyzed at length (Li and Muldowney, 1995). Moreover, the system requires initial conditions for each of the differential equations. The initial conditions for active infected cases (B_0), recovered cases (C_0), and death cases (D_0) are trivial, since they are zero at the beginning of the infection. On the other hand, the initial susceptible population (A_0) is unknown, since its value can only be derived from the final value of the sum of the remaining species (C and D), considering that, at the extinction of the epidemic, the infected species (B) is no longer present:

A_0 = C(t_end) + D(t_end). (11)

The integration of differential systems with unknown initial conditions is called an Initial Value Problem (IVP) (Buzzi-Ferraris and Manenti, 2015). Since the initial condition A_0 highly affects not only the concentration profile of each species and the peak dynamics (i.e., its time position and intensity) but also the final steady-state condition, a robust numerical strategy should be adopted to solve the problem associated with system (10) and initial conditions (11) (Floudas, 1995; Floudas and Pardalos, 2014; Grossmann and Biegler, 2004).

Numerical method

ODE system (10) is adopted to analyze infection data provided by the Johns Hopkins University (hereafter JHU) and Washington University (hereafter WU) websites (John Hopkins University, 2020; University of Washington, 2020).
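A minimal numerical sketch of balances (10) with a sigmoid-corrected kinetic constant is given below. The logistic form of k_i(t) is an assumption consistent with the properties described above (the paper's exact equation (6) is not reproduced here), and all parameter values, including the guessed A_0, are illustrative only:

```python
# Sketch: batch balances (10) with assumed sigmoid-corrected constants.
import numpy as np
from scipy.integrate import solve_ivp

def k_sig(t, k0, a, b):
    # assumed logistic correction: sluggish start, growth, asymptote
    return k0 / (1.0 + np.exp(-a * (t - b)))

def rhs(t, y):
    A, B, C, D = y
    k1 = k_sig(t, 0.15, 0.3, 20)   # infection, A -> B
    k2 = k_sig(t, 0.05, 0.3, 20)   # recovery,  B -> C
    k3 = k_sig(t, 0.005, 0.3, 20)  # death,     B -> D
    r1, r2, r3 = k1 * A, k2 * B, k3 * B
    return [-r1, r1 - r2 - r3, r2, r3]

A0 = 70_000                        # unknown in practice; guessed here
sol = solve_ivp(rhs, (0, 150), [A0, 0.0, 0.0, 0.0], max_step=0.5)
peak = np.argmax(sol.y[1])
print("IPT ~ day", sol.t[peak], "AIPP ~", int(sol.y[1][peak]))
```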
In regressions, an optimizer minimizes a least-sum-of-squares objective function (f_obj) to match infection data and model predictions:

f_obj = Σ_j (y_j^model - y_j^data)². (12)

The adopted numerical solution strategy is a global coupling between minimization and ODE-solver block structures implemented in MATLAB 2019b. This numerical problem is a typical nested optimization problem, and its structure can be divided into two main optimization layers: an outer one and an inner one. The former aims at finding the optimal initial condition (i.e., A_0). The latter evaluates the optimal regression parameters each time the outer optimization is called (Floudas, 1995; Floudas and Pardalos, 2014; Grossmann and Biegler, 2004). The problem is solved once a convergence criterion is met. Thus, the algorithm structure can be summarized as follows:

1. Assignment of a wide range for variable A_0;
2. Domain discretization;
3. Iterative search for the optimal A_0 (outer optimization);
4. For each search: model-based optimization for parameter estimation (inner optimization);
5. Initialization of the differential system;
6. Numerical integration;
7. Steps (2) to (5) repeated until convergence.

The main convergence criterion for the outer optimization can be stated as follows: when A_0^i, the Total Infected Population (TIP) at the Infection Extinction Day (IED) estimated at iteration i, differs by less than 0.5 infected people from the estimate at the previous iteration, A_0^(i-1), the procedure is concluded:

|A_0^i - A_0^(i-1)| < 0.5. (13)

It has been observed that the model is able to reasonably predict the infection evolution only once the inflection point of the sigmoidal function in time has been passed, since there the concavity changes in both the A and B component profiles. At that point, predictions of the nearing peak in terms of intensity and position become reliable.

Regression and validation

The proposed mathematical model consists of four ordinary differential equations evolving along the time axis and providing the temporal evolution of (i) total infected cases, (ii) active infected cases, (iii) recovered cases, and (iv) death cases. The key step for data fitting and predictions is the estimation of five adaptive parameters concerned with the chemical reaction rates and the infection dynamics. Each of the three chemical reactions has a specific reaction rate parameter, and data collected for the SARS-CoV-2 outbreak in online databases (Dong et al., 2020; John Hopkins University, 2020) support the regression process. The remaining two parameters relate to Kermack and McKendrick-like transmission models (Anderson, 1991) as well as microbial growth models (Lin et al., 2000) and identify the projection of total infected cases at the extinction date of the infection. The calculation procedure merges two well-consolidated techniques of the Computer Aided Process Engineering community (CAPE Working Party, 2020) and the European Federation of Chemical Engineering (EFCE)'s Working Party: (i) the nested optimization approach (Duran and Grossmann, 1987; Varvarezos et al., 1992) to overcome model over-parametrization, and (ii) robust model-based techniques to identify and correct gross errors in measurement, communication, and delays in collecting data.

Data for model regression and validation were acquired from (University of Washington, 2020), and samples are provided in Fig. 1, which illustrates the trends for the Chinese province of Henan, where the infection extinction day has been reached.
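The nested outer/inner optimization can be sketched as follows. The `simulate` function is a deliberately simplified placeholder (a logistic TIP curve) standing in for the full ODE model, and the synthetic data are generated so that the recovered A_0 is known:

```python
# Sketch of the nested optimization: outer grid over A0, inner fit of the
# remaining parameters by least squares (objective (12)).
import numpy as np
from scipy.optimize import minimize

t_data = np.arange(60.0)
data = 1000.0 / (1.0 + np.exp(-0.2 * (t_data - 30.0)))  # synthetic TIP data

def simulate(A0, params, t):
    """Placeholder for the ODE model: logistic TIP curve (rate, lag)."""
    rate, lag = params
    return A0 / (1.0 + np.exp(-rate * (t - lag)))

def f_obj(params, A0):
    return np.sum((simulate(A0, params, t_data) - data) ** 2)

best = None
for A0 in np.linspace(500.0, 2000.0, 16):               # steps 1-3: outer grid
    res = minimize(f_obj, x0=[0.1, 25.0], args=(A0,),
                   method="Nelder-Mead")                # step 4: inner fit
    if best is None or res.fun < best[0]:
        best = (res.fun, A0, res.x)
print("optimal A0 ~", best[1])                          # ~1000 for this data
```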
It can be noted that the proposed model properly interprets the whole dataset, consisting of 60 observation days. In particular, relevant information like the Infection Peak Time (IPT), the Active Infected Population Peak (AIPP), the Infection Extinction Day (IED), and the Total Infected Population (TIP) are well predicted. Nevertheless, the relevant use of the model is prediction when the infection dynamics are not yet clearly developed. For the Henan province, Fig. 1 also shows four smaller trends on the right side, which represent the convergence paths of IPT, AIPP, IED, and TIP while the dataset is progressively enlarged day after day. As can be noted, the relevant information can be predicted largely before the complete evolution of the infection and with reasonable robustness. After about 20 days from the beginning of data collection, the identified peak and extinction dates are very close to the real final data registered 40 days later.

According to numerical theory, the prediction can be considered reasonable when the TIP trend has already passed its maximum rate, which corresponds to its maximum derivative in mathematical terms. Before such a point, the predictions are considered unreliable for two correlated reasons: (i) the first infection cases are usually identified with an uncertain delay time (COVID-19, the SARS-CoV-2 disease, requires incubation and identification), and the initially small number of infected cases is strongly affected by large errors; and (ii) small datasets affected by gross errors cannot be considered a good data sample (Buzzi-Ferraris and Manenti, 2011). For future developments, it would therefore be mandatory to regress the model not only forward but backward as well, to better predict the real starting point of the infection in a given population and overcome the initial lack of information and/or measures.

Model predictions can strongly support the decision-making process to predispose, at the right time, materials and logistics for hospitals, dedicated buildings, human resources, doctors, health materials, and medical machines.

Fig. 1. Model prediction (solid lines) and model validation using UW data (dots) for the Chinese province of Henan. (A) On the left side, the trends of TIP (red), active infections (orange), recovered cases (green), and death cases (blue). (B) On the right side, the small trends report the convergence paths of predictions considering the subsequent re-regressions of new daily data: the IPT and AIPP (orange), IED (green), and TIP (red). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Predictions

According to Johns Hopkins' database (John Hopkins University, 2020) (updated to 23rd March), the 10 countries most affected by COVID-19 are reported in Table 1. In particular, the Chinese province of Hubei is still the largest region worldwide for TIP, and its predictions are reported in Fig. 2. On the left side, the dynamics are almost completely evolved and the model properly fits the trends, converging to the TIP with less than 1% error and to the IPT with only 1 day of error at day 20 of the infection (33% of the total infection time for the Hubei province). South Korea is the country with the longest historical database for SARS-CoV-2 infection outside China, and its predictions are reported in Fig. 3. The collected data show a TIP trend that is still linearly increasing after the identified AIPP on day 30.
This unavoidably means that South Korea is not really behaving like a total batch reactor: a small, but continuous, inlet flow of infections is still present in the country. The model constantly underestimates the TIP by 7% in the last observations so as to more robustly fit the remaining relevant information, especially the IED, which has rapidly converged to a shorter time in the last days. It is worth remarking that the figures represent the profile of A as the sum of B, C, and D. In addition, the small charts on the right side of Figs. 2 and 3 show the predicted main parameters (IPT, AIPP, IED, and TIP) according to the number of days taken into account in the dataset.

Discussion

Thanks to this study, in reaction engineering terms, it is possible to distinguish four infection stages of epidemics/pandemics:

- The starting stage (infection outbreak). This is the initial part of the infection, from case 0 to the inflection point of the TIP. It is characterized by an increasing infection rate but a small number of infected cases. Usually, it is affected by relevant errors in measurements and wrong identification of infected cases. This stage has an explosion-like trend and cannot be modeled in a mechanistic manner; moreover, small datasets affected by gross errors cannot be a basis for model-based predictions.

- The early stage (infection transmission). This is the stage where the infection is spreading fast and the TIP is enlarging dramatically. It is identified between the inflection point and the AIPP. Model predictions become useful to quantify the pandemic entity. It is worth saying that the nature of this stage and the following ones differs from that of the starting stage: the starting stage is governed by the natural progression of the infection in a population, whereas the subsequent stages are governed by contingency measures adopted by the population and governments (forced behavior).

- The mature stage (infection mitigation). This represents the days when the impact of the infection is decreasing in terms of active infected cases. It goes from the AIPP to the steady state of the TIP, which means that new active infections no longer occur. In this stage, the conversion rate from active infected cases into recovered and death cases is the highest. It still involves a relatively large number of active infected cases and usually expires when the incubation time has passed since the last new infection case.

- The final stage (infection extinction). It starts when the TIP reaches the steady state and strongly depends on hospital treatments and sickness activity. It is considered concluded when the last active infection case has either recovered or died.

The Hubei province belongs to the final stage; South Korea has just entered the mature stage. Once all the data and the related convergence paths have been collected, the kinetic parameters governing each phase will be properly estimated. They will serve as good initial guesses for numerical model-based predictions of the next potential pandemics. The model is progressively improving its predictions every day, and its potential could support all the countries affected by the SARS-CoV-2 pandemic in making decisions and organizing supplies and human resources. For this reason, the mathematical model proposed in this work will be extended and adapted to all the countries and related regions for which infection data are available. A global platform is to be organized soon at https://www.super.chem.polimi.it.
The Hubei province belongs to the final stage; South Korea has just entered the mature stage. Once all the data and the related convergence paths have been collected, the kinetic parameters governing each phase will be properly estimated. They will serve as good initial guesses for numerical model-based predictions of the next potential pandemics. The model is progressively improving its predictions every day, and its potential could support all the countries affected by the SARS-CoV-2 pandemic in making decisions and organizing supplies and human resources. For this reason, the mathematical model proposed in this work will be extended and adapted to all the countries and related regions for which infection data are available. A global platform will soon be organized at https://www.super.chem.polimi.it. The aim is to collect all the data on SARS-CoV-2 infection dynamics and to estimate local and global kinetic parameters.
Conclusions
Predictions based not only on stochastic approaches, but also on phenomenological theories, could provide an additional element for governments and associations to make decision processes stronger and more robust. The idea of comparing infection dynamics to batch reactor behavior and chemical kinetics appears to provide good information even in the early stages, when the infection is progressing fast. By definition, decision-making robustness in emergencies also means adopting a wider range of tools for future predictions, and more sophisticated models can be proposed to improve prediction reliability. A platform and database for SARS-CoV-2 predictions and, in general, for pandemic predictions, has been launched at the CMIC Dept. "Giulio Natta" Politecnico di Milano website, with the aim of studying kinetic parameters for infection outbreak, transmission, mitigation and extinction.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 3. Model prediction (solid lines) and model validation using UW data (dots) for South Korea. A) On the left side, the trends of: TIP (red); active infections (orange); recovered cases (green); and death cases (blue). B) On the right side, the small trends report the convergence paths of predictions considering the subsequent re-regressions of new data: the IPT [day] and AIPP (orange); IED (green) and TIP (red). South Korea predictions currently underestimate the TIP by 7% to better fit the remaining relevant information. This gap is expected to close within a few days, since the convergence paths are approaching a steady state. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
2020-06-21T13:01:11.480Z
2020-06-20T00:00:00.000
{ "year": 2020, "sha1": "ecd598c82e43970cf1d25aa17ef207663e55584f", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.ces.2020.115918", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "3b4185c164b370a2a86e87e209334dba8008f8be", "s2fieldsofstudy": [ "Chemistry", "Medicine", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
235507028
pes2o/s2orc
v3-fos-license
Designing gamified rewards to encourage repeated app selection: Effect of reward placement
Designers commonly use gamification to improve the frequency of engagement with apps, but often fail to consider the impact of placement on reward value. As rewards tend to depreciate if delayed (termed temporal discounting), placing a reward further into the future can significantly affect its ability to motivate behaviour. We examine the most effective placement of gamified rewards so as to reduce discounting and to increase the frequency with which an application is used. In two online studies, users were asked to choose between fictional budget tracking applications that varied in the placement of either monetary (N = 70) or gamified (N = 70) rewards. In both experiments we found that people more frequently used the application that provided rewards before, rather than after, the task. As predicted by temporal discounting, our work suggests that placing rewards early in the interaction sequence leads to an improvement in the perceived value of that reward, motivating further selection. We discuss the findings in the context of designing effective reward structures to encourage more frequent app engagement.
Introduction
Designers use a variety of features to drive engagement within their applications, such as social supports and motivational prompts (Elbert et al., 2016; Maher et al., 2015), or alarms and reminder notifications (Doherty et al., 2018; Stawarz et al., 2015). Gamification techniques, where game-like rewards are applied to non-game contexts (Deterding et al., 2011), are a popular way to increase engagement, and have previously been found to significantly increase frequency of use in certain contexts (Johnson et al., 2016; Lewis et al., 2016). Yet the impact of gamification on engagement is not yet clear-cut, as weak experiment design and inconsistent use of psychological theory have hampered clear insights into how to most effectively design gamified applications (Seaborn and Fels, 2015). While gamification is usually successful in motivating behaviours (Lewis et al., 2016), it does not do so consistently. As a result, the majority of gamification research focuses on verifying whether certain gamification techniques are effective in specific contexts (e.g. Mekler et al., 2013; Velez et al., 2018), rather than exploring how particular reward processes shape the effectiveness of gamified rewards. For example, temporal discounting (Ainslie, 1975; van den Bos and McClure, 2013), which describes how the subjective value of a reward is reduced by the delay experienced before its presentation, is consistently shown to be an important factor in mediating the value of rewards in both animal and human studies (Paglieri, 2013; Rosati et al., 2007). Yet temporal discounting is seldom explored in the gamification literature. As more valuable rewards create stronger motivations to choose a certain option or behaviour (Flaherty and Caprio, 1976; Green et al., 1991; Sarafino, 2004), it is important to understand how temporal discounting may influence reward value so as to optimise the impact of incentive structures when designing gamified rewards. Our paper contributes empirically tested, theory-driven guidelines by investigating the most effective placement of gamified rewards in order to encourage further engagement with an application.
Frequent app selection behaviours provide further exposure to the app interface, allowing designers to direct this attention to other parts of the application. According to one study conducted on smartphone users (Oulasvirta et al., 2012), frequent app-checking behaviours can act as a gateway for continued app use. Additionally, becoming more habituated to an app interface has been shown to increase the accuracy and speed of the interaction (Garaialde et al., 2020), which creates a switching cost that may demotivate users from moving to a new app. By applying temporal discounting to reward placement design we show that, rather than placing rewards after long interactions with the app, giving users rewards directly as the app is opened makes them significantly more likely to return to the app. Critically, this effect was replicated when both money (Study 1, mirroring other cognitive research on rewards) and a points-based leaderboard (Study 2) were used as rewards. This shows how gamified rewards, like a points-based leaderboard, can be influenced by temporal discounting in a similar manner to financial rewards. These findings have implications for the design of reward structures in a variety of disciplines, particularly for gamified platforms, as they show that rewarding users as soon as they open an application can significantly increase the likelihood that they will open the application again. We argue this reward structure is likely to improve the overall perceived value of selecting the application within the automatic model-free decision-making system (de Wit and Dickinson, 2009). Although further research is required to confirm that the effects hold in a more applied context, increasing this value is believed to increase the likelihood that users will select the app spontaneously (Kamphorst and Kalis, 2015; Wood and Rünger, 2016), therefore promoting more frequent use. Our results provide a valuable empirical test of theoretical predictions about how reward structures should be used when attempting to motivate users to engage with an app more frequently. We suggest that app designers who wish to use gamified rewards to motivate app use could benefit from placing rewards as close to the start of the interaction as possible, as this may encourage users to open their application more frequently.
Gamification and rewards
Gamification is a popular technique used to improve engagement rates across web applications and services, usually providing users with immediate gratification for common or desirable interactions (Deterding et al., 2011; Lewis et al., 2016; Looyestyn et al., 2017). Gamification, as it is widely applied, works by creating the type of reward structures and regular feedback mechanisms commonly found in games, in an attempt to motivate certain behaviours (Deterding et al., 2011; Hamari et al., 2014). Rewards such as points, levels, badges, quests, and leaderboards are usually paired with other types of visual feedback in order to motivate repeated engagement (Tondello and Nacke, 2018). These techniques have been shown to be successful in motivating users to open an app more frequently, to spend longer amounts of time using an app, or to increase levels of participation (Johnson et al., 2016; Lewis et al., 2016; Looyestyn et al., 2017; Seaborn and Fels, 2015).
Gamification techniques are particularly popular in contexts where rewards are delayed, such as education (Barata et al., 2013) and exercise, and attempt to provide rewards in the short term in order to motivate users to stay engaged with the application. They are also very common in the context of increasing the productivity of employees in business organisations (Koopmans et al., 2012), in motivating individuals to take part in citizen science activities (Eveleigh et al., 2013; Iacovides et al., 2013), and in other types of research (Lewis et al., 2016; Looyestyn et al., 2017). While reviews of the literature suggest that gamification is generally effective at increasing engagement, there are often issues that prevent these studies from providing conclusive evidence as to why these effects work and what exactly may be driving their success (Deterding et al., 2011; Hamari et al., 2014; Lewis et al., 2016). Seaborn and Fels (2015) highlight how multiple gamified rewards are often given concurrently, making it difficult to identify the unique contribution of each type of reward or reward technique. In addition, theory is rarely used to guide or explain the design of gamification. Because of this, clear evidence-based, best-practice guidelines on how to structure gamified rewards are hard to find. The current paper aims to provide empirical support for theory-driven guidelines that advise on the most efficient placement of rewards when using gamification as a motivational tool.
How rewards affect choice
As highlighted, rewards form a core component of gamification design. A large portion of theory-based research on how rewards affect choices is framed around dual-process theories (Kahneman, 2003; Metcalfe and Mischel, 1999; Sloman, 1996). These theories divide thinking into two separate but interconnected systems, referred to as the model-free (MF) and the model-based (MB) systems (Daw et al., 2011; Gläscher et al., 2010). Other names for these systems include System 1 and 2 by Kahneman (2003), Hot and Cool systems by Metcalfe and Mischel (1999), and Associative and Rule-based systems by Sloman (1996). The MF system supports fast, automatic decision making and is heavily reliant on successful past experience to guide decisions. On the other hand, the MB system supports slower, more conscious, and deliberate decision making, whereby decisions are influenced by a predictive model of possible future actions and their related outcomes. The MF system is particularly sensitive to the magnitude and timing of rewards, relying almost exclusively on previous experience when making decisions (Wise, 2004). As such, any change to these variables has a drastic effect on how the MF system processes reward information (Kobayashi and Schultz, 2008). The MF system is also frequently described as the default decision-making process, with the MB system only exerting periodic influence when required (Evans, 2007). Because of this, designing an incentive structure that targets the MF system's sensitivity to perceived reward magnitude and timing may be an effective strategy to influence a person's default behaviours. The current paper empirically tests whether temporal discounting, known to disproportionately affect MF decision making, mediates the influence of monetary and gamified rewards on participant choice.
Temporal discounting
Choosing the timing of reward delivery is likely to be a critical design decision when using gamified rewards to encourage behaviours.
This is because the subjective value of rewards can change based on the length of the delay before presentation (Ainslie, 1975; Myerson and Green, 1995). Both humans and other animals value rewards given immediately much more than rewards that are delayed or given later (Luo et al., 2009; van den Bos and McClure, 2013), with this effect being most pronounced early in the delay and reducing over time to follow a hyperbolic curve (Frederick et al., 2002; Green and Myerson, 2004). Therefore, even small delays early on in the interaction may significantly impact the influence of the reward on decision making. This is particularly important for interactions with apps where gamified rewards are given, as there are currently no studies looking at temporal discounting in this context. The current study thus provides a first step in merging the highly theoretical research around temporal discounting with the context of gamification, an area of study that regularly lacks this theoretical focus (Seaborn and Fels, 2015). The temporal discounting effect is believed to emerge from the MF system's inability to create an association between items that are temporally distant. This is thought to be due to a decreased ability of dopamine neurons to form associations between the action (or cue) and a temporally distant reward (Kable and Glimcher, 2007). As this association is required for learning to occur, the decrease in dopaminergic activity subsequently reduces the subjective value of the reward (Peters and Büchel, 2009). In contrast, as the MB system does not rely on these learned associations, it is not as heavily impacted by reward delays (McClure et al., 2004). As such, most studies that involve only the prospective evaluation of future scenarios, rather than direct experience, found minimal reductions in value even for rewards that are weeks or months away (Kirby and Marakovic, 1995). Current research on temporal discounting usually only involves a simple choice between two paths, selected based on a button push or questionnaire answer (Kable and Glimcher, 2007; Kirby and Marakovic, 1995; McClure et al., 2007). In these instances, temporal discounting is measured from the point at which the simple behaviour is executed fully. And yet, many actions are more complex than simple button presses, and may require a lengthy sequence of actions to be completed before rewards are given. For example, completing a language learning session in an educational app involves multiple steps, including opening the app, choosing the lesson to start, and then completing each individual question or test that is required. Usually a reward is only presented after all these components of the sequence are completed, potentially leading to strong temporal discounting effects. However, as temporal discounting research generally only looks at simple behaviours, it is difficult to directly apply its findings to these more complex behaviours. Therefore, we devised two lab-based experiments to explore the influence of reward placement further, particularly in the context of longer sequences of behaviours.
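Before turning to the studies, the shape of this effect can be made concrete. The sketch below uses the standard hyperbolic discounting form, V = A / (1 + kD), where A is the objective reward magnitude, D the delay, and k the discount rate; the value of k used here is an arbitrary illustrative assumption, not a parameter estimated from data.

```python
def discounted_value(amount, delay_s, k=0.1):
    """Hyperbolic discounting: subjective value V = A / (1 + k * D)."""
    return amount / (1.0 + k * delay_s)

# Value falls fastest over the first few seconds, then flattens out,
# which is the hyperbolic shape described above.
for d in (0, 3, 6, 12, 24, 48):
    print(f"delay {d:>2} s -> subjective value {discounted_value(1.0, d):.2f}")
```

The steep early drop is precisely why the placement of a reward within even a short interaction sequence can matter: the first few seconds of delay remove a disproportionate share of the reward's subjective value.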
Research rationale
Currently, the common strategy for gamification designers is to present rewards after the user has completed the desired task within the app. And yet, temporal discounting research indicates that this may not be the most effective place to present a reward (Luo et al., 2009; van den Bos and McClure, 2013). Due to the time taken to execute the entire sequence, the motivating effect of the reward may be reduced. In this paper, we explore whether rewarding users upon opening an application provides a stronger incentive to select that application again when compared to the common practice of rewarding at the end of the sequence. To identify this, we ran two online experiments: the first exploring this effect using monetary rewards (Study 1), so as to situate the results within the rest of the decision-making literature (e.g. Cushman and Morris, 2015; Daw et al., 2011), while the second used a points-based leaderboard as the reward (Study 2), providing insights into how temporal discounting affects gamified rewards.
Aims and hypothesis
The aim of the first study was to test the effect of reward placement on app selection frequency. Participants were asked to complete a data logging task with multiple steps, devised to reflect the act of saving a transaction in a budget tracking application (e.g. Money Lover) by categorising a receipt. The task was deemed to be complex enough that it would not be completed too quickly, but mundane enough not to be entertaining in itself. A mundane task makes it more likely that any reinforcing effects measured come directly from the rewards, and is also more representative of the contexts where gamification is used (e.g. non-gaming contexts that require an extra motivational boost). Three apps were available in the experiment, each providing a reward at a different point: immediately after selecting the app (pre-task placement), directly after the categorisation task itself (post-task placement), or after an artificial buffering delay that followed the task (delay placement). Our hypothesis for Study 1 is: (H1) Reward placement will have a statistically significant effect on the selection frequency of each app, such that earlier delivery will improve selection frequency when compared to later delivery.
Participants
Seventy participants (26F, 44M) were recruited from the UK pool of Amazon Mechanical Turk (MTurk) workers. This breakdown is approximately representative of the gender distribution on the platform for workers in the UK (Difallah et al., 2018). MTurk has been shown to hold a more diverse participant pool than usual lab-based studies conducted in universities, both ethnically and socioeconomically, which improves the applicability of results to a larger population (Henrich et al., 2010). The study was conducted according to the British Psychological Society ethics guidelines (The British Psychological Society, 2018), and was cleared by the university's ethics review process for low-risk studies. The mean age of participants was 31.38 years (SD = 9.28), ranging from 19 to 64. Most participants (80%) reported they had completed at least a bachelor's degree, while the remaining participants either reported only completing secondary school (18%), or none of the above (2%).
Study design
The study used a within-participants design, with reward placement (3 levels: pre-task placement, post-task placement, delay placement) as the independent variable, and selection frequency (total number of times participants selected the app) as the dependent variable.
Materials
App Selection Task
Participants were asked to select an application before starting a data logging task. Three apps were available across the experiment, each varying in the placement of the reward (see Section 4.2.4).
These apps were selected by each participant based on their respective coloured icons, and all involved the same type of expense categorisation task. As such, each trial involved one app interaction in which data logging was performed. The apps were represented within the participants' browser window, but were shown in full-screen mode to better imitate a native application. The app icons were presented in pairs, whereby participants had to decide between two alternatives on the app selection screen. Two apps, rather than three, were presented at a time to ensure that the app selection task was as clear and as easy to complete as possible. Presenting options in such a manner has previously been shown to improve data clarity and to produce more clearly defined and stable results (Windsor et al., 1994). All app pairing permutations (pre-task vs post-task; pre-task vs delay; post-task vs delay) were shown 20 times, creating a total of 60 pairs displayed across the experimental session. To ensure that participants did not have any prior familiarity with the app icons, Tibetan symbols that varied in background colour (either blue, green, or pink) were used. These types of symbols have been used previously in decision-making research (e.g. Daw et al., 2011). The icons associated with each reward placement condition were randomised for each participant but remained constant throughout their experimental session. The app chosen from each presented pair was the measure used as the dependent variable (selection frequency) in the analysis. As part of the instructions for the task, participants were told that they would be categorising expense statements for three different companies, each with their own colour-coded application. They were also made aware that the companies could differ in when they presented payment in the app, and were asked to choose between the two apps displayed. Finally, they were instructed that payment would be based on the number of expense forms completed, and that the experiment would automatically finish once they had reached the outlined time limit.
Data Logging Task
Participants had to match a receipt description to a list of expense codes (see Fig. 1). Each trial consisted of one receipt categorisation. This was done to make the temporal distance between app selection and reward as consistent as possible for each application. Allowing users to categorise multiple receipts in each trial would have introduced a major confound by making the time between reward delivery and action inconsistent across participants. After completing each trial, participants experienced a loading delay of six seconds. There were a total of 66 receipts (termed 'expense statements'): 6 practice and 60 main trials, which were picked in a random order regardless of the app icon chosen. The task was developed to simulate the kind of behaviour commonly executed on expense and budget management data logging apps (e.g. Money Lover), where participants have to log their spending. Numeric codes were included to add difficulty and increase the length of time taken up by the task. The task was designed to be mundane enough that it did not interact with the effect of the rewards. The task was identical for each condition except for the variation of reward placement. Participants were also given the option of exiting a trial by pressing the X button located on the top left of the screen.
Conditions - Reward placement
Each app varied the placement of the reward given when engaging in the data logging task.
A reward appeared either after selecting an app and before the logging task (pre-task placement), after completing the logging task (post-task placement), or after a loading delay following the logging task (delay placement). The current experiment used monetary rewards (a points leaderboard is explored in Study 2). Monetary rewards are commonly used in experiments because they are considered to be universally reinforcing (e.g. Daw et al., 2011; Otto et al., 2013) and thus are a more stable way of assessing reward influence. They are more effective at promoting behaviours than punishment (Li et al., 2016), and their inclusion allows us to more easily interpret and compare results across other reward-based research, particularly because money has already been shown to successfully influence the MF system (Cushman and Morris, 2015; Otto et al., 2013). The monetary reward for each trial was represented by a £1 coin (local currency), shown for 2 seconds during every trial to signal payment for that expense form submission. Each coin represented a payment of $0.12 (default MTurk currency) and was paid through the platform. Participants were told that they would accumulate payment for each correct expense form they submitted and would be given the accumulated amount at the end of the study. The aim of these instructions was to incentivise participants to maximise the number of expense forms they completed and to react positively to each reward. The maximum amount of money any participant could accumulate over the experiment session was $7.92. Although not informed of this until the end of the study, all participants were in fact given $8.00 at the end of the study regardless of performance. The payment rate was based on guidelines for fair MTurk payment practices (Lascau et al., 2019). The delay condition included an artificial delay of 6 seconds, which included a screen with the text "Loading... Please wait a few seconds." Previous research has shown that adding a delay before a reward is presented reduces the value of that reward (van den Bos and McClure, 2013), yet this effect had not previously been replicated in the context of app interactions. The delay condition was thus included to ascertain whether this established effect could be replicated in the new experimental setting, as it uses the same type of delay as other research (wait time). It also allows us to gain an insight into whether our experimental paradigm is sensitive enough to explore temporal discounting effects. As this type of delay has been effective in other paradigms (e.g. Hayden, 2016), a lack of difference in selection frequency between this and any other condition would indicate that the experimental set-up was not sensitive enough to detect temporal discounting. As such, this condition was included as a type of control to show that delays can significantly impact shorter app interactions. Following pilot testing, a loading delay of 6 seconds was chosen so as to minimise the potential for participants to switch tasks (which commonly occurs with delays longer than 9 seconds; Gould et al., 2015) as well as to allow enough trials to be completed while not tiring the participants.
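As a purely illustrative calculation, the same hyperbolic form sketched earlier can be applied to the approximate reward delays in the three conditions: effectively immediate for pre-task, roughly the 12-second task for post-task, and the task plus the 6-second loading screen for delay. The discount rate is again an assumed value, so only the ordering, not the magnitudes, should be read from the output.

```python
def value(amount, delay_s, k=0.1):
    """Hyperbolic discounting, V = A / (1 + k * D), with an assumed k."""
    return amount / (1.0 + k * delay_s)

# Approximate delay between app selection and reward in each condition.
delays = {"pre-task": 0.0,    # reward immediately after selection
          "post-task": 12.0,  # reward after the ~12 s logging task
          "delay": 18.0}      # task plus the 6 s loading screen

for condition, d in delays.items():
    print(f"{condition:>9}: discounted value {value(1.0, d):.2f}")
# Predicted ordering: pre-task > post-task > delay.
```

This is exactly the ordering of selection frequencies that the hypothesis predicts the experiment should observe.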
The post-task condition used a type of delay (execution time) which has been less extensively researched. When designing the experiment, it was thus unknown whether this type of delay would produce significant temporal discounting effects. As such, the expense form task was designed to be complicated enough to take approximately 12 seconds, ensuring a sufficient delay to produce an effect.
Fig. 1. The three screens shown for each trial, starting with A) an app selection screen where participants choose between the two apps presented, B) a data logging task where expense codes are matched to the purchase description, and C) an artificial delay presented as a loading screen before participants were taken back to the app selection screen.
Procedure
Participants recruited through the Amazon MTurk platform took part in the study using their own device. To participate, this device was required to use a physical keyboard as an input mechanism and to meet a minimum screen size of 600x800 px. The experiment site was hosted on a university server and used the jsPsych JS library version 6.1.0 (de Leeuw, 2015). This library has previously been found to record reliable response times when compared to other popular experimental resources (de Leeuw and Motz, 2016; Reimers and Stewart, 2015). The online platform also allows for easy recruitment of a large number of participants in a short time span, greatly reducing time burdens on experimenters and assuring uniformity in the materials presented (Mason and Suri, 2012). Upon selecting the HIT (experiment) on the MTurk platform, participants were given information as to the nature of the study and were asked to give consent to take part. They were then told that the experiment entailed choosing from two icons representing fictional web apps, and then completing an expense data logging task. The instructions stated that all apps gave the same amount of reward for each completed trial, but could differ in when that reward would appear. Participants were instructed to make their icon selections based solely on which application they preferred at that time. Before starting the trials, participants completed a demographics questionnaire, which included questions about age, sex, education, and occupation. Participants were then asked to complete a set of practice trials, whereby they were presented twice with each app icon and the data logging task. This ensured that all participants had been consistently exposed to each type of reward placement before they started the experiment trials.
Fig. 2. Structure of study 1 conditions. The pre-task condition presents a reward immediately following screen A, the post-task condition does so after screen B, and the delay condition after screen C. Each application represents one condition and always presents the reward for each trial in the same location.
Following the structure of similar decision making research (Cushman and Morris, 2015; Daw et al., 2011; Otto et al., 2013), participants were required to use the arrow keys to choose from a pair of apps displayed for two seconds, otherwise the trial would time out. There was no timer when completing the data logging task, and participants had the option to cancel out of any trial if they wished to do so. The location of the screen where each app icon appeared, the colours associated with each condition, and the expense statements were all randomised for each participant.
The expense form had to be completed correctly to be submitted; otherwise the participant was locked out of the task for five seconds while the incorrect answer was highlighted. Participants were locked out following an error to prevent them from guessing multiple codes in quick succession, or from quickly entering wrong values on purpose to be given the correct answer. Participants were given a time limit of approximately 30 minutes to complete the main task; after this point they were no longer presented with trials even if they had not completed all of them. This length of time was tested in pilot studies to ensure that the majority of participants would be able to complete the trials within it. At the end of the experiment, participants were asked to rank the apps based on preference, to give an account of why they had those preferences, and to give details on any strategies used throughout the task. Participants were then forwarded to a final screen, where they were fully debriefed on the nature and aims of the study.
Data cleaning
A participant's data was included in the analysis if at least 70% of all 60 trials had been completed successfully. This threshold is similar to those used in comparable experiments (e.g. Cushman and Morris, 2015; Otto et al., 2013) and was chosen before data collection. Having completed less than 20% of trials, the data from 11 (5F, 6M) participants was removed. Only four participants did not complete the main task of the study within 30 minutes, meaning they were presented with fewer than the total of 60 trials. However, since all four participants had still successfully completed over 70% of trials, their data was included in the analysis. Additionally, four participants (1F, 3M) were removed due to almost exclusively picking the app on only one side of the screen (over 85%), suggesting they were not basing their decisions on subjective preference for each item. The data cleaning process meant that the data of only 55 of the 70 participants was suitable for analysis.
App selection analysis
The data was analysed using the Bradley-Terry model (Bradley and Terry, 1952), which calculates a selection score based on how often an item is chosen when paired with other items, while considering the transitive property of the items (the selection scores for each condition are shown graphically in Fig. 4). The probability model ranks items based on maximum likelihood estimates and is recommended for studies where options are presented in pairs (Cattelan, 2012; Yao and Simons, 1999). The technique has been used in previous HCI studies analysing similar types of data (Ashktorab et al., 2019; Serrano et al., 2017). The analysis used version 1.0-9 of the BradleyTerry2 R package (Turner and Firth, 2012) and R version 3.5.1 "Feather Spray" (R Core Team, 2014). The model ranked pre-task placement as the most selected condition (λ = 0.286), followed by post-task placement (λ = -0.064), with delay placement being the least selected (λ = -0.222). As post-task was the middle item in the ranking and is the standard way rewards are presented in most data logging applications, it was used as the reference category in the pairwise comparisons.
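Under the Bradley-Terry parameterisation, these selection scores map onto pairwise choice probabilities via P(i ≻ j) = e^(λi) / (e^(λi) + e^(λj)). A minimal sketch of that mapping, using the λ estimates above (essentially the calculation performed later with the choix package):

```python
import math

def bt_prob(lam_i, lam_j):
    """Bradley-Terry probability that item i is chosen over item j."""
    return 1.0 / (1.0 + math.exp(lam_j - lam_i))

scores = {"pre-task": 0.286, "post-task": -0.064, "delay": -0.222}

print(f"P(pre-task > post-task) = {bt_prob(scores['pre-task'], scores['post-task']):.1%}")
print(f"P(pre-task > delay)     = {bt_prob(scores['pre-task'], scores['delay']):.1%}")
```

The printed values (about 58.7% and 62.4%) match the choice probabilities reported below, and the same mapping applied to the Study 2 estimates reproduces the probabilities reported there.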
Preference for the pre-task placement condition was significantly greater than for the post-task placement condition (Z = 6.86, SE = 0.51, p < 0.001), whereas preference for the delay placement condition was significantly lower than for the post-task placement condition (Z = -3.09, SE = 0.51, p = 0.002), supporting our hypothesis. To test whether participants changed how they performed the task based on the timing of the rewards, a linear mixed effects model was run on the time taken to complete the task and the number of errors. There was no significant difference in time to complete the task for the pre-task (t = -0.76, SE = 378.47, p = 0.447) and delay (t = -1.68, SE = 412.01, p = 0.092) conditions compared to the baseline post-task condition. There was also no significant difference in the number of errors for the pre-task (t = 0.948, SE = 0.237, p = 0.343) and delay (t = -1.768, SE = 0.026, p = 0.077) conditions compared to the post-task condition. Using the choix package version 0.3.3 (Maystre, 2015) running on Python 3.6 and based on the Bradley-Terry model selection scores, choice probabilities were calculated for each application. The app with pre-task reward had a 62.4% probability of being chosen when paired against the app with delay placement, and a 58.7% chance of being chosen when compared to the app with post-task placement.
Self-Reported app preference
As part of the post-experiment questionnaire, participants were asked to rate their preference for each of the apps by placing each app icon into one of three category boxes labelled most preferred, no preference, and least preferred. Overall, 52.7% of participants (N=29) rated the pre-task placement app in the most preferred category, with 27.3% (N=15) rating the post-task app, and 16.4% (N=9) rating the delay placement app as their most preferred. In the no preference category, the pre-task app was placed by 32.7% (N=18), the post-task app by 52.7% (N=29), and the delay app by 58.2% (N=32). Lastly, for the least preferred category, the pre-task app was chosen by 14.5% (N=8), the post-task app by 20.0% (N=11), and the delay app by 25.5% (N=14) of participants. The association between preference category and app icon was statistically significant [χ²(4, N = 55) = 17.687, p = 0.001], mostly due to the high preference for the pre-task app.
Logging task cancellation
Participants were allowed to cancel out of the logging task by pressing the X icon on the top left of the screen, effectively ending that particular trial. The feature was primarily included to allow participants to skip past any items they found particularly challenging or in case any other issues arose. It also allowed us to see whether participants would cancel out of the logging task after being given the reward in the pre-task condition. Across the experiment, only one of the original 70 participants was found to avail of the cancel feature, using it indiscriminately across all of the apps in 81% of all trials. This participant was removed from the analysis as they did not meet the threshold of completing at least 70% of trials. No other participant used the cancel feature. When examining the answers in the post-experiment questionnaire, many participants stated that although they had received a reward, they felt that payment of this reward was not guaranteed if the task was cancelled.
Discussion
This study aimed to identify how reward placement could be used to increase the frequency with which people selected an application.
Our results show that people more frequently selected the app that placed rewards early in the interaction (i.e. immediately after selection and before completing the data logging task), compared to those that rewarded users after completing the data logging task, supporting our hypothesis. We propose that the results show a reduction in perceived reward value due to temporal discounting (Ainslie, 1975). By placing the reward immediately following app selection, temporal discounting of the reward appears to be minimised, improving the reward's ability to motivate a user to re-select the app by maximising the reward's value. If the reward appears further along the sequence, its subjective value is reduced, making users less likely to select the app again. Our work is the first to show that temporal discounting effects operate during short interactions with an application. The findings also carry implications for understanding where in a behavioural sequence a reward (gamified or otherwise) is likely to be most impactful in encouraging more frequent app engagement. According to research on the effects of rewards on MF processing (de Wit and Dickinson, 2009), actions carry an associated value based on their proximity to rewards, which controls how likely the action is to be repeated in the future. Our findings suggest that, in multi-step sequences, increasing the value of the initial action (i.e. app selection) is significantly more successful at improving frequency of selection than including rewards at any other point in the sequence. Previous work has emphasised how starting an action sequence increases the likelihood that the rest of the sequence is executed, both because of the environmental cues controlling choice (Smaldino and Richerson, 2012) and internal motives related to sunk cost (Arkes and Blumer, 1985). We saw this in our work, as people still completed the trial in the pre-task condition even though they had the option to cancel the trial after receiving the reward. Based on these findings we advise that, if delivering a reward, designers of gamified applications should consider placing that reward early (i.e. directly after app selection), as this may encourage more frequent app use. Nevertheless, since gamified rewards are not limited in number, this does not need to be the only location where a reward appears. It is possible and even likely that a combination of reward structures is needed to encourage the desired behaviour. Therefore, our advice centres on the need to implement some early rewards when designing gamified applications. However, the current study has a major limitation in that it uses money rather than more common gamified reward techniques such as points or leaderboards. Unlike money, which is considered to be universally reinforcing (Bijou and Baer, 1966; Latham and Huber, 1991), points given on their own are usually less successful at motivating behaviour. This is because they lack the applied context or purpose that gives them meaning. Leaderboards can supply this meaning, and are generally successful at promoting desired behaviours when paired with points (Landers et al., 2015; Mekler et al., 2013). Yet it is still unclear whether non-monetary gamified rewards such as points and leaderboards operate in a similar way to monetary rewards. We therefore conducted a further study to explore whether the previous findings extend to contexts where a points-based leaderboard is used as a reward instead.
Importantly, using a points-based leaderboard also allows us to observe whether the novel effects seen in Study 1 can be replicated in the context of a more ecologically valid gamification mechanic.
Aims and hypothesis
The aim of this study was again to investigate how the placement of a reward within an app interaction affects selection frequency, this time using a points-based leaderboard rather than monetary rewards. The gamification technique used in this experiment involved virtual points that control the ranking of the participant on a leaderboard. Our hypothesis for Study 2 is: (H2) Reward placement will have a statistically significant effect on the selection frequency of each app, such that earlier delivery will improve selection frequency when compared to later or no delivery. In addition, rather than including a delay condition in this study, it was replaced with a no-reward condition. The rationale behind this choice was that we wanted to make sure that the leaderboard and points were reinforcing, which would be evidenced by a lower selection frequency for the no-reward condition compared to the other conditions. Additionally, the first study already provided evidence that the delay condition was the least preferred option, meaning that this condition was no longer necessary.
Participants
A total of 70 participants (21F, 48M, 1 other) agreed to take part in the study and were recruited from the MTurk platform. Those who participated in Study 1 were prevented from taking part in Study 2. This sex distribution was again representative of that on the platform for workers in the UK (Difallah et al., 2018). The study was conducted according to the British Psychological Society ethics guidelines (The British Psychological Society, 2018), and was cleared by the university's ethics review process for low-risk studies. The mean age of the participants was 33.63 (SD = 9.00), ranging from 19 to 56. Most participants (60%) reported they had completed at least a bachelor's degree, while the remaining participants either reported only completing A-levels or secondary education (39%), or none of the above (1%).
Study design
Similar to the previous study, the experiment involved a within-participants design with reward placement (3 levels: pre-task placement, post-task placement, no-reward placement) as the independent variable, and selection frequency (total number of times all participants selected the app) as the dependent variable.
Materials
The app selection screen and the data logging task remained identical to those used in Study 1 [see Section 4 for details]. Each study used the same icons, colours, and expense items, with the studies only varying in how the reward was displayed. Instead of only presenting a large pound coin to represent the reward, a large digital gold coin was shown along with a leaderboard (see Fig. 5). When receiving points in the rewarded conditions, an animation would move the coin into the leaderboard, increasing the participant's score and improving their ranking if their score went above another player's. The leaderboard was randomly populated with a uniform distribution of scores between 16 and 66 points. This was done to ensure that the average participant would be able to slowly climb the rankings to an above-average position. In this way, the potentially demotivating situation of being kept at the bottom of the leaderboard (Massung et al., 2013) was avoided.
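A minimal sketch of the leaderboard mechanics just described; the number of fictitious co-players and the random seed are illustrative assumptions rather than parameters taken from the experiment software.

```python
import random

rng = random.Random(7)                                   # fixed seed, illustrative
others = sorted(rng.randint(16, 66) for _ in range(20))  # fictitious co-players

def rank_of(score, board):
    """1-based leaderboard position for the participant's current score."""
    return sum(s > score for s in board) + 1

score = 0
for trial in range(60):   # one coin animates into the board per completed trial
    score += 1
print(f"score {score}: rank {rank_of(score, others)} of {len(others) + 1}")
```

Seeding the board uniformly between 16 and 66 lets a participant who completes most of the 60 main trials climb steadily toward the top of the table, matching the stated design goal of avoiding a static bottom position.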
Conditions - Reward placement
Both the pre-task placement and post-task placement conditions were the same as those used in Study 1 [see Section 4 for details]. As previously mentioned, delay placement was replaced by a no-reward condition, in which participants received no reward and no coin animation upon completing a trial. Although the leaderboard was still visible in this condition, no points were added to their tally. As detailed in Section 4.2.4, the delay condition in Study 1 was included so as to confirm that the experiment could measure significant differences in choice based on wait-time delay, and was thus able to detect temporal discounting. The key comparison within Study 1 was the difference between the pre- and post-task conditions, showing that execution time affects temporal discounting. For the current study, however, ascertaining whether the gamified rewards used were reinforcing was more important than including a delay condition, as the viability of the experimental paradigm for measuring temporal discounting had already been shown. Importantly, whether points-based rewards would be reinforcing in this experimental paradigm was not clearly known. Additionally, adding a fourth condition would have significantly increased the number of trials due to the increase in the number of permutations needed. As a result, rather than a delayed reward condition, it was deemed critical to include a no-reward condition in this study so as to confirm that the leaderboard was reinforcing to participants.
Procedure
The procedure was similar to that used in Study 1, except that the time taken to complete the experiment was reduced. Since the delay placement condition was removed, the task was able to progress more quickly. All participants were instead given 17 minutes to complete the main task, at which point they were no longer presented with trials even if they had not completed all 60 of them. This length of time was tested in pilot studies to ensure that the majority of participants would be able to complete the trials within it. As the study took less than 30 minutes, a $6.00 payment was given. The rate of payment was calculated at $12/hr, as suggested by research on fair payment for experiment participants on Amazon Mechanical Turk (Lascau et al., 2019).
Data cleaning
The thresholds for removing participant data were the same as in Study 1: 70% of the trials had to be completed successfully. Three participants (3M) were removed due to not passing this threshold within the 17-minute task. An additional six participants (3F, 3M) were removed due to a heavy bias towards one side of the screen (> 85%), indicating that their choices were not based on a preference for a specific app as specified in the instructions. In total, out of the original 70 participants, 61 (18F, 42M, 1 other) provided data suitable for analysis following data cleaning procedures.
App selection analysis
The data was again analysed using a Bradley-Terry model (Bradley and Terry, 1952), which provided estimates (selection scores) of relative preference for each item. The model ranked pre-task (λ = 0.466) as the most preferred level of reward placement, followed by post-task (λ = -0.022), with no-reward being the least preferred (λ = -0.443).
Fig. 5. The count next to the participant's location on the leaderboard increased by one every time the coin was shown. When no reward was presented, the leaderboard was still shown without the coin and the count remained the same. The position on the leaderboard changed as the count increased, such that the player climbed the leaderboard throughout the experiment.
As post-task was the middle item in the ranking and the most common way of delivering points in gamified apps, it was used as the reference category in the pairwise comparisons.
Selection frequency for the pre-task placement condition was significantly greater than for the post-task placement condition (Z = 9.746, SE = 0.05, p < 0.001), whereas selection for the no-reward condition was significantly lower (Z = -8.439, SE = 0.05, p < 0.001). Fig. 8 shows the ranking of each app based on preference estimates, with 95% confidence intervals based on quasi-standard errors. Using the choix package, it was calculated that the pre-task app had a 62.0% chance of being chosen when compared to the post-task app, and a 71.3% probability of being chosen when paired against the no-reward app. Additionally, the post-task app was chosen with 60.4% preference when compared to the no-reward app. The comparisons involving the no-reward condition suggest that the points-based leaderboard reward itself was generally reinforcing. We again tested whether participants changed how they performed the task across conditions, using a linear mixed effects model on the time taken to complete the task and the number of errors. There was no significant difference in time to complete the task for the pre-task placement (t = -1.64, SE = 237.63, p = 0.102) and no-reward placement (t = -0.453, SE = 284.74, p = 0.651) conditions compared to the post-task placement condition. There was also no significant difference in the number of errors for the pre-task placement (t = -0.782, SE = 0.021, p = 0.435) and no-reward placement (t = -1.646, SE = 0.025, p = 0.100) conditions compared to the post-task placement condition.
Fig. 6. Structure of study 2 conditions. While the pre-task and post-task conditions present the rewards in the same location as the previous study, they use points connected to a leaderboard instead of money. In addition, the delay condition was exchanged for a condition where no reward is presented. Each application represents one condition and always presents the reward for each trial in the same location, except for the no-reward condition where only the leaderboard is shown.
Self-Reported app preference
As part of the post-study questionnaire, participants were again asked to rate their stated preference for each app by placing its icon into one of three categories: least preferred, no preference, and most preferred. Out of 52 total participants, 53.8% (N=28) placed the pre-task app in the most preferred category, while 40.4% (N=21) did so for the post-task app, and 13.5% (N=7) for the no-reward app. For the no preference category, the breakdown was 34.6% (N=18) for pre-task, 46.2% (N=24) for post-task, and 50.0% (N=26) for no-reward. Lastly, the least preferred category had 11.5% (N=6) for pre-task, 13.5% (N=7) for post-task, and 36.5% (N=19) for no-reward. The association between preference category and application was significant [χ²(4, N = 61) = 41.72, p < 0.001], with the model showing a heavy preference for the pre-task app.
Logging task cancellation
Participants were again given the ability to cancel a trial by pressing the X button located on the top left side of the screen. Only two participants used this cancellation feature during the experiment.
One participant indiscriminately cancelled a total of 86% of trials regardless of reward placement, and so was not included in the analysis, while the other participant only cancelled one item. Therefore, similar to Study 1, it does not appear that participants were incentivised to exploit the reward contingencies by abandoning the trial.
Discussion
Similar to Study 1, our results show a significant impact of reward placement on selection frequency. We found that the applications that gave rewards were selected more frequently than the one that did not, showing that participants were motivated by the points-based leaderboard in the study. This gives support to the notion that the leaderboard and points were reinforcing to participants. The reward-producing options were preferred even though the participants were told that placement on the leaderboard did not affect how much they earned for taking part in the study. We also found that, as hypothesised, giving users points after app selection but before completing the data logging task led to significantly more frequent app selection than the other reward placement alternative. This means that temporal discounting applies to gamified rewards, and that platforms should consider reward timing when attempting to motivate users with gamification.
General discussion
Our study aimed to inform app interface design, contributing findings on how reward placement can be used to increase the frequency with which people choose apps. Through the application of insights from temporal discounting theory (Ainslie, 1975; van den Bos and McClure, 2013), we measured whether an app that placed rewards early in the interaction, particularly directly after app selection, affected the frequency with which people chose that app compared to those that rewarded users after data logging. To test this, we conducted two online studies whereby participants selected from identical apps to complete a budget and expense tracking task, varying only in the location and type of the reward presented. Across both experiments the results showed that participants chose the app that rewarded them early on (i.e. after they had selected the app) more frequently than applications that rewarded them after a longer interaction. Importantly, this was shown both for monetary and for gamified (points-based leaderboard) rewards. This supported both our hypotheses and conforms to the predictions made by temporal discounting research (Ainslie, 1975; Hayden, 2016). In addition, no difference in performance measures such as accuracy and completion time was found, indicating that reward placement had no immediate adverse effects on task performance.
Designing rewards for increased engagement
Across our experiments, we showed that participants chose the pre-task app with greater frequency than the other apps to perform the data logging task, following the predictions of temporal discounting theory (Ainslie, 1975). These findings could be used in any field that uses gamification to motivate behaviour, be it education, health, or commerce-based applications. These results provide clear guidelines for interface designers who wish to use gamified rewards to motivate app use. In particular, they highlight how focusing rewards on the first crucial step, opening the application, could promote a significant increase in the frequency of app selection. Currently, the design of most popular incentive structures for gamified apps tends towards rewarding the user after the task has been completed (e.g.
Duolingo or Epic Win). However, the approach of rewarding early in the interaction is gaining traction in gaming apps, as daily login rewards are becoming more common. A study looking at the reward mechanisms employed in the 16 most popular mobile games mentioned by its participants found that daily login rewards were included in nearly half of them (Prasad et al., 2020). Our research provides much-needed experimental evidence demonstrating that rewarding early in the app experience can lead to significant increases in app selection, which may be the reason for the rising popularity of daily login reward structures. Although not the main aim of the work, our findings also suggest that, if rewarded early, users tend to still complete in-app tasks once they have received the reward. Participants still completed the data logging trial, even in the instances where the reward had already been delivered. This echoes previous work suggesting that encouraging checking behaviours can lead to further app engagement (Oulasvirta et al., 2012). Other work has also shown that, because of the sunk cost effect (Arkes and Blumer, 1985) and environmental cues controlling choice (Smaldino and Richerson, 2012), starting an action sequence makes it more likely that the sequence will be completed. Although replication in other populations is needed to ensure that this effect is not unique to MTurkers, it suggests that rewarding app selection early may not lead to users abandoning the task after receiving the reward. However, there is still a possibility that early rewards could be exploited by users. As such, it is important to always carefully consider the entire incentive system and what behaviours are being promoted. Nevertheless, there are numerous techniques that can be used to reduce or prevent exploitation. For example, the early reward could be held conditionally, such that it is only presented at the start of a sequence if the previous sequence was completed. With this method, the early reward could only be exploited once, while still suffering minimal temporal discounting. It may therefore be necessary to balance the benefits of reducing temporal discounting against the drawbacks of weakening the sunk-cost effect, which may mean the reward needs to be moved slightly depending on which behaviours are causing the most sequence abandonment; a sketch of one such conditional scheme is given below.
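A minimal sketch of that conditional scheme, granting the early reward at app open only if the previous session's task sequence was completed; the state dictionary and function names are illustrative assumptions, not part of either study's software.

```python
def reward_on_open(state):
    """Grant the early reward at launch only if the previous session's
    task sequence was completed, bounding how far the reward can be
    exploited while keeping it adjacent to the moment of selection."""
    if state.get("last_session_completed", True):   # first-ever open also rewards
        state["points"] = state.get("points", 0) + 1
    state["last_session_completed"] = False         # must re-earn the next reward
    return state

def on_task_completed(state):
    state["last_session_completed"] = True          # unlocks the next early reward
    return state

# Abandoning a session forfeits exactly one future early reward.
s = {}
s = reward_on_open(s)       # first open: reward granted (points = 1)
s = reward_on_open(s)       # reopened without finishing: no reward
s = on_task_completed(s)    # task finished this time
s = reward_on_open(s)       # next open: reward granted again (points = 2)
print(s["points"])          # -> 2
```

The trade-off sketched in the code is the one described above: the reward stays at the start of the sequence (minimal discounting), but a user can collect at most one unearned reward before completion is required again.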
Especially when attempting to promote app selection, adding rewards to the start of this sequence may significantly improve the chance that it will be repeated. A crucial component of this is ensuring that the points rewards are seen as reinforcing. Leaderboards are thought to achieve this by giving the points an immediately understandable value, by acting as a metric for social comparison, and by giving users information about the boundaries of performance (Bowey et al., 2015; Garcia et al., 2013). Due to the similarities in findings across our studies, we conclude that, as long as the reward is reinforcing, similar improvements in reward value can be seen when rewarding early for both monetary and non-monetary rewards. Temporal discounting and app choice The results provide a significant contribution to temporal discounting research by entrenching it within an experimental context that is less constrained and more representative of complex real-world decision sequences. At present, temporal discounting research mostly uses questionnaires and a set of simple binary choices linked to the attainment of a reward (Kirby and Marakovic, 1996). Participants are usually asked to imagine either receiving a sum of money now, or in a number of days (Paglieri, 2013), and are then asked to select their preferred option. This methodology has been criticised by several researchers (Navarick, 2004; Paglieri, 2013), who found the rate of discounting to be orders of magnitude lower in questionnaire studies than in equivalent experiments where participants actually experience the reward delay. Temporal discounting effects may have been grossly underestimated using these methods, with previous work suggesting that reward delays needed to extend over weeks or months to affect behaviour (Kirby, 2009; Kirby and Marakovic, 1996). Our results show that temporal discounting appears to affect decisions significantly, and that in terms of altering the frequency of app selection, only a delay of seconds is required. According to dual process theories, promoting stronger positive associations in the MF system through rewards increases the chance that the action will be completed spontaneously (Kamphorst and Kalis, 2015; Wood and Rünger, 2016). For this process to occur, stronger associations need to be developed through the repeated pairing of the desired behaviour with a reward (Wood and Neal, 2007). Based on previous behavioural and neuroimaging research (Daw et al., 2011; Kobayashi and Schultz, 2008; Luo et al., 2009; Otto et al., 2013), the increased selection frequency seen for the pre-task reward apps is likely due to stronger positive associations between the reward and action within the MF system (Dayan and Balleine, 2002; Graybiel, 2008). Indeed, the temporal discounting effect seen in Studies 1 and 2 suggests that the reward mechanisms are influencing the MF system (Kobayashi and Schultz, 2008). Delineating between decision and action Current temporal discounting research (Logue and King, 1991; Navarick, 1998; Rosati et al., 2007) usually involves tasks that can be completed immediately, conflating the behaviour with the decision to act, and thus limiting ecological validity when applied to the context of longer interactions. As such, delays due to task-related execution time are currently underexamined. The two studies reported here experimentally tested whether temporal discounting applies in a situation where the decision and the interaction are separated by an extended execution time.
This was because the participants were first required to select their preferred app, followed by an expense form task that took approximately 10 s to complete. The decision to complete the task using a certain app was thus separated from the task itself, allowing for a greater understanding of how temporal discounting affects decision making. Our results support the idea that behavioural instigation (i.e. the decision to act) and execution (i.e. the action itself) are separate processes, a distinction that has received some recent support (Phillips and Gardner, 2016; Phillips et al., 2019). Therefore, it may be important to consider rewarding the decision to interact with the app when attempting to promote higher app engagement, which may be achieved by presenting a reward immediately after opening the app. Limitations and future work Similar to many lab-based decision-making experiments in cognitive science and in HCI (e.g. Ashktorab et al., 2019; Daw et al., 2011; Otto et al., 2013), our study asked participants to make a number of app selection decisions within one experimental session. While this method allowed us to interpret our findings in light of this established literature base, future work should expand on our promising results. This could be done by performing a field trial comparing different app versions that vary in reward placement to examine whether this has a significant influence on login rates. This is important because the significant effects measured in the two studies were for options that were otherwise equivalent. As such, it is possible that for real-world scenarios, where options are generally very different, the magnitude of the effect may be reduced compared to these more controlled settings. Within our study, the absolute probabilities of choosing pre-task placement over post-task placement were found to be 58.7% and 62% for Studies 1 and 2, corresponding to relative increases over chance (50% probability) of 17.4% and 24%, respectively (this arithmetic is reproduced in the short sketch below). It is important for future work to see whether this increase in selection frequency creates a meaningful difference when deployed in real-world applications. Additionally, the studies only involved one experimental session, and while this approach is common in the literature, it means the early reward technique may see reductions in efficacy over time. This is a frequent issue in the gamification literature, as strong initial effects usually decrease towards the end of these studies (Johnson et al., 2016; Koivisto and Hamari, 2014). Nonetheless, other research has been able to show sustained improvements in engagement (Farzan et al., 2008; Morschheuser et al., 2015), suggesting that differences in how gamification is applied may be behind these decreases in effectiveness. As a result, further research is needed to examine the longer-term outcomes of early reward strategies. Our sample focused on UK-based MTurk workers. MTurk workers were used because they tend to improve the applicability of results to a larger population, being more culturally heterogeneous and socioeconomically diverse than typical university samples (Henrich et al., 2010). However, further work is also needed to explore whether the effects seen in our work apply to non-MTurk samples as well as other nationalities and cultural backgrounds.
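The relative-increase figures quoted above follow from simple arithmetic over the reported choice probabilities. The short Python sketch below is illustrative only (it is not the study's analysis code) and simply re-derives the 17.4% and 24% values from the 58.7% and 62% absolute probabilities against the 50% chance baseline.

```python
# Illustrative arithmetic only (not the study's analysis code): relative
# increase of the observed pre-task choice probability over the 50% chance
# baseline, reproducing the 17.4% and 24% figures quoted above.
def relative_increase(observed: float, baseline: float = 0.5) -> float:
    """Fractional increase of an observed choice probability over baseline."""
    return (observed - baseline) / baseline

for study, p in (("Study 1", 0.587), ("Study 2", 0.62)):
    print(f"{study}: {p:.1%} absolute -> {relative_increase(p):+.1%} vs chance")
# Study 1: 58.7% absolute -> +17.4% vs chance
# Study 2: 62.0% absolute -> +24.0% vs chance
```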
In addition, as there is an ongoing controversy regarding the effect of rewards on intrinsic motivation (Bright and Penrod, 2009; Cameron et al., 2001; Cameron and Pierce, 1994), researchers may need to examine whether the early reward strategy contributes to the undermining effect (Deci and Ryan, 1985; Ryan and Deci, 2000). Future work may therefore include direct measures of intrinsic motivation to ascertain whether it is affected by changes in reward placement. Although the numbers were small, self-reported preference data show that some participants in Study 1 preferred the apps where a reward was delayed, while some participants in Study 2 preferred the app where no reward was given. As most apps gave the same rate of reward, varying only in its placement, some participants may have formed opinions on which app was best based on other factors. These could include the perceived difficulty in categorising the expense items for a particular app, or a general preference for an app's symbol or colour. This type of reasoning was sometimes reflected in some of the answers to the post-experiment questionnaire; however, as these dimensions were randomised across the apps within the experiment, it is not likely to have significantly affected our results. While preference for earlier rewarding suggests a model-free process, it could be argued that participants had a natural expectation that they needed to collect as many points as possible, and therefore may have been actively using MB processing to guide them towards selecting the reward-producing options. While this is a possible explanation as to why the no-reward placement was the least preferred option within the experiment, it would not explain the preference for pre-task placement. Both the pre- and post-task placements presented the same magnitude and rate of reward, presumably making them equivalent to an MB system. As other research indicates that the MB system does not discount rewards significantly based on a delay of seconds (Green and Myerson, 2004), MF processing remains the more likely explanation for the difference in preference seen in each study. It may also be the case that some participants were meta-cognitively aware of how the rewards affected their MF system, and thus employed some MB processing to help maximise the rewards with the strongest effect. While MF processing is believed to occur automatically and outside conscious awareness (Otto et al., 2013), there is evidence that collaboration occurs between systems (Balleine and Dezfouli, 2019). The MF system is also believed to signal its desires through impulses and emotions (Gardner, 2015; Lally and Gardner, 2013), making it possible that these feelings can be acknowledged by the MB system. Further work needs to be conducted to disentangle the roles of MB and MF processes in the reward-based decision making seen in this work. Although targeting the MF system can benefit the user by reducing system conflict, affecting automatic behaviours through the use of particular cues can be exploited through dark design patterns (Greenberg et al., 2014). Pop-ups exemplify this unethical 'Bait and Switch' behaviour by creating realistic-looking windows and dialog boxes (cues) whose function is changed to bring undesired results (such as opening malicious programs). This type of hijacking of automatic behaviours is also common in phishing scams, where malicious actors present a familiar interface (e.g.
PayPal website) in the hope that users will provide important personal information (Dhamija et al., 2006). There is a danger that unscrupulous designers may use the insights from these studies to steer users towards apps they are trying to avoid. However, there is evidence to suggest that the development of new MF behaviours needs at least some intention from the user to be effective (Gardner et al., 2020). As a result, we suggest that this technique only be used to reduce system conflict for behaviours the user already has some intention to perform. Currently, however, there is little discussion about the ethical considerations of targeting the MF system. As such, the potential harms from misappropriation of this research are still unknown. Conclusion Gamification is a common technique for incentivising users to engage with an application. Our work sought to identify how to most effectively design reward schedules to promote app selection when using gamified rewards. Consistent with the concept of temporal discounting (Ainslie, 1975), our findings suggest that placing rewards closer to the decision to open the application is more effective at promoting further app selection than placing rewards at the end of longer app interactions. Using such a reward structure may therefore be more effective in encouraging users to return to such applications. Rather than rewarding after longer interactions with the app, which is common in current gamified applications, designers should consider rewarding users early for deciding to interact with the app in the first place. The work also shows how an established theory from psychology and cognitive science, temporal discounting (Ainslie, 1975), is applicable to gamified rewards, opening the door for other theories to be extended to this context. In addition, it crystallises the importance of using supported theories as a foundation of any gamification research, which has focused too narrowly on finding the most effective gamified rewards (Seaborn and Fels, 2015), rather than on understanding the underlying mechanisms that make these rewards motivating in the first place. By researching this issue further, future researchers may be better able to optimise the value of gamified rewards by making intentional, theory-driven, and targeted changes to key characteristics of the incentive system. Data availability The datasets and analyses used for the current research are available in the Open Science Framework Repository at https://osf.io/46xu7/?view_only=ff40fd7b38954f2082db8dd3e77504dd Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Recent Neutrino Data and a Realistic Tribimaximal-like Neutrino Mixing Matrix In light of the recent neutrino experimental results from the Daya Bay and RENO Collaborations, we construct a realistic tribimaximal-like Pontecorvo-Maki-Nakagawa-Sakata (PMNS) leptonic mixing matrix. Motivated by the Qin-Ma (QM) parametrization for the quark mixing matrix, in which the CP-odd phase is approximately maximal, we propose a simple ansatz for the charged lepton mixing matrix, namely, that it has the QM-like parametrization, and assume the tribimaximal mixing (TBM) pattern for the neutrino mixing matrix. The deviation of the leptonic mixing matrix from the TBM one is then systematically studied. While the deviation of the solar and atmospheric neutrino mixing angles from the corresponding TBM values, i.e. $\sin^2\theta_{12}=1/3$ and $\sin^2\theta_{23}=1/2$, is fairly small, we find a nonvanishing reactor mixing angle given by $\sin\theta_{13}\approx \lambda/\sqrt{2}$ ($\lambda\approx 0.22$ being the Cabibbo angle). Specifically, we obtain $\theta_{13}\simeq 9.2^{\circ}$ and $\delta_{CP} \simeq \delta_{\rm QM} \simeq {\cal O}(90^{\circ})$. Furthermore, we show that the leptonic CP violation characterized by the Jarlskog invariant is $|J^{\ell}_{CP}|\simeq \lambda/6$, which could be tested in future experiments such as the upcoming long baseline neutrino oscillation ones. Introduction Recent analyses of the neutrino oscillation data [1,2] indicate that the tribimaximal mixing (TBM) pattern for three flavors of neutrinos [3] can be regarded as the zeroth-order leptonic mixing matrix, $$U_{\rm PMNS}^{(0)} = U_{\rm TB}\,P_\nu = \begin{pmatrix} \sqrt{2/3} & 1/\sqrt{3} & 0 \\ -1/\sqrt{6} & 1/\sqrt{3} & -1/\sqrt{2} \\ -1/\sqrt{6} & 1/\sqrt{3} & 1/\sqrt{2} \end{pmatrix} P_\nu \,, \qquad (1)$$ where $P_\nu = {\rm Diag}(e^{i\delta_1}, e^{i\delta_2}, 1)$ is a diagonal matrix of phases for the Majorana neutrino. However, properties related to the leptonic CP violation remain completely unknown yet. The large values of the solar and atmospheric mixing angles, which may be suggestive of a new flavor symmetry in the lepton sector, are entirely different from the quark mixing ones. The structure of both the charged lepton and neutrino mass matrices could be dictated by a flavor symmetry, for example, the $A_4$ discrete symmetry, which will tell us something about the charged fermion and neutrino mixings. If there exists such a flavor symmetry in nature, the TBM pattern for the neutrino mixing matrix may come out in a natural way. It is well known that there are no sizable effects on the observables from the renormalization group running for the hierarchical mass spectrum in the standard model [4]. Hence, corrections to the tribimaximal neutrino mixing from renormalization group effects running down from the seesaw scale are negligible in the standard model. The so-called PMNS (Pontecorvo-Maki-Nakagawa-Sakata) leptonic mixing matrix depends generally on the charged lepton sector, whose diagonalization leads to a charged lepton mixing matrix $V_L$ which should be combined with the neutrino mixing matrix $U_\nu$; that is, $$U_{\rm PMNS} = V_L^\dagger U_\nu \,. \qquad (2)$$ In the charged fermion (quarks and charged leptons) sector, there is a qualitative feature which distinguishes the neutrino sector from the charged fermion one. The mass spectrum of the charged leptons exhibits a similar hierarchical pattern as that of the down-type quarks, unlike that of the up-type quarks which shows a much stronger hierarchical pattern.
For example, in terms of the Cabibbo angle $\lambda \equiv \sin\theta_C \approx |V_{us}|$, the fermion mass ratios scale with characteristic powers of $\lambda$: the down-type quark and charged-lepton mass ratios follow similar patterns, while the up-type ratios involve higher powers of $\lambda$. This may lead to two implications: (i) the Cabibbo-Kobayashi-Maskawa (CKM) matrix [6] is mainly governed by the down-type quark mixing matrix, and (ii) the charged lepton mixing matrix is similar to that of the down-type quark one. Therefore, it is reasonable to assume that (i) $V_L^d$ ($V_L^u$) is associated with the diagonalization of the down-type (up-type) quark mass matrix with $V_L^u \approx \mathbf{1}$, where $\mathbf{1}$ is a $3\times 3$ unit matrix, and (ii) the charged lepton mixing matrix $V_L^\dagger$ has the same structure as the CKM matrix, $V_L^\dagger = V_{\rm CKM}$. Very recently, a non-vanishing mixing angle $\theta_{13}$ has been reported for the first time by the Daya Bay and RENO Collaborations [7,8], with the results given by $$\sin^2 2\theta_{13} = 0.092 \pm 0.016\,({\rm stat}) \pm 0.005\,({\rm syst}) \qquad (3)$$ and $$\sin^2 2\theta_{13} = 0.113 \pm 0.013\,({\rm stat}) \pm 0.019\,({\rm syst}), \qquad (4)$$ respectively. These results are in good agreement with the previous data from the T2K, MINOS and Double Chooz Collaborations [9]. The experimental results of the non-zero $\theta_{13}$ indicate that the TBM pattern for the neutrino mixing should be modified. Moreover, properties related to the leptonic CP violation remain completely unknown yet. In this work, we shall assume a neutrino mixing matrix in the TBM form, $U_\nu = U_{\rm TB}$, with $U_{\rm TB}$ as in Eq. (1) (Eq. (5)). We will neglect possible corrections to $U_{\rm TB}$ from higher dimensional operators and from renormalization group effects. Then we make a simple ansatz on the charged lepton mixing matrix $V_L$, namely, we assume that $V_L$ has the same structure as the Qin-Ma (QM) parametrization of the quark mixing matrix, which is a Wolfenstein-like parametrization and can be expanded in terms of the small parameter $\lambda$. Unlike the original Wolfenstein parametrization, the QM one has the advantage that its CP-odd phase $\delta$ is manifest in the parametrization and approximately maximal, i.e. $\delta \sim 90^\circ$. As we shall see below, this is crucial for a viable neutrino phenomenology. It turns out that the PMNS leptonic mixing matrix is identical to the TBM matrix plus some small corrections arising from the charged lepton mixing matrix $V_L$ expanded in terms of the small parameter $\lambda$. Schematically, $$U_{\rm PMNS} = U_{\rm TB} + \hbox{corrections in powers of }\lambda \,. \qquad (6)$$ Consequently, not only do the solar and atmospheric mixing angles given by the TBM remain valid, but also the reactor mixing angle and the Dirac phase can be deduced from the above consideration. The Letter is organized as follows. In Section 2 we discuss the parametrizations of the quark and lepton mixing matrices and pick the one suitable for our purpose in this work. After making an ansatz on the charged lepton mixing matrix, we study the low-energy neutrino phenomenology and emphasize the new results on the reactor neutrino mixing angle and the CP-odd phase in Section 3. Our conclusions are summarized in Section 4. Lepton and quark mixing In the weak eigenstate basis, the Lagrangian relevant to the lepton sector contains the charged-current interaction together with the charged-lepton and neutrino mass terms (Eq. (7)). (There are several papers [11] implementing $U_{\rm PMNS} = U_{\rm BM}$ + corrections in powers of $\lambda$, where the bimaximal matrix $U_{\rm BM}$ leads to $\theta^\nu_{12} = \theta^\nu_{23} = 45^\circ$.) When diagonalizing the neutrino and charged lepton mass matrices, we obtain the leptonic $3\times 3$ unitary mixing matrix $U_{\rm PMNS} = V_L^\dagger U_\nu$ as given in Eq. (2) from the charged current term in Eq. (7). In the standard parametrization of the leptonic mixing matrix $U_{\rm PMNS}$, it is expressed in terms of three mixing angles and three CP-odd phases (one $\delta$ for the Dirac neutrino and two for the Majorana neutrino) [10]: $$U_{\rm PMNS} = \begin{pmatrix} c_{12}c_{13} & s_{12}c_{13} & s_{13}e^{-i\delta} \\ -s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta} & c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta} & s_{23}c_{13} \\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta} & -c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta} & c_{23}c_{13} \end{pmatrix} P_\nu \,, \qquad (8)$$ where $s_{ij} \equiv \sin\theta_{ij}$ and $c_{ij} \equiv \cos\theta_{ij}$.
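As a quick numerical companion to the standard parametrization of Eq. (8), the following Python sketch (ours, not part of the paper) builds the PMNS matrix from a set of trial angles, checks its unitarity, and reads the angles back from the moduli of its elements via $\sin\theta_{13}=|U_{e3}|$, $\tan\theta_{12}=|U_{e2}/U_{e1}|$ and $\tan\theta_{23}=|U_{\mu 3}/U_{\tau 3}|$. The Majorana phase matrix $P_\nu$ is omitted, and the trial values are the paper's later predictions.

```python
import numpy as np

# Build U_PMNS in the standard parametrization (Majorana phases omitted)
# and recover the mixing angles from the moduli of its elements.
def pmns(th12, th23, th13, delta):
    s12, c12 = np.sin(th12), np.cos(th12)
    s23, c23 = np.sin(th23), np.cos(th23)
    s13, c13 = np.sin(th13), np.cos(th13)
    e = np.exp(1j * delta)
    return np.array([
        [c12 * c13, s12 * c13, s13 / e],
        [-s12 * c23 - c12 * s23 * s13 * e, c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e, c23 * c13],
    ])

# Trial values: the paper's predictions (theta13 ~ 9.2 deg, delta ~ 90 deg).
th12, th23, th13, delta = map(np.deg2rad, (36.0, 42.1, 9.2, 90.0))
U = pmns(th12, th23, th13, delta)

assert np.allclose(U @ U.conj().T, np.eye(3))              # unitarity check
print(np.degrees(np.arcsin(abs(U[0, 2]))))                 # theta_13 -> 9.2
print(np.degrees(np.arctan(abs(U[0, 1]) / abs(U[0, 0]))))  # theta_12 -> 36.0
print(np.degrees(np.arctan(abs(U[1, 2]) / abs(U[2, 2]))))  # theta_23 -> 42.1
```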
The current best-fit values of $\theta_{12}$, $\theta_{23}$ and $\theta_{13}$ at the $1\sigma$ ($3\sigma$) level, obtained from the global analysis by Fogli et al. [2], are given in [2] for both orderings, where NO and IO stand for the normal mass ordering and the inverted one, respectively. The analysis by Fogli et al. includes the updated data released at the Neutrino 2012 Conference. However, we know nothing at all about the three CP-violating phases $\delta$, $\delta_1$ and $\delta_2$. In analogy to the PMNS matrix, the CKM quark mixing matrix is a product of the two unitary matrices associated with the diagonalization of the up- and down-type quark mass matrices, and can be expressed in terms of four independent parameters (three mixing angles and one phase); their current best-fit values in the $1\sigma$ range can be found in [12]. A well-known simple parametrization of the CKM matrix introduced by Wolfenstein [15] is $$V_{\rm CKM} \simeq \begin{pmatrix} 1-\frac{\lambda^2}{2} & \lambda & A\lambda^3(\rho - i\eta) \\ -\lambda & 1-\frac{\lambda^2}{2} & A\lambda^2 \\ A\lambda^3(1-\rho-i\eta) & -A\lambda^2 & 1 \end{pmatrix} + {\cal O}(\lambda^4)\,.$$ Hence, the CKM matrix is a unit matrix plus corrections expanded in powers of $\lambda$. Recently, Qin and Ma (QM) [13] have advocated a new Wolfenstein-like parametrization of the quark mixing matrix based on the triminimal expansion of the CKM matrix (the phrase "triminimal" was first used in [14] to describe the deviation from the tribimaximal pattern in the lepton mixing). The parameters $A$, $\rho$ and $\eta$ in the Wolfenstein parametrization [15] are replaced by $f$, $h$ and $\delta$ in the Qin-Ma one. From the global fits to the quark mixing matrix given by [12], we obtain the numerical values of $\lambda$, $f$, $h$ and $\delta$ quoted in Eq. (13); in particular, the CP-odd phase is approximately maximal in the sense that $\sin\delta \approx 1$. Because of the freedom of the phase redefinition for the quark fields, we have shown in [16] that the QM parametrization is indeed equivalent to the Wolfenstein one (relations between the Wolfenstein parameters $(A, \rho, \eta)$ and the QM parameters $(f, h, \delta)$ are shown in [16]) and pointed out the relation between $\delta$ and the angles of the unitarity triangle, where the three angles $\alpha$, $\beta$ and $\gamma$ are defined by $$\alpha \equiv \arg\!\left(-\frac{V_{td}V_{tb}^*}{V_{ud}V_{ub}^*}\right), \quad \beta \equiv \arg\!\left(-\frac{V_{cd}V_{cb}^*}{V_{td}V_{tb}^*}\right), \quad \gamma \equiv \arg\!\left(-\frac{V_{ud}V_{ub}^*}{V_{cd}V_{cb}^*}\right),$$ and they satisfy the relation $\alpha + \beta + \gamma = \pi$. Since $\alpha = (91.0 \pm 3.9)^\circ$, $\beta = (21.76^{+0.92}_{-0.82})^\circ$ and $\gamma = (67.2 \pm 3.0)^\circ$ as inferred from the current data [12], the phase $\delta$ in the QM parametrization is thus very close to $90^\circ$. The rephasing-invariant Jarlskog parameter $J^q_{CP}$ in the quark sector is given by Eq. (16), implying that the phase $\delta$ in Eq. (12) is equal to the rephasing-invariant CP-violating phase. Numerically, it reads $J^q_{CP} \simeq 0.2\lambda^6$ using Eq. (13). For our later purpose, we shall consider a particular QM parametrization obtained by rephasing the $u$ and $d$ quark fields (Eq. (17)). As we will show in the next section, it has very interesting implications for the lepton sector. Low energy neutrino phenomenology Let us now discuss the low-energy neutrino phenomenology with the ansatz that the charged lepton mixing matrix $V_L^\dagger$ has an expression similar to the QM parametrization given by Eq. (17); we refer to this form of $V_L$ as Eq. (18). The parameters $\lambda$, $f$, $h$ and $\delta$ in the lepton sector are a priori not necessarily the same as those in the quark sector. Nevertheless, we shall assume that $\lambda$ is a small parameter and that $\delta$ is of order $90^\circ$. As we will see below, this matrix accounts for the small deviation of the PMNS matrix from the TBM pattern. We have emphasized in [17] that the phases of the off-diagonal matrix elements of $V_L$ play a key role for a viable neutrino phenomenology. In particular, we have found that the solar mixing angle $\theta_{12}$ depends strongly on the phase of the element $(V_L)_{12}$. This is the reason why we choose the particular form of Eq. (18). In the quark sector, there exist infinitely many possibilities of rephasing the quark fields in the CKM matrix, and physics should be independent of the phase redefinition.
The reader may wonder why we do not identify $V_L$ first with the original QM parametrization in Eq. (12) and then make a phase redefinition of the lepton fields to get CP-odd phases in the off-diagonal elements. The point is that an arbitrary phase matrix of the neutrino fields does not commute with the TBM matrix $U_{\rm TB}$. As a result, the charged lepton mixing matrix in Eq. (2) cannot be arbitrarily rephased from the neutrino fields. Therefore, in the lepton sector, the particular form of Eq. (18) for the parametrization of $V_L$, obtained by rephasing the $u$ and $d$ quark fields in Eq. (12) with a physical phase $\delta$, is the only choice for $V_L$ consistent with the current experimental data, especially for $\sin\theta_{12}$ (see Eq. (24) below). With the help of Eqs. (2), (5) and (18), the leptonic mixing matrix corrected by the contributions from $V_L$ can be written, up to order $\lambda^3$, as Eq. (19). By rephasing the lepton and neutrino fields $e \to e\,e^{i\alpha_1}$, $\mu \to \mu\,e^{i\beta_1}$, $\tau \to \tau\,e^{i\beta_2}$ and $\nu_2 \to \nu_2\,e^{i(\alpha_1-\alpha_2)}$, the PMNS matrix is recast into the form of Eq. (20), where $U_{\alpha j}$ is an element of the PMNS matrix with $\alpha = e, \mu, \tau$ corresponding to the lepton flavors and $j = 1, 2, 3$ to the light neutrino mass eigenstates, and the phases appearing in Eq. (20) are defined accordingly; up to order $\lambda^3$, the elements of $U_{\rm PMNS}$ follow from this expansion. From Eq. (20), the solar neutrino mixing angle can be displayed as $$\sin^2\theta_{12} \simeq \frac{1}{3}\left(1 + 2\lambda\cos\delta + \frac{\lambda^2}{2}\right), \qquad (24)$$ which indicates, interestingly enough, a tiny deviation from $\sin^2\theta_{12} = 1/3$ when $\cos\delta$ approaches zero. Since it is the first column of $V_L$ that makes the major contribution to $\sin^2\theta_{12}$, this explains why we need a phase of order $90^\circ$ for the element $(V_L)_{21}$. (In [17] we have considered three different scenarios for the matrix $V_L$. We obtained the constraint $0.17 \lesssim \cos\delta \lesssim 0.64$ in two of the scenarios in order to satisfy the quark-lepton complementarity (QLC) relations $\theta_{12} + \theta^q_{12} = \pi/4$ and $\theta_{23} + \theta^q_{23} = \pi/4$. In this work, we will not impose these QLC relations from the outset.) Likewise, the atmospheric neutrino mixing angle $\theta_{23}$ comes out as $$\sin^2\theta_{23} \simeq \frac{1}{2}\left[1 + {\cal O}(\lambda^2)\right],$$ which shows a very small deviation from the TBM value $\sin^2\theta_{23} = 1/2$. The reactor mixing angle $\theta_{13}$ can be obtained as $$\sin\theta_{13} \simeq \frac{\lambda}{\sqrt{2}}\left[1 + {\cal O}(\lambda^2\cos\delta)\right].$$ Thus, we have a non-vanishing, sizable $\theta_{13}$. Leptonic CP violation at low energies could be detected through neutrino oscillations that are sensitive to the Dirac CP phase, but insensitive to the Majorana phases. The Jarlskog invariant for the lepton sector has the expression given in Eq. (27). This shows that, up to order $\lambda^3$, the rephasing-invariant Dirac CP-violating phase $\delta_{CP}$ equals the phase $\delta$ introduced in Eq. (18), i.e., $\delta_{CP} \simeq \delta$. Also, the invariant is approximated as $J^\ell_{CP} \simeq -\lambda/6$ for $\sin\delta \approx 1$. Comparing Eqs. (16) and (27), this leads to $|J^\ell_{CP}| \simeq \lambda^{-5} J^q_{CP} \gg J^q_{CP}$ from Eq. (13). Equivalently, by using the conventional parametrization of the PMNS matrix [10] and Eq. (8), one can deduce an expression for the Dirac CP phase $\delta_{CP}$ (Eq. (28)). For the purpose of illustration, we employ the values of the parameters $\lambda$, $f$, $h$ and $\delta$ given in the quark sector (see Eq. (13)). Then we have the predictions $\sin^2\theta_{12} = 0.346$ and $\sin^2\theta_{23} = 0.450$, and the corresponding mixing angles are $\theta_{12} = 36.0^\circ$, $\theta_{23} = 42.1^\circ$ and $\theta_{13} = 9.2^\circ$, respectively. Hence, our prediction for $\theta_{13}$ is consistent with the recent measurement of the reactor neutrino mixing angle. Fig. 1 shows the behavior of the mixing parameters as a function of $\delta$ for $\lambda$, $f$, $h$ fixed at the central values given by Eq. (13).
The left plot of Fig. 1 represents the atmospheric ($\sin^2\theta_{23}$), solar ($\sin^2\theta_{12}$) and reactor ($\sin\theta_{13}$) mixing angles as a function of the phase $\delta$, where the horizontal dashed lines denote the TBM values $1/2$ and $1/3$ for $\sin^2\theta_{23}$ and $\sin^2\theta_{12}$, respectively. As can be seen in this plot, the deviation of the mixing angles $\theta_{12}$ and $\theta_{23}$ from the TBM pattern in the case of $\delta = \delta_{\rm QM}$ is fairly small (Eq. (29)); this is why in the right plot of Fig. 1 we focus on the range of $\delta$ in the vicinity of $\delta_{\rm QM}$. Conclusion The recent neutrino oscillation data from the Daya Bay Collaboration [7] disfavor the TBM pattern at the $5.2\sigma$ level, implying a non-vanishing $\theta_{13}$ and giving a relatively large $\sin^2 2\theta_{13} = 0.097$ (best-fit value), corresponding to $\theta_{13} = 8.8^\circ$. On the theoretical ground, we have proposed a simple ansatz for the charged lepton mixing matrix, namely, that it has the QM-like parametrization in which the CP-odd phase is approximately maximal. Then we have proceeded to study the deviation of the PMNS matrix from the TBM one arising from the small corrections due to the particular charged lepton mixing matrix we have proposed. We have obtained the analytic results for the mixing angles expanded in powers of $\lambda$: the solar mixing angle $\sin^2\theta_{12} \simeq \frac{1}{3}\left(1 + 2\lambda\cos\delta + \frac{\lambda^2}{2}\right)$, the atmospheric mixing angle $\sin^2\theta_{23} \simeq \frac{1}{2}\left[1 + {\cal O}(\lambda^2)\right]$, the reactor mixing angle $\sin\theta_{13} = \frac{\lambda}{\sqrt{2}}\left[1 + {\cal O}(\lambda^2\cos\delta)\right]$, and the Dirac CP-odd phase $\delta_{CP} \simeq \delta$. Therefore, while the deviation of the solar and atmospheric mixing angles from the TBM values is fairly small, we have found a non-vanishing reactor mixing angle $\theta_{13} \simeq 9.2^\circ$ and a very large Dirac phase $\delta_{CP} \simeq \delta_{\rm QM} \simeq {\cal O}(90^\circ)$. Furthermore, we have shown that the leptonic CP violation characterized by the Jarlskog invariant is $|J^\ell_{CP}| \simeq \lambda/6$, which could be tested in future experiments such as the upcoming long baseline neutrino oscillation ones.
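Since the conclusion quotes several closed-form numbers, they are easy to verify. The minimal Python sketch below is ours, not part of the paper: it evaluates the $\lambda$-expansion formulas above with the illustrative inputs $\lambda = 0.2253$ and an exactly maximal phase $\delta = 90^\circ$ (the paper's $\sin^2\theta_{12} = 0.346$ uses the fitted $\delta_{\rm QM}$ instead of exactly $90^\circ$, so the value here differs at the per-cent level).

```python
import numpy as np

# Evaluate the lambda-expansion formulas quoted in the conclusion,
# assuming lambda = 0.2253 and an exactly maximal phase delta = 90 deg.
lam = 0.2253                      # Cabibbo angle, sin(theta_C)
delta = np.deg2rad(90.0)          # CP-odd phase of the QM-like ansatz

sin2_th12 = (1 + 2 * lam * np.cos(delta) + lam**2 / 2) / 3
sin_th13 = lam / np.sqrt(2)       # leading order; O(lambda^2 cos delta) dropped

th12 = np.degrees(np.arcsin(np.sqrt(sin2_th12)))
th13 = np.degrees(np.arcsin(sin_th13))

J_lep = -(lam / 6) * np.sin(delta)   # leptonic Jarlskog invariant, |J| ~ lambda/6
J_quark = 0.2 * lam**6               # quark-sector value quoted in the text

print(f"sin^2(theta12) = {sin2_th12:.3f}, theta12 = {th12:.1f} deg")  # 0.342, 35.8
print(f"theta13 = {th13:.1f} deg")                                    # 9.2
print(f"|J_lep| = {abs(J_lep):.4f} vs J_quark = {J_quark:.2e}")       # 0.0376 vs 2.62e-05
```

The printed values confirm the text: the reactor angle comes out at about $9.2^\circ$, and the leptonic Jarlskog invariant is roughly three orders of magnitude larger than its quark-sector counterpart.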
Amphiphilic Chitosan Bearing Double Palmitoyl Chains and Quaternary Ammonium Moieties as a Nanocarrier for Plasmid DNA Amphiphilic chitosan, bPalm-CS-HTAP, having N-(2-((2,3-bis(palmitoyloxy)propyl)amino)-2-oxoethyl) (bPalm) groups as double hydrophobic tails and O-[(2-hydroxyl-3-trimethylammonium)] propyl (HTAP) groups as hydrophilic heads, was synthesized and evaluated for its self-assembly properties and potential as a gene carrier. The degree of bis-palmitoyl group substitution (DSbPalm) and the degree of quaternization (DQ) were approximately 2 and 56%, respectively. bPalm-CS-HTAP was found to assemble into nanosized spherical particles with a hydrodynamic diameter (DH) of 265.5 ± 7.40 nm (PDI = 0.5) and a surface charge potential of 40.1 ± 0.04 mV. bPalm-CS-HTAP condensed the plasmid pVAX1.CoV2RBDme completely at a bPalm-CS-HTAP:pDNA ratio of 2:1. The self-assembled bPalm-CS-HTAP/pDNA complexes could enter HEK 293A and CHO cells and enabled gene expression with negligible cytotoxicity compared to commercial PEI (20 kDa). These results suggest that bPalm-CS-HTAP can be used as a promising nonviral gene carrier. INTRODUCTION Gene therapy and vaccination are methods that introduce exogenous genes in the form of plasmid DNA (pDNA) and mRNA into an individual's body; a target protein is thus produced and functions either to replace a dysfunctional protein or to stimulate an immune response. 1−5 Due to the instability of DNA and RNA in serum media and the unfavorable translocation of naked nucleic acids through the cell membrane, carrier systems have been developed to bind nucleic acids, providing a protective shell that shields the gene from serum nucleases and transports nucleic acid cargos into target cells. This process is known as transfection. 6,7 There are two major types of delivery agents: viral vectors and nonviral vectors. Although the former have been proven to possess high transfection efficacy and have yielded several FDA-approved drugs, 1 there have been reports of several undesirable side effects, including uncontrolled proliferation of transduced cells, induction of vector-specific immune responses, and random genomic integration. 8 The latter, on the other hand, exhibit an excellent safety profile and chemical tunability; however, they are less effective at delivering genes to target cells compared with viral vectors. 9 Thus, developing gene carriers with increased transfection efficiency and minimized cytotoxicity remains a challenge for many research groups. Cationic lipids and cationic polymers are among the popular platforms for gene carriers. 10,11 Delivery of nucleic acids using lipid nanoparticles (LNPs) has reached clinical trials. Although LNPs exhibit very low immune activation, 12 they provide only short-term genetic activity of the transported nucleic acids in cells, thus requiring continuous infusion or frequently repeated administration. 3,4,13 Cationic polymer-based delivery systems, on the other hand, provide better sustained release due to their high molecular weight, hence increasing the bioavailability of nucleic acid cargos inside cells. 14 They have been widely reported to inherently offer unlimited gene packaging capacities through electrostatic interaction and to allow versatile molecular modifications. Polyethyleneimine (PEI) is one of the popular cationic polymers that possess excellent gene complexation capability and offer high transfection efficiency; however, it was reported to exhibit high toxicity in clinical trial applications.
7 The abundant positive charges of PEI can interact nonspecifically with negatively charged biological components, resulting in large aggregates that undergo phagocytosis and thus induce immune responses. 15 Several strategies have been exploited to address this dilemma of polycations, such as poly(ethylene glycol) (PEG) modification of cationic polymers 16,17 and charge-reversal copolymers. 15,18,19 These strategies mask the cations from absorbing negatively charged components in the cell medium before reaching the target tissues, improving the efficiency of gene delivery. Nevertheless, no polycation-based gene therapy has been FDA-approved. 20 Therefore, further development of cationic polymers is necessary to achieve safe and efficient gene administration. A natural polymer such as chitosan is a ubiquitous component of biological systems that can be harnessed to improve its function as a gene carrier. Chitosan consists of randomly distributed β-(1→4)-linked D-glucosamine and N-acetyl-D-glucosamine units, and it can be extracted from chitin present in the exoskeleton of most crustaceans via partial alkaline hydrolysis of acetyl groups. Chitosan has properties complementary to the requirements of gene carriers, as it presents low toxicity (an LD50 of 16 g/kg, compared with 3 g/kg for NaCl), low immunogenicity, and good biocompatibility. 4 However, its low solubility in various solvents limits its application. 21 Many investigations have been performed to tackle these limitations. For example, hydrophilic modification with glycol, 22,23 PEG, 6,24 quaternized entities, 25−27 cyanoguanidine, 28 amino acids, 29 and polyethyleneimine 8,30−32 promotes the aqueous solubility of chitosan. In addition, lipid modifications with N-fatty acids, 33−38 deoxycholic acid, 39 alkyl groups, 40 and Brij-S20 41 drive molecular self-assembly into core−shell structures in aqueous environments and promote the attachment of the materials to the cell membrane. Some researchers developed dual modifications, such as hexanoic acid and PEG, 42 5β-cholanic acid and PEG, 43,44 and dodecanal and carboxymethyl chitosan, 45 to improve the stability of the modified chitosan in the physiological environment and enhance gene transfection efficiency. 46,47 Even though cationic lipids having shorter hydrophobic tails (C12−C14) were reported to produce higher transfection activity than those having longer tails (C15−C18), 48,49 amphiphilic dendrimers incorporating longer hydrophobic tails (C18) were reported to exhibit better transfection efficiency than those having shorter tails (C13−C15). Therefore, the influence of the hydrophobic chain length on the transfection efficacy of gene carriers may also depend on their molecular structures and electronic properties, which affect the physical characteristics of the carriers, such as the self-assembly properties, including the size, surface charge, morphology, and phase transition temperature. Moving to the chitosan platform, Sharma and Singh discovered that, among several fatty acid-modified chitosans with various alkyl chain lengths (C6−C20), palmitic acid (C16)-modified chitosan expressed the best plasmid pGFP transfection efficacy toward HEK 293 cells. 38 To the best of our knowledge, a double installation of hydrophobic fatty acid chains onto the chitosan backbone has never been reported.
Most of the LNPs successfully used for human vaccination, such as the Pfizer-BioNTech and Moderna Covid-19 vaccines, were composed of multiple branched lipid components such as 1,2-distearoyl-sn-glycero-3-phosphocholine, 2-[(polyethylene glycol)-2000]-N,N-ditetradecylacetamide, and ((4-hydroxybutyl)azanediyl)bis(hexane-6,1-diyl)bis(2-hexyldecanoate). 50 Indeed, 2-alkyl-branched lipid chains can induce the formation of lipid phases 51 that enable membrane fusion and endosomal escape, a crucial factor for successful gene delivery. 52 To exploit the structural advantage of palmitoyl groups in a double-tail lipid form that promotes cell fusion 53,54 and the inherent polymeric characteristics of chitosan that offer a high molecular weight and biocompatibility, we herein propose a novel amphiphilic chitosan, bPalm-CS-HTAP, containing N-(2-((2,3-bis(palmitoyloxy)propyl)amino)-2-oxoethyl) (bPalm) groups as double hydrophobic tails and O-[(2-hydroxyl-3-trimethylammonium)] propyl (HTAP) groups as hydrophilic heads (Figure 1), as a new hybrid of high-MW biocompatible chitosan and a fusogenic phospholipid-like entity. This newly designed bPalm-CS-HTAP was investigated for its self-assembly as well as its biological properties in order to determine its potential use in gene delivery applications. The double palmitoyl chain is biocompatible and should provide stability to the self-assembled structure. HTAP, as the hydrophilic moiety with positive charges, should provide binding sites for negatively charged genetic materials, yielding polyplex nanostructures. 32,55−58 bPalm-CS-HTAP particles were characterized by several techniques, including Fourier transform infrared spectroscopy (FT-IR), nuclear magnetic resonance spectroscopy (NMR), scanning electron microscopy (SEM), and dynamic light scattering (DLS). In addition, the ability of bPalm-CS-HTAP to interact with genetic materials was investigated by electrophoretic mobility shift assay (EMSA). The cytotoxicity and transfection efficiency of bPalm-CS-HTAP were tested against two cell lines, HEK 293A and CHO, using trypan blue staining and the CCK-8 assay for cytotoxicity assessment, and using the plasmid pVAX1 harboring a gene encoding a V5-tagged chimeric protein designed from SARS-CoV-2, pVAX1.CoV2RBDme (pDNA), as the pDNA model. RESULTS AND DISCUSSION 2.1. Synthesis and Chemical Characterization of bPalm-CS-HTAP. Chemical modification of chitosan is feasible thanks to the reactive amino (−NH2) and hydroxyl (−OH) groups at positions C2 and C6, respectively (Figure 1). Because the amino groups are more nucleophilic, we first reacted chitosan with the bromoacetamido unit of compound 4 via a nucleophilic substitution reaction to attach N-(2-((2,3-bis(palmitoyloxy)propyl)amino)-2-oxoethyl) (bPalm) groups at the C2 position of chitosan, yielding bPalm-CS (compound 5, Scheme 1). As characterized by 13C solid-state NMR, bPalm-CS shows, in comparison with chitosan, the emergence of aliphatic and carbonyl (C=O) signals at 10−50 and 170−180 ppm, respectively (Figure 2B), confirming the successful conjugation of bPalm groups onto the chitosan backbone. The degree of bPalm substitution (DSbPalm) was 2.3%, as calculated from the ratio of the peak integration of the carbonyl carbons belonging to the bPalm groups at 170−180 ppm to the peak integration of the anomeric carbon of the chitosan backbone at 107 ppm (Figure S15, SI and eq 1).
The FT-IR spectrum of bPalm-CS (Figure 3C), in comparison with that of chitosan (Figure 3A), shows the emergence of new peaks at 1740 and 1595 cm−1 and a relative increase in the absorption at 2850−2950 cm−1. The signal at 1740 cm−1 can be attributed to the carbonyl stretching of the bis-palmitoyl esters, while the signals at 1595 and 2850−2950 cm−1 can be assigned to the N−H bending of a secondary amine and the C−H stretching of the aliphatic bis-palmitoyl chains in the bPalm units, respectively. These functional group assignments align with the FT-IR spectrum observed for the precursor of the bPalm unit (Figure 3B). This evidence confirms the presence of bPalm units in bPalm-CS. The reaction of bPalm-CS with glycidyltrimethylammonium chloride (GTMAC) via nucleophilic ring-opening of the epoxide gave bPalm-CS-HTAP. Due to the low DSbPalm at C2-NH2, GTMAC could react with both hydroxyl (C6-OH) and unreacted amino (C2-NH2) groups, allowing HTAP units to attach at both the C2 and C6 positions of chitosan (Figure 1). In the 1H NMR analysis (Figure 4), minor contributions to the glucosamine proton signals were omitted, owing to the small DSbPalm of 2.3% and the small degree of acetylation of chitosan (DA of 2.3% from 1H NMR, Figure S16, SI), so as to further simplify the calculation. The degree of quaternization (DQ) of approximately 56.2% was calculated from the ratio of the peak integration of Hb of the HTAP units at 4.24 ppm to the peak integration of the glucosamine protons (H1−H6) at 2.48−4.47 ppm (Figure S17, SI and eq 3). Because the signal overlap of Hb with the glucosamine protons could affect the integrated proton values, conductometric titration of bPalm-CS-HTAP with silver nitrate (AgNO3) was also performed to further confirm the DQ. The result from conductometric titration showed that bPalm-CS-HTAP gave a DQ of 41.4% (Figure S18, SI), which is close to the value obtained from the 1H NMR analysis. This supports the conclusion that the number of quaternary ammonium groups is about half the number of monomer units in the polymer chain. In addition, we also analyzed the DSbPalm from the 1H NMR spectrum of bPalm-CS-HTAP. In line with the result obtained from the 13C solid-state NMR analysis of bPalm-CS, a DSbPalm of about 1.1% was calculated based on the relative peak integration of H9′−H21′ and H25′−H37′ of the bPalm units (see also the corresponding FT-IR signatures in Figure 3D). 60,61 This evidence additionally confirmed the successful installation of HTAP units on the chitosan backbone. 2.2. Characterization of Self-Assembled bPalm-CS-HTAP Particles. As evidenced by DLS, the amphiphilic bPalm-CS-HTAP was found to assemble into nanosized particles in water with a hydrodynamic diameter (DH) of 265.5 ± 7.4 nm (PDI = 0.50 ± 0.04) and a zeta potential of 40.1 ± 0.2 mV (Figure 5A,B). SEM analysis revealed that bPalm-CS-HTAP arranged itself into spherical particles with an average size of 101.25 ± 9.94 nm (Figure 5C,D). The smaller size of bPalm-CS-HTAP observed by SEM compared with DLS can be explained by the fact that SEM analysis is performed under nonhydrated conditions, in which particles are dried and well separated, whereas the DLS measurement is conducted under hydrated conditions, so that the hydration layer and the aggregation of particles have to be taken into account in the average size result. 62,63 The positively charged surface of the particles suggests that bPalm-CS-HTAP arranges the HTAP moieties on the outside of the assembled structure, providing binding sites for nucleic acid cargos.
The nanosized dimensions (100−250 nm) and the positive zeta potential make bPalm-CS-HTAP a desirable material for effective gene carriers. 64 For the cytotoxicity evaluation, HEK 293A cells were transfected with bPalm-CS-HTAP/pDNA complexes prepared at bPalm-CS-HTAP:pDNA weight ratios of 0.1, 0.3, 0.5, 1, 3, 5, 10, 20, 30, and 40, which resulted in a total concentration of 2 ng/μL pDNA and 0.2−80 ng/μL bPalm-CS-HTAP (Figure 7A). Live and dead cells were then determined by staining with trypan blue at 48 h post-transfection and were used for the calculation of the percentage of dead cells. While the negative control (pDNA without a transfection reagent) exhibited 5.3 ± 1.8% cell death, low amounts of bPalm-CS-HTAP, at concentrations from 0.2 to 60 ng/μL, resulted in cell death lower than 11% (5.3 ± 2.0 to 10.4 ± 2.9%). Increasing the concentration of bPalm-CS-HTAP to 80 ng/μL caused more cell death, up to 16.8 ± 4.7%; however, cells transfected with 2 ng/μL PEI were found to give the highest cell death of 18.8 ± 4.5%. This suggested that the transfection of HEK 293A cells with bPalm-CS-HTAP at concentrations of 0.1−80 ng/μL and PEI at a concentration of 2 ng/μL in 12-well plates maintains a decent cell viability of above about 75%. To evaluate the toxicity of bPalm-CS-HTAP in the presence and absence of pDNA, we performed the CCK-8 assay of bPalm-CS-HTAP against HEK 293A cells over a broader range of concentrations. The assay was performed using cells grown in 96-well plates and various concentrations of bPalm-CS-HTAP of 2, 10, 20, 40, 60, 80, 100, 160, and 200 ng/μL, either in the presence or absence of 2 ng/μL pDNA (Figure 7B). The cell viability was determined using the CCK-8 reagent at 48 h post-transfection. The result showed that bPalm-CS-HTAP enhanced the cell viability (123.6 ± 4.8 to 132.7 ± 2.2% cell viability) at low concentrations (2−10 ng/μL) and decreased the cell viability (74.6 ± 4.6 to 3.0 ± 1.0% cell viability) at higher concentrations (>10 ng/μL) in a concentration-dependent manner. PEI (20 kDa, 20 ng/μL) exhibited a higher toxicity than bPalm-CS-HTAP at the same concentration. Interestingly, the toxicity of bPalm-CS-HTAP decreased when it formed a complex with pDNA, with cell viability recovering to up to 88.8%, suggesting that pDNA can neutralize the toxicity of bPalm-CS-HTAP. By interacting with the negative charges of pDNA, the excess cationic surface charge of the bPalm-CS-HTAP particles can be nullified. Hence, the polyplexes are not expected to bind efficiently, via electrostatics, to the cell matrix and the cell surface. 67,68 Like the trypan blue staining method, the CCK-8 assay also showed a higher toxicity of bPalm-CS-HTAP/pDNA complexes as the concentration increased. However, the CCK-8 assay indicated a higher toxicity of bPalm-CS-HTAP at 40−80 ng/μL (25.2 ± 8.5 to 10.4 ± 1.5% cell viability) than the trypan blue method (9.4 ± 2.2 to 16.8 ± 4.7% dead cells). Since the cytotoxicity measurements of the trypan blue method and the CCK-8 assay are based on cell membrane permeability and enzyme activity, respectively, 69,70 the trypan blue method is not sensitive to injured cells that have lost cell functions but still maintain their membrane integrity. 71 Therefore, the higher toxicity of bPalm-CS-HTAP observed with the CCK-8 assay, in comparison with the trypan blue method, suggests that bPalm-CS-HTAP could impair cell functions while keeping the cells alive at high concentrations (>40 ng/μL).
The evidence of cell growth enhancement in the presence of bPalm-CS-HTAP at low carrier concentrations could be explained by the inherent mucoadhesive property of chitosan, leading to cell attachment and proliferation to a greater extent. 72,73 However, when the concentration of bPalm-CS-HTAP increased, the permanent positive charge of the quaternary ammonium groups could result in excessive electrostatic interactions with the plasma membrane of the cells, causing aggregation of bPalm-CS-HTAP on the cell surfaces, thus impairing cell membrane function and leading to cell death. 29,31,60,74,75 2.5. In Vitro Gene Transfection. The ability of bPalm-CS-HTAP to assist the transfection of pDNA into mammalian cells was evaluated. Transfection was performed in HEK 293A and CHO cells with the plasmid pVAX1.CoV2RBDme. The transfection efficiency was determined from the level of the V5-tagged CoV2RBDme protein expressed from the pDNA upon cell transfection, using an immunofluorescence technique. The amphiphilicity of bPalm-CS-HTAP allows both electrostatic and hydrophobic interactions to give pDNA condensation and self-assembled nanoparticles, improving the cellular uptake and the stability of pDNA against nuclease degradation. 37,38,74 After incubation with bPalm-CS-HTAP/pDNA complexes at weight ratios of 3:1, 5:1, 10:1, 20:1, and 40:1 and a pDNA concentration of 2 ng/μL for 48 h, cells were stained with a rabbit anti-V5 primary antibody followed by a goat anti-rabbit IgG secondary antibody conjugated with Alexa Fluor 488 to determine the expression of the CoV2RBDme protein. Cells treated with naked pDNA (negative control) showed no fluorescence, while cells treated with PEI/pDNA (positive control) yielded the highest intensity of green fluorescence among all transfection conditions (Figure 8). These results indicate that there was no pDNA uptake or successful pDNA expression without carriers, while PEI efficiently mediated pDNA uptake, as seen from the high level of protein expression. Cells treated with bPalm-CS-HTAP/pDNA complexes showed fluorescence of the CoV2RBDme protein at the minimum bPalm-CS-HTAP:pDNA ratio of 3:1; however, we did not observe an increased green fluorescence signal when the ratio of bPalm-CS-HTAP:pDNA was elevated up to 40:1 (Figure 8). Compared with the fluorescence intensity in the cells treated with the PEI/pDNA polyplex, the fluorescence intensity in the cells treated with bPalm-CS-HTAP/pDNA polyplexes was considerably lower, even at elevated bPalm-CS-HTAP concentrations, at which delivery of a greater amount of pDNA into the cells would be expected. We also performed the transfection of bPalm-CS-HTAP/pDNA in the CHO cell line, and a minimum bPalm-CS-HTAP:pDNA ratio of 3:1 generating expression of the CoV2RBDme protein was likewise observed. Based on the green fluorescence signal, the transfection efficiency in CHO cells was slightly lower than that in HEK 293A cells under all conditions and was not improved by an increased bPalm-CS-HTAP:pDNA ratio (Figure S19, SI). We hypothesized that the low transfection efficiency of bPalm-CS-HTAP could be due to the tight binding between bPalm-CS-HTAP and pDNA via electrostatic interactions, as mentioned above. This strong interaction between bPalm-CS-HTAP and pDNA resulted in an insufficient release of pDNA from the bPalm-CS-HTAP/pDNA complexes, thus impeding the expression of pDNA.
In addition, the toxicity of bPalm-CS-HTAP at increased concentrations (Figure 7B) could disrupt various cellular functions, including protein synthesis, leading to poor gene expression at higher bPalm-CS-HTAP:pDNA ratios. 74 Xiao et al. reported that a quaternized chitosan with a DQ of 43.7%, similar to that of bPalm-CS-HTAP (DQ of 56%), gave an approximately 10^5-fold lower transfection efficiency for the plasmid pGL3 in HeLa cells compared with EndoFectin-Lenti, a commercial transfecting agent. Upon reduction of the DQ to 12.4%, the transfection efficiency was improved by about 1000-fold. 74 These pieces of evidence suggest that bPalm-CS-HTAP could suffer from having too much positive charge, which leads to modest gene expression. The presence of hydrophobic bPalm groups at a DSbPalm of 2% does not seem to be enough to rescue the transfection performance of the quaternized chitosan carrier. It was also reported that the hydrophobic part of amphiphiles could weaken the electrostatic interaction between cationic carriers and their nucleic acid cargos. 76 To further improve the transfection efficiency of bPalm-CS-HTAP, the DQ and the DSbPalm should be modulated to an optimum point, with a lower DQ and a higher DSbPalm, such that the amphiphile can still retain its aggregation into nanosized particles with a decreased surface charge density while maintaining its aqueous solubility. [Figure 8 caption: PEI/pDNA (weight ratio of 1:1) and naked pDNA were included as a positive and a negative control, respectively. Cells are indicated by the bright-field (BF) image. The nuclei of the cells were stained with DAPI, and the expressed CoV2RBDme protein was detected using a rabbit anti-V5 antibody followed by goat anti-rabbit IgG conjugated with Alexa Fluor 488. Fluorescence images were observed using a fluorescence microscope.] The observed green fluorescence in the cell samples treated with bPalm-CS-HTAP/pDNA complexes indicates that bPalm-CS-HTAP has the potential to be further developed as a pDNA carrier. CONCLUSIONS The amphiphilic chitosan bPalm-CS-HTAP was successfully synthesized by attaching the N-(2-((2,3-bis(palmitoyloxy)propyl)amino)-2-oxoethyl) (bPalm) group and the O-[(2-hydroxyl-3-trimethylammonium)] propyl (HTAP) group to the chitosan backbone. The resulting bPalm-CS-HTAP, with a DSbPalm of 2% and a DQ of 56%, gave well-defined, nanosized spherical particles with a hydrodynamic diameter (DH) of 265.5 ± 7.40 nm (PDI = 0.50) and a surface charge potential of +40.1 ± 0.20 mV. The positively charged surface of bPalm-CS-HTAP enabled complexation with the plasmid DNA, as observed in the EMSA. While bPalm-CS-HTAP/pDNA complexes maintained a cell viability of around 80% or above at 0.1−80 ng/μL by the trypan blue staining method, a higher toxicity was seen in the CCK-8 assay above 40 ng/μL, owing to the disruption of cell functions. The in vitro transfection of bPalm-CS-HTAP/pDNA into HEK 293A and CHO cells showed similarly modest protein expression levels with negligible cytotoxicity compared with PEI at the same concentration. We hypothesize that the low transfection efficiency of bPalm-CS-HTAP is due to the high positive surface charge of the bPalm-CS-HTAP particles, which impeded the release of pDNA from the polyplexes. Although bPalm-CS-HTAP yields a lower transfection efficiency in comparison with PEI, it has the merit of much lower cytotoxicity, and its transfection performance can likely be improved by balancing the hydrophobicity and hydrophilicity.
The results obtained from this work demonstrate that bPalm-CS-HTAP has the potential to be used as a vector for gene delivery. A further investigation into the effects of DSbPalm and DQ could be performed to improve the transfection efficacy and reduce the toxicity of the material. MATERIALS AND METHODS 4.1. Materials. Chitosan (CS, Mw of 100 kDa, degree of deacetylation of 95%) was purchased from Seafresh Chitosan Lab Co., Ltd. (Thailand). All reagents were received from Sigma-Aldrich (USA) and Tokyo Chemical Industry (TCI, Japan). Analytical-grade solvents were purchased from Sigma-Aldrich, Merck, Honeywell, QReC, Fisher Chemical, and RCI Labscan (Thailand). Commercial-grade solvents were purchased from Biotech and Scientific Co., Ltd. (Thailand) and RCI Labscan (Thailand). Dichloromethane was dried by distillation before use. The moisture-sensitive reactions were performed under a nitrogen atmosphere (Praxair, Thailand). Thin-layer chromatography (TLC) was used to monitor the reaction progress, with visualization under UV light (365 nm). Commercial-grade solvents such as dichloromethane, ethyl acetate, and hexane were used for extraction and column chromatography without additional purification. All aqueous solutions of samples were prepared using type I (ultrapure) water. Phosphate-buffered saline (PBS, Sigma) was diluted in distilled water to 1× before use. ADT was a mixture of adenosine (0.5% w/v), deoxyadenosine (0.5% w/v), and thymidine (0.5% w/v), which were purchased from Sigma, USA. Dulbecco's modified Eagle's medium (DMEM) was purchased from HIMEDIA, India. Fetal bovine serum (FBS) was purchased from Hyclone, New Zealand. D-Glucose was purchased from Gibco, USA. L-Glutamine was purchased from HIMEDIA, India. The HEK 293A cell line was purchased from Invitrogen, USA. CHO cells were purchased from ATCC, USA. pVAX1.CoV2RBDme DNA, encoding the V5-tagged protein, was generated in the Animal Cell Culture lab (KMUTT, Thailand). A Cell Counting Kit-8 (CCK-8) was purchased from Dojindo, Japan. 4.2. Synthesis of Chitosan Having N-(2-((2,3-Bis(palmitoyloxy)propyl)amino)-2-oxoethyl) (bPalm) Groups (bPalm-CS). Chitosan (3.0 g) was dissolved in anhydrous N,N-dimethylformamide (60 mL) in a round-bottom flask at 50 °C under a nitrogen atmosphere for three days (500 mg of dissolved chitosan corresponds to 3.106 mmol of −NH2). Following Step I in Scheme 1, compound 4 (2.139 g, 3.106 mmol), whose synthesis and characterization are provided in the SI, and N,N-diisopropylethylamine (1.08 mL, 6.211 mmol) were then dissolved in DMF and subsequently added dropwise (over 60 min) into the chitosan solution. The reaction was continued at 50 °C for 48 h, and the reaction mixture was filtered to remove undissolved chitosan. The filtrate was dialyzed (Mw cutoff of 12,000 Da) against distilled water for three days and lyophilized to obtain a yellowish crude product, which was washed several times with dichloromethane to remove unreacted compound 4 and dried in vacuo. A light yellow powder of bPalm-CS was obtained as the product (59.58% yield). 4.4. Chemical Structure Characterization. High-resolution electrospray ionization mass spectra (HRMS) were acquired on a Bruker MicrOTOF-QII mass spectrometer in both positive and negative ionization modes. The HRMS data were analyzed with Bruker Daltonics Data Analysis 3.3. A 400 MHz nuclear magnetic resonance spectrometer (JNM-ECZ-400R/S1) was used for the 13C solid-state NMR experiments.
The chemical shifts of the 13C spectra were calibrated against the external standard hexamethylbenzene (HMB). The degree of N-(2-((2,3-bis(palmitoyloxy)propyl)amino)-2-oxoethyl) substitution (DS bPalm) of bPalm-CS was calculated according to eq 1, 77 from the relative 13C NMR peak intensities (I). Infrared spectra were recorded using a Thermo Fisher Scientific spectrometer (model Nicolet 8700, USA), scanning from 4000 to 400 cm−1 (% transmittance mode, 32 scans, 4.0 cm−1 resolution, and 0.4 cm−1 data spacing) at room temperature. Samples were mixed separately with KBr powder and manually pressed into pellets for analysis. A background (air) spectrum was taken before each sample analysis. The spectra were analyzed in the OMNIC 8.3.103 software (Thermo Fisher Scientific, Inc.). 4.5. Nanoparticle Characterization. The particle size and zeta potential of self-assembled bPalm-CS-HTAP were measured by dynamic light scattering (Nano-Partica SZ-100 series, Horiba Scientific, Japan) equipped with a frequency-doubled laser (532 nm, 10 mW) and a PMT detector operating at 25°C. The measurements were performed at a concentration range of 1−5 wt % (Milli-Q water, 1 mL), with a measurable particle size range of 0.3 nm to 8 μm (90° scattering angle) for the size measurement and −200 to +200 mV for the zeta potential measurement. The morphology of the self-assembled bPalm-CS-HTAP was determined by scanning electron microscopy (SEM, JEOL JSM-6610 LV, Japan). The sample was diluted in water, dropped on a glass plate, and dried overnight. The sample was then sputter-coated with gold (Cressington Model 108 Auto, JEOL, Japan) and subsequently observed and photographed. 4.6. Gel Electrophoresis. bPalm-CS-HTAP/pDNA polyplexes were prepared at varied bPalm-CS-HTAP:pDNA ratios by mixing 0.02−0.8 μg of bPalm-CS-HTAP with 0.2 μg of pDNA in a 10 μL total volume of 1× PBS. The mixtures were vortexed and incubated at room temperature for 15 min to enable complexation. Then, 2 μL of Gel Loading Dye, Purple (6×), no SDS (New England Biolabs) was added to each tube, and 12 μL of the mixture was loaded into a 0.8% agarose gel. The samples were subsequently electrophoresed at 135 V using 0.5× TAE buffer (40 mM Tris/acetate and 1 mM EDTA) as a running buffer for 14 min. After electrophoresis, the gel was stained with ethidium bromide and washed with distilled water. The gel was observed under UV light (Azure 200, Azure Biosystems, USA). 4.7. Cytotoxicity Assay. The cytotoxicity of bPalm-CS-HTAP was evaluated by two methods, trypan blue staining and the Cell Counting Kit-8 (CCK-8), as follows. For trypan blue staining, HEK 293A cells were transfected with bPalm-CS-HTAP/pDNA complexes in 12-well plates following the same procedure as in the transfection assay. After the cells had been treated with the polyplexes and incubated for a total of 48 h, the medium was collected, and the cells were treated with 450 μL of 0.25% trypsin−EDTA (1×) for 2−3 min, followed by pipetting the detached cells into a 1.5 mL microcentrifuge tube. Trypsinized cells and cells collected from the old medium were centrifuged at 1200 rpm for 3 min at room temperature. The supernatants were decanted, and the cell pellets were resuspended in 500 μL of complete DMEM and mixed well. The suspended cells (5 μL) were treated with 20 μL of trypan blue dye, and live and dead cells were counted in triplicate for each transfection condition under a hemocytometer. Cell concentrations (C) were calculated according to eq 4, and the percentage of dead cells according to eq 5: % dead cells = C(dead cells) / (C(live cells) + C(dead cells)) × 100 (5), where C is the concentration of the respective cell population.
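Equation 4 is not reproduced in this extraction, so the sketch below assumes the standard hemocytometer formula (mean count per large square × dilution factor × 10^4 cells/mL); eq 5 follows the dead/(live + dead) form given above. The dilution factor of 5 (5 μL cells + 20 μL trypan blue) and the example counts are our assumptions.

```python
# Minimal sketch of eqs 4-5 (trypan blue exclusion). Eq 4's exact form is
# not shown in the text; the standard hemocytometer formula is assumed:
#   C = mean count per large square x dilution factor x 1e4 cells/mL

def cell_conc(counts_per_square, dilution_factor=5.0):
    """Eq 4 (assumed form): concentration in cells/mL.
    dilution_factor=5 reflects 5 uL cells + 20 uL trypan blue."""
    mean_count = sum(counts_per_square) / len(counts_per_square)
    return mean_count * dilution_factor * 1e4

def percent_dead(c_dead: float, c_live: float) -> float:
    """Eq 5: % dead = C(dead) / (C(live) + C(dead)) x 100."""
    return 100.0 * c_dead / (c_live + c_dead)

c_live = cell_conc([120, 115, 131])  # hypothetical triplicate counts
c_dead = cell_conc([11, 9, 13])
print(f"viable: {c_live:.3g} cells/mL, dead: {percent_dead(c_dead, c_live):.1f}%")
```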
For the CCK-8 assay, the cytotoxicities of bPalm-CS-HTAP and bPalm-CS-HTAP/pDNA were evaluated with HEK 293A cells. Cells were seeded into 96-well culture plates at a density of 5 × 10^3 cells per well and maintained in complete DMEM. After incubation at 37°C in a humidified atmosphere containing 5% CO2 for 24 h, the old medium was removed, and the cells were treated with 50 μL of bPalm-CS-HTAP or bPalm-CS-HTAP/pDNA resuspended in serum-free DMEM at bPalm-CS-HTAP concentrations of 2, 10, 20, 40, 60, 80, 100, 160, and 200 ng/μL and a pDNA concentration of 2 ng/μL. Cells treated with PEI/pDNA at 2 and 20 ng/μL PEI and 2 ng/μL pDNA served as a positive control, while non-treated cells in 50 μL of serum-free DMEM served as a negative control (100% cell viability). After incubation for 4 h in the serum-free medium, 50 μL of complete DMEM was added to each well, and the incubation was continued for another 44 h. The cells were then removed from the incubator, and 10 μL of CCK-8 solution was added to each well. After a 2 h incubation with CCK-8 at 37°C and 5% CO2, the absorbance was recorded on a microplate reader (Multiskan FC, Thermo Fisher Scientific, USA) at a wavelength of 450 nm (quartz-halogen lamp). The cell viability (%) was calculated according to eq 6. 4.8. Transfection Assay. Human embryonic kidney (HEK 293A) cells were seeded in 12-well plates at a density of 1 × 10^5 cells/well, incubated in 1 mL of a growth medium containing 90% DMEM, 10% FBS, and 1% pen/strep at 37°C with 5% CO2, and grown for 24 h to reach 80−90% confluency prior to transfection. bPalm-CS-HTAP/pDNA polyplexes were prepared at weight ratios of 0.1:1, 0.3:1, 0.5:1, 1:1, 3:1, 5:1, 10:1, 20:1, and 40:1 with a fixed pDNA amount of 1 μg in a total volume of 10 μL, to which 90 μL of serum-free medium was added to reach a final volume of 100 μL. The polyplexes were incubated at room temperature for 15 min, and another 400 μL of serum-free DMEM was subsequently added. The mixed solution was then transferred onto cells that had previously been washed with 1× PBS. After incubation for 4 h at 37°C with 5% CO2, another 500 μL of complete DMEM was added to each well, and the cells were incubated for another 44 h at 37°C with 5% CO2. Chinese hamster ovary (CHO) cells were cultivated in 350 μL of Minimum Essential Medium Alpha (MEM-Alpha) containing 1% D-glucose, 10% FBS, 1% L-glutamine, and 0.2% ADT mix at 37°C with 5% CO2 and incubated for 24 h to reach 80−90% confluency prior to transfection. bPalm-CS-HTAP/pDNA polyplexes were prepared at the same concentrations and weight ratios as described above for the cells cultivated in 12-well plates. After 15 min of incubation, the polyplexes were transferred onto the cells to give a final transfection volume of 300 μL. The cells were incubated at 37°C with 5% CO2 for 4 h, followed by the addition of 300 μL of complete MEM-Alpha, and then incubated at 37°C with 5% CO2 for another 44 h. Cells treated with PEI (linear PEI HCl salt, 20 kDa, Sigma)/pDNA complexes at a weight ratio of 1:1 and a final concentration of 2 ng/μL served as a positive control, while cells incubated with 2 ng/μL pDNA served as a negative control. After a total of 48 h of incubation, the transfected cells were fixed with 1% formaldehyde (diluted in PBS) for 10 min at room temperature and washed with PBS.
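The weight-ratio series used in the transfection assay above reduces to simple arithmetic: for a fixed 1 μg of pDNA, the carrier mass scales linearly with the ratio. A minimal sketch (variable names are ours) before the immunostaining steps that follow:

```python
# Sketch of the polyplex mixing arithmetic in Section 4.8: for a fixed
# 1 ug of pDNA, the mass of bPalm-CS-HTAP required at each weight ratio.

PDNA_UG = 1.0
RATIOS = [0.1, 0.3, 0.5, 1, 3, 5, 10, 20, 40]  # bPalm-CS-HTAP : pDNA (w/w)

for r in RATIOS:
    carrier_ug = r * PDNA_UG
    # 10 uL complexation volume, topped up to 100 uL with serum-free medium
    print(f"ratio {r:>4}:1 -> {carrier_ug:>5.1f} ug carrier + {PDNA_UG} ug pDNA")
```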
To permeabilize the cell membrane, 90% cold methanol was added to the cells, followed by incubation for 5 min at 4°C. After being washed with PBS, the cells were blocked with 2% FBS diluted in PBS (2% FBS/PBS) for 1 h at room temperature, washed with PBS, and incubated for 1 h at room temperature with a rabbit anti-V5 primary antibody (Abcam) diluted to 1:2000 in 2% FBS/PBS. The cells were subsequently washed twice with PBS and incubated for 2 h at room temperature with a goat antirabbit IgG secondary antibody conjugated with Alexa Fluor 488 (Sigma) diluted to 1:1000 in 2% FBS/PBS. After being washed with PBS, the cell nuclei were stained with 4′,6-diamidino-2-phenylindole (DAPI, Sigma) diluted to 1:2000 in PBS. Following incubation at room temperature for 10 min, the cells were washed with PBS and examined under an inverted fluorescence microscope (Olympus IX71, Japan) with a 20× objective lens. 4.9. Statistical Analysis. Data are shown as means (±SD) of three replicated experiments, and statistical analysis was performed by one-way ANOVA in the SPSS 17.0 software (SPSS, Inc., Chicago, IL, USA). Statistical significance was set at p < 0.05, as determined by Tukey's post hoc tests. We would like to thank King Mongkut's University of Technology Thonburi for financial support and the Scientific Instruments Center, School of Science, KMITL for 13C solid-state NMR measurements. We would also like to thank Asst. Prof. Dr. Kittisak Choojun for advice on solid-state NMR analysis.
Parent of origin DNA methylation as a potential mechanism for genomic imprinting in bees

Genomic imprinting is defined as parent-of-origin allele-specific expression. In order for genes to be expressed in this manner, an 'imprinting' mark must be present to distinguish the parental alleles within the genome. In mammals, imprinted genes are primarily associated with DNA methylation. Genes exhibiting parent-of-origin expression have recently been identified in two species of Hymenoptera with functional DNA methylation systems: Apis mellifera and Bombus terrestris. We carried out whole genome bisulfite sequencing of parents and offspring from reciprocal crosses of two B. terrestris subspecies in order to identify parent-of-origin DNA methylation. We were unable to survey a large enough proportion of the genome to draw a conclusion on the presence of parent-of-origin DNA methylation; however, we were able to characterise the sex- and caste-specific methylomes of B. terrestris for the first time. We find males differ significantly from the two female castes, with differentially methylated genes involved in many histone modification related processes. We also analysed previously generated honeybee whole genome bisulfite data to see if genes previously identified as showing parent-of-origin DNA methylation in the honeybee show consistent allele-specific methylation in independent data sets. We have identified a core set of 12 genes in female castes which may be used for future experimental manipulation to explore the functional role of parent-of-origin DNA methylation in the honeybee. Finally, we have also identified allele-specific DNA methylation in honeybee male thorax tissue, which suggests a role for DNA methylation in ploidy compensation in this species.

Introduction Genomic imprinting is defined as parent-of-origin allele-specific expression (Rodrigues and …)

Figure 1: (a) Graphic display of the family-wise reciprocal crosses carried out between Bombus terrestris audax and Bombus terrestris dalmatinus. Each colour refers to related individuals, i.e. the queen from colony 08 is the sister of the male used in colony 19. This design reduces genetic variability between the initial and reciprocal crosses as we do not have inbred lines of B. terrestris. (b) Overview schematic for identifying allelic methylation differences in the worker offspring. SNPs unique to either the mother or father are used to create N-masked reference genomes. The worker daughter sample is then aligned to the genome and reads are filtered to keep only those with an informative parental SNP. Methylation differences between the alleles can then be assessed and parent-of-origin DNA methylation can be inferred from comparing reciprocal crosses.

P-values were corrected for multiple testing (Benjamini and Hochberg, 1995). For a CpG to be differentially methylated, a minimum difference of at least 10% methylation and a q-value of <0.01 were required. Genes were determined to be differentially methylated if they contained an exon with at least two differentially methylated CpGs and an overall weighted methylation (Schultz et al., 2012) difference across the exon of >15%. The threshold of two CpGs was chosen based on Xu et al. (2021), who find the methylation of two CpGs is enough to promote gene transcription in Bombyx mori via the recruitment of histone modifications.
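The gene-level rule above is straightforward to express in code. The following is an illustrative Python re-implementation (the study itself used methylKit in R); the data structures, and the assumption that per-CpG differential-methylation calls come from an upstream tool, are ours.

```python
# Illustrative re-implementation of the gene-level call described above:
# an exon is differentially methylated (DM) if it carries >=2 DM CpGs
# (>=10% difference, q < 0.01, called upstream) and its weighted
# methylation difference (Schultz et al., 2012) exceeds 15%.

def weighted_meth(cpgs):
    """Weighted methylation level: sum(mC reads) / sum(total reads)."""
    meth = sum(c["meth_reads"] for c in cpgs)
    cov = sum(c["total_reads"] for c in cpgs)
    return meth / cov if cov else 0.0

def exon_is_dm(cpgs_a, cpgs_b, dm_flags):
    """cpgs_a/b: per-CpG read counts in the two groups;
    dm_flags: per-CpG True if the upstream caller flagged it as DM."""
    enough_dm_cpgs = sum(dm_flags) >= 2
    big_exon_diff = abs(weighted_meth(cpgs_a) - weighted_meth(cpgs_b)) > 0.15
    return enough_dm_cpgs and big_exon_diff

def gene_is_dm(exons):
    """exons: iterable of (cpgs_a, cpgs_b, dm_flags) tuples per exon."""
    return any(exon_is_dm(*e) for e in exons)

cpg = lambda m, t: {"meth_reads": m, "total_reads": t}
exon = ([cpg(8, 10), cpg(9, 10)], [cpg(2, 10), cpg(3, 10)], [True, True])
print(gene_is_dm([exon]))  # True: 2 DM CpGs, 85% vs 25% weighted methylation
```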
Identification of parent-of-origin DNA methylation. Whole genome re-sequencing data of the parents were checked using fastqc v.0.11.5 (Andrews, 2010). We kept only homozygous alternative SNPs which are unique to either the mother or father of each colony. We also removed C-T and T-C SNPs, as these are indistinguishable from bisulfite converted bases. Reads which did not contain an informative SNP were discarded. Differential methylation between the maternal and paternal reads of all workers was then carried out using the R package methylKit v.1.16.1 (Akalin et al., 2012) as above, with the exception of a minimum coverage of eight reads, as previously described in Wang et al. (2016). For allele-specific DNA methylation, a minimum coverage of 10 was required and a minimum score of 0.8, which is considered representative of true allele-specific DNA methylation according to Orjuela et al. (2020). We then identified genes which contained allelically methylated regions using R and compared these gene lists to those which show parent-of-origin DNA methylation, as identified in Wu et al. (2020). Gene ontology enrichment was carried out as above using GO terms from the Hymenoptera Genome Database. Genome-wide sex- and caste-specific DNA methylation. It is currently unknown to what extent DNA methylation varies between sexes and castes of B. terrestris. We have therefore taken this opportunity to also generally characterise the sex- and caste-specific methylomes of this species. We find low genome-wide levels, similar to those previously reported, with males clustering separately from the two female castes (Fig. 2a). We also see no clustering by sub-species for the males or queens (for example, male 08 in Figure 2a). Along with other genomic features, we also added UTR regions and intergenic regions to further explore the genome-wide methylation profile. We find the highest levels of DNA methylation for all sexes and castes are within exon regions, whilst promoter and 5' UTR regions show a depletion in DNA methylation compared to intergenic regions (Fig. 2b). We also segregated genes into categories of differing levels of DNA methylation to explore the potential function of highly methylated genes across sexes and castes. We next classed a gene as differentially methylated if a given exon contained at least two differentially methylated CpGs and had an overall weighted methylation difference of at least 15%. We find 155 genes are differentially methylated between males and workers, 165 between males and queens, and 37 between queens and workers (supplementary 1.0.7). We carried out a GO enrichment analysis on all differentially methylated genes and on hypermethylated genes for each sex/caste per comparison (supplementary 1.0.8). Whilst most terms are involved in core cellular processes, we specifically find differentially methylated genes between queens and workers are enriched for chromatin remodelling-related terms (e.g. "histone H3-K27 acetylation" (GO:0043974) and "chromatin organization involved in negative regulation of transcription" (GO:0097549)) and reproductive terms (e.g. "oogenesis" (GO:0048477)). Differentially methylated genes between males and workers were also enriched for a large number of histone modification related terms (e.g.
"histone H3-K27 acetylation" (GO:0043974), "histone H3-K9 methylation" (GO:0051567), 257 When looking specifically at hypermethylated genes per sex/caste compared to all di erentially 258 methylated genes per comparison we find only two enriched GO terms for hypermethylated genes 259 in queens compared to workers: "developmental process involved in reproduction" (GO:0003006) 260 and "gamete generation" (GO:0007276). In genes hypermethylated in males compared to queens 261 and workers separately we find a large number of enriched GO terms related to neuron development 262 amongst other cellular processes. 263 Most of the di erentially methylated genes are common between males, queens and work-264 ers, with only 178 total unique genes changing methylation levels between sexes/castes (Fig.2c). 265 Specifically, we find 31 genes are hypermethylated in queens and workers when compared to 266 males and 18 genes are hypermethylated in males when compared to queens and workers. We 267 carried out a GO enrichment on these genes using all di erentially methylated genes from all 268 comparisons as a background set. We find general cellular processes enriched in both gene lists with 269 hypermethylated genes in the female castes also enriched for some telomere-related functions, e.g. 302 In order to further explore the function of these core 12 genes we carried out a gene ontology 303 15 enrichment analysis using all unique genes with allele-specific DNA methylation as a background 304 set (n = 3,448). We find a variety of terms enriched, including many involved in nervous system 305 development and the term "social behaviour" (GO:0035176) (supplementary 1.1.1). In addition to identifying this core set of genes which show potentially consistent parent-316 of-origin DNA methylation across multiple independent data sets, we also ran this pipeline for 317 some male samples as previous research has shown a diploid genome exists in some tissues of A. In this study we have explored the potential of DNA methylation as an imprinting mark in social bees. 345 We conducted reciprocal crosses to explore parent-of-origin DNA methylation in the bumblebee 346 Bombus terrestris. Whilst our crosses and data generation were successful, we were unable to 347 confidently identify genome-wide parent-of-origin DNA methylation. We were, however, able to 348 use these data to characterise the sex-and caste-specific DNA methylation profiles of B. terrestris 349 for the first time. We find genome-wide that sexes and castes show similar DNA methylation other epigenetic processes, such as histone modifications and chromatin dynamics. We also mined 353 previously generated honeybee whole genome bisulfite sequencing data to explore the consistency of 354 parent-of-origin DNA methylation, as identified in Wu et al. (2020), across independent data sets. 355 We find a core set of 12 genes which exhibit parent-of-origin DNA methylation show allele-specific DNA methylation in all 33 independently generated female data sets. We have also identified a potential role for allele-specific DNA methylation in some diploid tissues of male honeybees. 358 Recommendations for B. terrestris reciprocal cross design for parent-of-origin 359 DNA methylation 360 We used whole genome re-sequencing data of the mother and father from two sets of reciprocal SNPs, we were still unable to identify parent-of-origin DNA methylation across the entire genome 372 of B. terrestris. 
We explore the reasons for this and make the following recommendations for a replication of this work. Firstly, we sequenced the worker offspring samples to a depth of 30X; this coverage yields enough data for standard differential DNA methylation analysis, even after data loss due to low mapping rates of bisulfite converted data (generally less than 60% mapping efficiency (Tran et al., 2014)) and removal of PCR duplicates. An additional step required to identify parent-of-origin DNA methylation is assigning each read to a parental allele, which requires additional information, i.e. the genomic sequence (Yagound et al., 2019; Marshall et al., 2019; Yagound et al., 2020). This may be used, for example, to examine parent-of-origin methylation profiles in individual brain samples. Sex- and caste-specific methylomes of B. terrestris. We were able to use the data generated in this study to explore the sex- and caste-specific methylomes of B. terrestris. The lifespan of B. terrestris queens is significantly longer than that of workers and males (Greeff and Schmid-Hempel, 2008; Smeets and Duchateau, 2003; Duchateau and Marin, 1995). One role for DNA methylation in B. terrestris may therefore be the regulation of caste differences through core cellular processes, such as telomere maintenance. Finally, we also find differentially methylated genes between queens and reproductive workers are involved in reproduction-related processes. Previous work has suggested a role for DNA methylation in reproduction in B. terrestris (Amarasinghe et al., 2014), as well as in other social insects (Wang et al., 2020; Bonasio et al., 2012), although this does not appear to be consistent across Hymenoptera (Libbrecht et al., 2016; Patalano et al., 2015). Whilst the differentially methylated genes identified here suggest a role for DNA methylation in maintaining or generating caste differences, a direct causal link between DNA methylation and gene expression changes mediating phenotypes has yet to be found. Genes which show parent-of-origin expression have been identified in two social insect species to date: B. terrestris (Marshall et al., 2020b) and A. mellifera (Wu et al., 2020). Whilst a direct link between parent-of-origin DNA methylation and parent-of-origin expression has not been found in the honeybee (Wu et al., 2020; Smith et al., 2020), it is possible parent-of-origin DNA methylation may mediate imprinted genes in a trans- or temporal-acting fashion (Xu et al., 2021; Li-Byarlay et al., 2020). Under the kinship theory, imprinted expression is predicted to depend on social context (Haig, 2000), and so a plastic response to queen presence of some imprinted genes may account for the small number in common across independent samples. Finally, we cannot rule out that more of the genes with parent-of-origin DNA methylation identified in Wu et al. (2020) are in fact consistent across A. mellifera females. As discussed in the 'Recommendations for B. terrestris reciprocal cross design for parent-of-origin DNA methylation' section above, it may be that we did not have sufficient coverage in some genome regions/samples from the data tested for these areas to show significant allele-specific DNA methylation. It is worth noting, though, that the advantage of identifying allele-specific methylation through probabilistic models, as opposed to using SNPs, is that we can survey homozygous regions, which would usually be discounted when differences in the underlying genotype are needed for allele identification (Orjuela et al., 2020).
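The coverage recommendation can be made concrete with some rough arithmetic: starting from a nominal depth, successive losses from bisulfite mapping, duplicate removal and the requirement for an informative parental SNP quickly erode the per-allele depth below the eight-read methylKit threshold used above. In the sketch below, only the ~60% mapping efficiency is taken from the cited literature; the other retention fractions are illustrative assumptions.

```python
# Rough arithmetic behind the coverage recommendation: estimate the
# depth left per parental allele from a nominal sequencing depth.
# Only mapping_eff is from the cited literature (Tran et al., 2014);
# dup_retained and informative_frac are illustrative assumptions.

def per_allele_depth(nominal_x,
                     mapping_eff=0.60,       # bisulfite mapping efficiency
                     dup_retained=0.85,      # after PCR-duplicate removal
                     informative_frac=0.30): # reads overlapping a parental SNP
    usable = nominal_x * mapping_eff * dup_retained * informative_frac
    return usable / 2  # reads split between maternal and paternal alleles

depth = per_allele_depth(30)
print(f"~{depth:.1f}X per allele vs. the 8-read methylKit threshold")
```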
Finally, of the core 12 genes identified above, we find three also show allele-specific DNA methylation in some male thorax tissue, including the gene involved in social behaviour. Different tissues are known to vary in levels of ploidy in some social insects (Aron et al., 2005). DNA methylation has also previously been suggested as a possible mechanism of ploidy compensation. This study provides the groundwork for future research exploring parent-of-origin DNA methylation as a potential imprinting mechanism in the bumblebee Bombus terrestris.
Psychometric properties of Structured Clinical Interview for DSM-5 Disorders, Clinician Version (SCID-5-CV)

Abstract Objectives The purpose of this study was to evaluate the psychometric properties of the Structured Clinical Interview for DSM-5 Disorders, Clinician Version (SCID-5-CV) in a population of patients with psychiatric disorders in Tehran. Method The study population included all outpatients and inpatients referred to three psychiatric centers in Tehran, namely Iran Psychiatric Hospital, Rasoul Akram Hospital, and the Clinic of Behavioral Sciences and Mental Health (Tehran Psychiatric Institute). Inclusion criteria included age between 16 and 70 years, informed consent to participate in the study, ability to understand and speak Persian, and no specific physical problems that would interfere with the conduct of the interview. Exclusion criteria included inability to communicate, mental retardation or dementia, severe symptoms of acute psychosis, and severe restlessness. In addition to a demographic questionnaire, the Persian version of the SCID-5-CV was used in this study. Diagnostic validity, test-retest reliability, and inter-rater reliability were evaluated. Results For all diagnoses except anxiety disorders, kappa was above 0.4, indicating above-average agreement, whereas for anxiety disorders (kappa = 0.34) there was moderate agreement between the psychiatrist's and the SCID interviewer's reports. Also, with the psychiatrist's diagnosis as the gold standard, specificity was higher than 0.80 for most diagnoses except anxiety disorders, indicating the desirable characteristics of this tool in the diagnosis of these disorders. Sensitivity was higher than 0.80 for all diagnoses. Conclusion According to the findings of the present study, the SCID-5-CV can be used for diagnostic purposes in psychiatric clinics and hospitals and to evaluate the treatment process of patients. In general, this version is suitable especially for the schizophrenia spectrum and other psychotic disorders; however, using the SCID-5-CV for anxiety-related disorders should be done with caution.

In examining the psychometric properties of structured interviews based on DSM-5, most work has focused on interviews for specific disorders, such as anxiety disorders, PTSD, or alcohol and substance use disorders. The most comprehensive of these is the work by Tolin et al., who studied the psychometric properties of the structured interview for DSM-5 anxiety, mood, and obsessive-compulsive and related disorders (DIAMOND). 362 adult patients underwent the DIAMOND interview. Data from 121 patients provided inter-rater reliability and 115 provided test-retest data. Inter-rater reliability of the DIAMOND ranged between very good and excellent (κ = 0.62 to κ = 1.00), and test-retest reliability between good and excellent (κ = 0.59 to κ = 1.00) (Tolin et al.). There has been no study on the validity and reliability of SCID-5 structured interviews in Iran. However, the validity and reliability of the SCID for DSM-IV were examined in a study by Sharifi et al. In the test-retest reliability study, 104 clients were independently evaluated with the SCID on two visits (three to seven days apart). Feasibility was assessed among interviewees (n = 299) and interviewers by questionnaires that included questions about the duration of the interview, how boring it was, the lucidity and acceptability of the questions, and how much effort it required. Findings showed that diagnostic agreement was moderate to good for most specific and overall diagnoses (κ > 0.6).
The overall agreement (total kappa) was 0.52 for all current diagnoses and 0.55 for lifetime diagnoses. Most interviewees and interviewers found the Farsi version of the SCID acceptable (Sharifi et al., 2004). Obviously, translating an interview is not enough for its use in another culture; special attention should be paid to interlinguistic and intercultural differences in order to maintain its validity. In addition, the reliability and validity of the translated tool in the target culture should be measured, and the tool thereby standardized. As far as the present researchers are aware, the psychometric properties of none of the diagnostic interviews based on SCID-5 have so far been studied in Iran, and since these structured interviews form the basis of most therapeutic and research work in psychology and psychiatry, the purpose of this study was to investigate the psychometric properties of the Persian version of the SCID-5-CV.

| METHOD The study population comprised all outpatients and inpatients admitted to three psychiatric centers in Tehran, namely Iran Psychiatric Hospital, Rasoul Akram Hospital, and the Clinic of Behavioral Sciences and Mental Health (Tehran Psychiatric Institute). A total of 250 patients were recruited. In order to evaluate test-retest reliability, 106 patients were interviewed again after an interval of 7-10 days. Inclusion criteria were being 16-70 years of age, being able to understand and speak Persian, and having no specific physical problems that could interfere with the interview. Exclusion criteria were severe irritability, mental retardation, and dementia, as well as acute psychosis to an extent that precluded participation in the interview. Following authorization from the ethics committee of the Iran University of Medical Sciences and coordination with the mentioned centers, patients who met the inclusion criteria were invited for interview. Informed consent was acquired, and their rights were explained to them, including the freedom to discontinue at any stage of the research. The interviews were conducted privately and without access to the patients' records. Outpatient interviews were conducted as patients were waiting in the hospital premises, and inpatient interviews were conducted during the first week of their stay. Interviews were carried out by Ph.D. students in clinical psychology. Interviewers were provided with a summary of the hospital admission sheet for inpatients and a report of the first visit for outpatients. This information was only available to those researchers who were not involved in the interview process, and the interviewers were not aware of their interviewees' diagnoses. In each room, one of the two interviewers conducted the interview before both of them made their diagnoses. The gold standard of diagnosis was the records in the hospital/clinic files according to the routine standards of these university-affiliated hospitals/clinic. This routine includes (a) an early interview with the patient by a resident of psychiatry.

| RESEARCH TOOLS • Demographic Questionnaire: personal information about sex, age, level of education, marital status, number of children, history of psychological disorders, and history of suicide attempts. • Structured Clinical Interview for DSM-5 Disorders, Clinician Version (SCID-5-CV): The SCID-5-CV is a comprehensive standardized tool for evaluating major psychiatric disorders based on DSM-5 definitions and criteria.
According to DSM-5, the diagnostic categories covered include schizophrenia spectrum and other psychotic disorders, bipolar and related disorders, depressive disorders, substance-related and addictive disorders, anxiety disorders, obsessive-compulsive and related disorders, post-traumatic stress disorder, attention-deficit/hyperactivity disorder, and other disorders. This interview is designed for clinical and research purposes. The SCID-5-CV is usually administered in one session lasting between 45 and 90 min.

| RESULTS This is a descriptive correlational study. The population consisted of all outpatients and inpatients referred to the three psychiatric centers in Tehran. Overall, data from 245 patients were analyzed, of whom 105 (42.9%) were male and 140 (57.1%) were female. Ages ranged from 17 to 68 years, with a mean of 35.91 and a standard deviation of 11.64. Table 1 shows the demographic characteristics of the participants. Test-retest reliability was assessed by one of the interviewers re-interviewing 113 (46.1%) patients after 7-10 days. As seen in Figure 1, the prevalence of depressive disorders was the highest. Some groups of disorders (trauma- and stressor-related disorders, sleep-wake disorders, eating disorders, adjustment disorders, impulse control disorders, conduct disorder, and neurodevelopmental disorders) were excluded due to low frequency. Table 2 shows the rate of agreement between the SCID and the psychiatrists (kappa), as well as the sensitivity, specificity, positive and negative likelihood ratios, and the LR+/LR− ratio of the instrument when the psychiatrist's diagnosis is considered the gold standard. As can be seen in this table, kappa was above 0.4 for all diagnoses except anxiety disorders. The schizophrenia spectrum and other psychotic disorders, with a kappa of 0.90, reflect almost complete agreement between the psychiatrist's reports and the SCID interviewer. For bipolar and related disorders, depressive disorders, substance-related and addictive disorders, and obsessive-compulsive disorder, kappa ranged from 0.76 to 0.80, reflecting high agreement between the psychiatrist's reports and the SCID interviewer. Only anxiety disorders, with a kappa of 0.34, indicated moderate agreement between the psychiatrist's reports and the SCID interviewer. With the psychiatrists' diagnoses considered the gold standard, the specificity results were generally better than the sensitivity results, meaning that in most of the diagnoses except anxiety disorders they were above 0.80, indicating desirable specificity. The sensitivity of all diagnoses was higher than 0.80. The LR+/LR− ratios also showed that this tool performed best for the schizophrenia spectrum and other psychotic disorders. It also has the potential to be useful for bipolar and related disorders, substance-related and addictive disorders, anxiety disorders, and obsessive-compulsive disorder, but it was less desirable for depressive disorders. To assess inter-rater reliability, two examiners completed the interviews separately. The phi coefficients (Table 3) showed that for all diagnoses there is a very strong correlation at the α < 0.0001 significance level. Therefore, the SCID-5-CV has very good inter-rater reliability. In addition, 113 patients were interviewed to assess test-retest reliability.
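For readers wishing to reproduce the indices reported in Table 2, all of them follow from a single 2×2 table of SCID-5-CV calls against the psychiatrist's gold-standard diagnosis. The sketch below uses hypothetical counts, not the study's data:

```python
# Agreement/accuracy indices from a 2x2 table: rows are SCID-5-CV calls,
# columns are the psychiatrist's gold-standard diagnosis.
# tp/fp/fn/tn counts here are hypothetical, for illustration only.

def indices(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    po = (tp + tn) / n                                            # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)          # positive likelihood ratio
    lr_neg = (1 - sens) / spec          # negative likelihood ratio
    return kappa, sens, spec, lr_pos, lr_neg, lr_pos / lr_neg

k, se, sp, lp, ln_, ratio = indices(tp=40, fp=6, fn=4, tn=195)
print(f"kappa={k:.2f} sens={se:.2f} spec={sp:.2f} LR+/LR-={ratio:.0f}")
```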
The phi coefficients for the first and second interviews showed a strong relationship between the two interviews at α < 0.0001 in the case of obsessive-compulsive disorder. There was also a significant relationship at α < 0.0001 for the schizophrenia spectrum and other psychotic disorders, bipolar and related disorders, substance-related and addictive disorders, and anxiety disorders. However, a coefficient of 0.397 (α < 0.0001) for depressive disorders showed that, although there is a significant relationship between the first and second interviews, this relationship is weak. Therefore, the SCID-5-CV has good test-retest reliability for all diagnostic groups except depressive disorders.

| DISCUSSION The purpose of this study was to evaluate the psychometric properties of the SCID-5-CV (Shankman et al.). With the psychiatrist's diagnosis considered the gold standard, the specificity and sensitivity of the diagnoses were high, and in most cases, except anxiety disorders, specificity was higher than sensitivity. This indicates that the false-positive rate of the given diagnoses is low. These findings are in line with the results of Amini et al. (2007) and Sharifi et al. (2009). But unlike those studies, which found sensitivity indices for most diagnoses to be somewhat low (between 60% and 80%) and concluded that the tool could not be used for large epidemiological studies, the present study found good sensitivity for the SCID-5-CV, supporting its applicability to large epidemiological studies. Examination of the LR+/LR− ratio showed that this interview performed best for the schizophrenia spectrum and other psychotic disorders. It also has the potential to be useful for bipolar and related disorders, substance-related and addictive disorders, anxiety disorders, and obsessive-compulsive and related disorders, but it was weaker for depressive disorders than for other diagnoses. These results show the highest utility for the schizophrenia spectrum and other psychotic disorders and the lowest for mood and anxiety disorders, which is consistent with the results of Amini et al. (2007) and Sharifi et al. (2009), although overall the LR+/LR− ratios were higher than in previous studies. Moreover, there was a strong correlation between the first and second interviewers for all diagnoses at a significance level of α < 0.0001, indicating very good reliability. In addition, the SCID-5-CV shows good inter-rater reliability for all diagnoses; in this respect, the present study is in line with the results of Lobbestael et al. (2011). The test-retest reliability results for all diagnoses except depressive disorders confirm the reliability of the SCID-5-CV.
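The phi coefficient used for the inter-rater and test-retest analyses (Table 3) is likewise a 2×2 statistic. A minimal sketch with hypothetical counts:

```python
# Phi coefficient for the association between two binary diagnosis calls
# (inter-rater or test-retest). Counts are hypothetical.

import math

def phi(a, b, c, d):
    """2x2 table: a = both positive, b = rater1+/rater2-,
    c = rater1-/rater2+, d = both negative."""
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den if den else float("nan")

print(f"phi = {phi(35, 3, 4, 203):.2f}")  # strong positive association
```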
As with any research, the present study has some limitations that constrain the generalization of and reliance on the findings.

| CONCLUSION Overall, the acceptable reliability and validity of the SCID-5-CV diagnoses showed that the Persian version of the SCID-5-CV is a valid and reliable instrument. It can be used for clinical, research, and educational purposes and is suitable for most diagnoses, especially the schizophrenia spectrum and other psychotic disorders. Only the diagnoses received for anxiety disorders should be treated with more caution. Therefore, the researchers recommend using this interview as a diagnostic aid in clinical settings.

CONFLICT OF INTEREST The authors declare that they have no competing interests.

AUTHOR CONTRIBUTIONS Dr Amir Shabani conceptualized and designed the study and drafted the manuscript. Dr Samira Masoumian designed the study, collected the data, and drafted the manuscript. Dr Somayeh Zamirinejad, Maryam Hejri, and Tahereh Pirmorad collected the data. Hooman Yaghmaie Zadeh analyzed and interpreted the data and was involved in the statistical analysis.

PEER REVIEW The peer review history for this article is available at https://publons.com/publon/10.1002/brb3.1894.

DATA AVAILABILITY STATEMENT The data that support the findings of this study are available from the corresponding author upon reasonable request.
Naked mole-rats are extremely resistant to post-traumatic osteoarthritis

Abstract Osteoarthritis (OA) is the most prevalent disabling disease, affecting quality of life and contributing to morbidity, particularly during aging. Current treatments for OA are limited to palliation: pain management and surgery for end-stage disease. Innovative approaches and animal models are needed to develop curative treatments for OA. Here, we investigated the naked mole-rat (NMR) as a potential model of OA resistance. The NMR is a small rodent with a maximum lifespan of over 30 years, resistant to a wide range of age-related diseases. NMR tissues accumulate large quantities of unique, very high molecular weight hyaluronan (HA). HA is a major component of cartilage and synovial fluid. Importantly, both HA molecular weight and cartilage stiffness decline with age and with the progression of OA. As increased polymer length is known to result in a stiffer material, we hypothesized that NMR high molecular weight HA contributes to stiffer cartilage. Our analysis of the biomechanical properties of NMR cartilage revealed that it is significantly stiffer than mouse cartilage. Furthermore, NMR chondrocytes were highly resistant to traumatic damage. In vivo experiments using an injury-induced model of OA revealed that NMRs were highly resistant to OA. While similarly treated mice developed severe cartilage degeneration, NMRs did not show any signs of OA. Our study shows that NMRs are remarkably resistant to OA, and this resistance is likely conferred by high molecular weight HA. This work suggests that the NMR is a useful model to study OA resistance, and NMR high molecular weight HA may hold therapeutic potential for OA treatment.

| INTRODUCTION Osteoarthritis (OA) is a chronic, degenerative joint disease characterized by severe pain associated with cartilage degeneration and joint inflammation (Martel-Pelletier et al., 2016). Aging is a major risk factor for OA, and OA is a leading cause of disability in the elderly (Vos et al., 2012). OA is a degenerative disease with a complex etiology that results from age-related cartilage degeneration and may also be associated with obesity (Mooney et al., 2011) and traumatic events such as sport-related injuries (Englund, 2010; Roos et al., 1998). In the context of aging, the number of OA patients is predicted to increase as the proportion of the elderly in the population increases. By the year 2030, OA is predicted to affect one in four adults in the United States, becoming the major cause of morbidity among individuals over 40 years of age (Hootman & Helmick, 2006). Current treatment options are limited to pain management, physical therapy, and end-stage surgery. Thus, there is an unmet need to identify disease-modifying treatments for OA. Hyaluronic acid (HA) is a ubiquitous molecule that has been studied in joint biology and OA, with viscous solutions of HA clinically employed as intra-articular injections to serve as a lubricant between the surfaces of articular cartilage and synovial joints. In vitro studies show that HA's lubrication properties increase as the molecular weight of HA increases (Kwiecinski et al., 2011). In addition, the molecular weight and concentration of HA in joint spaces are known to decrease in OA patients (Balazs, 1974; Balazs et al., 1967). Therefore, as mentioned, intra-articular injections of high molecular weight HA have been used to alleviate OA symptoms (Hunter, 2015). Mice with a knockout of hyaluronan synthase 2 (Has2) die in mid-gestation (Camenisch et al., 2000).
Conditional inactivation of Has2 results in skeletal deformities and a decrease in deposition of the proteoglycan aggrecan into the ECM (Matsumoto et al., 2009). Aggrecan degradation is a hallmark of cartilage degeneration in OA, and mice hypomorphic for aggrecan show an increased incidence of spontaneous OA (Alberton et al., 2019). Another unique feature of NMRs is that they accumulate high levels of very high molecular weight HA in their tissues (Tian et al., 2013). We have previously shown that high molecular weight HA is a key mediator of the NMR's cancer resistance (Tian et al., 2013). Hence, in this study, we hypothesized that NMRs may be protected from OA due to their unique resistance to age-related disease and the high levels of high molecular weight HA in their joints. HA is not only a filler of joint spaces but is also a major component of cartilage itself. However, it is not known whether the molecular weight of HA in cartilage could influence OA development. In cartilage, HA is assembled with proteoglycans, and the HA-proteoglycan assemblies are further interwound with collagen fibers. We hypothesized that the mechanical properties of such a biopolymer matrix may be affected by the molecular weight of HA. Indeed, in polymer science it is well known that a material made of longer polymers is stiffer than one made of shorter polymers (Hendrikson et al., 2015; Jaspers et al., 2014). During OA progression, cartilage becomes softer (Kleemann et al., 2005); hence, the stiffer cartilage of the naked mole-rat, composed of very high molecular weight HA, may protect it from OA. Here, we investigated the vulnerability of resident cells and the biomechanics of NMR cartilage, as well as the NMR's resistance to OA, using a meniscal-ligamentous injury (MLI) model. We found that NMR in situ chondrocytes are less vulnerable and that NMR cartilage is intrinsically stiffer and remarkably resistant to OA compared to mouse cartilage. Our results suggest that high molecular weight HA protects from OA and that the NMR represents a unique model to study OA resistance. | Animals All animal experiments were approved and performed in accordance with guidelines set by the University of Rochester Committee on Animal Resources. Naked mole-rats were from the University of Rochester colonies. C57BL/6 mice were purchased from Charles River Labs. Tissues were obtained from at least three different animals for each experiment. All animals were young adults: NMRs were 2.5 years old, and mice were 6 months old. In the MLI study, all mice were males, and the NMRs were two males and one female. | Quantification of molecular weight of HA in cartilage In order to detect differences in the molecular weight of HA in the cartilage of C57BL/6 mice and NMRs, HA was isolated from femoral and tibial cartilage and analyzed by pulse-field gel electrophoresis following a previously established method (Tian et al., 2013). | Specimen preparations To analyze cell vulnerability and the Young's modulus of the extracellular matrix (ECM), distal femurs with fully intact cartilage were carefully dissected immediately after sacrifice using a previously established method (Kotelsky et al., 2019). Briefly, the intact cartilage on the femoral condyles was exposed by carefully removing the soft tissues (muscle, ligaments, and menisci) around the joint. Then, the bones were cut ~8 mm above the knee, and the dissected specimens were placed into Hank's Balanced Salt Solution (HBSS) with osmolarity adjusted to 303 ± 1 mOsm and a physiological pH of 7.4.
The specimens were kept hydrated in HBSS throughout the cell vulnerability and ECM mechanical property quantification experiments. | Quantification of Young's modulus of articular cartilage To investigate the possible effect of cartilage HA on mechanical properties, C57BL/6 mice (n = 5) and NMRs (n = 3) were used to quantify and compare the ECM Young's modulus (E) using a previously established inverse finite element-based method (Kotelsky et al., 2018). Briefly, cartilage of the dissected specimens was fluorescently stained with 10 µg/ml of 5′-DTAF (Sigma-Aldrich) for 1 h at 23°C, followed by a wash in iso-osmotic HBSS (303 mOsm, pH 7.4) to eliminate unbound fluorescent dye. The specimens were then placed on a custom microscope-mounted testing device with the cartilage on the distal femoral condyles against the cover glass (Figure 2a). Cartilage on the medial femoral condyles was imaged under a laser scanning confocal microscope (Olympus FV1000) with a 40× dry lens (LUCPLFLN, NA = 0.6) before and 5 min after a static load of 0.5 N was applied on top of the specimens, resulting in 3D confocal z-stacks with a resolution of 0.31 µm/pixel in the x-y plane and 2.18 µm/slice along the z-direction (Figure S1). Note that according to the load distribution analysis (see "Assessment of contact forces on medial femoral condyles of C57BL/6 mice and NMRs"), the 0.5 N force induced 0.19 N and 0.22 N contact forces between the cover glass of the testing device and the cartilage on the medial femoral condyles of C57BL/6 mice and NMRs, respectively (vide infra). The confocal z-stacks were used to obtain tissue stretch maps (λz) and measure the peak compressive deformation (i.e., the minimal tissue stretch, λz,min). The tissue stretch maps were quantified by dividing the spatially varying thickness of the compressed cartilage (under a load of 0.5 N) by the thickness measured before compression (0 N). Stretch calculations were performed after fitting the normalized depth-wise intensity profiles of the acquired confocal z-stacks (before and during cartilage compression) with Gaussian curves using a previously established MATLAB algorithm (Kotelsky et al., 2018). The resulting spatially varying fit parameter σ (indicating the depth-wise intensity spread) is proportional to the cartilage thickness and was used to obtain stretch maps by dividing σ at each location after compression by the value of σ at the same location before loading (λz = σ(0.5 N)/σ(0 N)). Peak cartilage compressive deformation (λz,min) was obtained from the λz maps by circumferentially averaging tissue stretch values observed within 5 µm of the global minimum. Specimen-specific 3D finite element models (FEMs) of murine/NMR cartilage on medial condyles were constructed in FEBio (Maas et al., 2012) to determine the Young's modulus of the cartilage ECM in tested specimens using a previously established method (Kotelsky et al., 2018). The cartilage on medial condyles was approximated as a uniform hemispherical shell with the cartilage thickness (42.8 ± 1.5 µm for C57BL/6 mice, 47.5 ± 3.9 µm for NMRs) and outer radius of curvature (913.8 ± 204 µm for C57BL/6 mice, 882.1 ± 5.7 µm for NMRs) measured from confocal z-stacks acquired at baseline. The cartilage thickness was measured in ImageJ, and the radius of curvature was quantified in MATLAB by fitting a circle to the curvature of the articular surface.
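The stretch-map arithmetic described above (λz = σ(0.5 N)/σ(0 N) per location, peak compression λz,min, and the FEM boundary displacement uz defined in the next paragraph) can be sketched as follows. The arrays are placeholders, and the averaging rule is a simple proxy for the paper's 5 µm circumferential average around the global minimum:

```python
# Sketch of the stretch-map arithmetic: per-pixel axial stretch from the
# Gaussian intensity-spread maps (sigma) before and during loading, the
# peak compressive stretch, and the FEM boundary displacement.
# Array contents are placeholders, not measured data.

import numpy as np

sigma_0N = np.random.uniform(9.0, 11.0, (64, 64))              # baseline map
sigma_05N = sigma_0N * np.random.uniform(0.60, 0.95, (64, 64)) # loaded map

lam_z = sigma_05N / sigma_0N  # tissue stretch map (lambda_z)
# proxy for the paper's circumferential average within 5 um of the minimum:
lam_z_min = lam_z[lam_z <= lam_z.min() + 0.01].mean()

thickness_um = 42.8                       # murine baseline thickness (paper)
u_z = (1.0 - lam_z_min) * thickness_um    # boundary displacement, um
print(f"lambda_z,min = {lam_z_min:.3f}, u_z = {u_z:.1f} um")
```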
The FEMs were constrained from movement in any direction on the top/inner surface (the bone-cartilage interface), while the bottom/outer surface (the articular surface) was allowed to deform freely. The boundary displacement (uz), defined as the difference between the thicknesses of the compressed and resting-state cartilage, was calculated from the experimentally measured peak tissue stretch (uz = (1 − λz,min) × baseline thickness). The boundary displacement was prescribed to a rigid platen at the cartilage-glass interface along the z-direction. The cartilage of both C57BL/6 mice and NMRs was modeled as a neo-Hookean hyperelastic material with a Poisson ratio (ν) of 0.2 (Cao et al., 2006). The Young's moduli of the articular cartilage on medial condyles were determined through an iterative parameter optimization algorithm implemented in FEBio by matching the experimentally measured forces (0.19 N and 0.22 N for C57BL/6 mice and NMRs, respectively) to the reaction forces determined from the corresponding FEMs on the rigid platen (Figure S2). | Assessment of contact forces on medial femoral condyles of C57BL/6 mice and NMRs The application of a 0.5 N load on top of the dissected specimens resulted in three distinct contact points: two cartilage-glass contacts and one bone-glass contact. The contact forces on the medial femoral condyles used to quantify Young's modulus were measured using a previously established method (Kotelsky et al., 2018). [Figure 1 caption: Naked mole-rat cartilage is composed of higher molecular weight HA than mouse cartilage. Purified HA separated on a pulse-field gel. Samples were either run intact or pre-digested with hyaluronidase (Hyal).] Briefly, the locations of the three contact points (medial condyle-glass, lateral condyle-glass, and bone-glass contacts) and the contact between the 0.5 N weight and the top of the specimen were first determined by placing the specimen on a sheet covered with black ink prior to compressing it with the 0.5 N weight, which was also stained with black ink. The ink-stained contact areas were then imaged using both inverted and upright microscopes. The contact locations, assessed from the acquired micrographs, were used to quantify reaction forces on the medial femoral condyles through a moment balance analysis. The resulting contact forces (Figure S3) were used to quantify and compare Young's moduli in these two groups. | Assessment of chondrocyte vulnerability to injurious loads Differences in mechanically induced chondrocyte vulnerability between C57BL/6 mice and NMRs were assessed using a previously established method (Kotelsky et al., 2019). Briefly, the dissected specimens were vitally stained for 30 min at 37°C with 10 µM calcein AM (Life Technologies), an indicator of intact cell membranes (i.e., live cells). The specimens were then washed in HBSS for at least 10 min to wash out excess calcein AM from the tissue. Next, the specimens were placed on the same mechanical testing device used to quantify the Young's modulus of articular cartilage (Figure 2a) and statically loaded for 3 min with a 0.5 N weight applied on top of the specimen (n = 7 for C57BL/6 mice, n = 3 for NMRs). The loading duration of 3 min was assumed to be sufficient for the ECM of both types of animals to reach poroelastic equilibrium based on the calculated gel diffusion time t_gel, a measure of the time constant for poroelastic equilibrium (Armstrong et al., 1984).
In particular, t_gel was approximated as h²/(H_A·k), where h is the tissue thickness, H_A the aggregate modulus, and k the hydraulic permeability (Armstrong et al., 1984). | Assessment of cartilage degeneration after meniscal-ligamentous injury Post-traumatic OA was surgically induced in one limb by employing a meniscal-ligamentous injury (MLI) model (Hamada et al., 2014). Briefly, this model resects a small part of the anterior horn of the meniscus, which leads to the development of post-traumatic OA joint changes by 4 weeks post-injury, with progression to terminal disease by 4 months in mice. In our study, animals received the MLI injury, and joints were harvested at 12 weeks post-MLI. In both mice and NMRs, the contralateral limb provided a sham control. Tissue fixation and histology preparation were performed following a previously established systematic approach. | Statistical analysis The extent of cartilage deformation, cartilage ECM Young's moduli, areas of injured/dead cells, and percent loss of the uncalcified cartilage layer after MLI were compared between C57BL/6 mice and NMRs using a Student's t test conducted in GraphPad Prism (version 6.01). | NMR cartilage is composed of high molecular weight HA and is stiffer than mouse cartilage We previously reported that HA in NMR tissues (skin, heart, kidney, and brain) has a higher molecular weight than in mice (Tian et al., 2013). However, NMR cartilage had not been previously analyzed. Using pulse-field electrophoresis, we first examined the molecular weight of HA extracted from mouse and NMR cartilage. We found that HA in NMR cartilage had a molecular weight at least twofold higher than the murine form (Figure 1). We next compared the mechanical properties (Young's modulus) of mouse and NMR cartilage using a previously developed method (Kotelsky et al., 2018). | NMR chondrocytes are resistant to traumatic damage We tested whether NMR chondrocytes are more protected from traumatic injuries using the setup shown in Figure 2a. When a static load of 0.5 N was applied on top of the dissected femurs for 3 min, substantially smaller areas of dead cells (~10-fold smaller) were observed on the surface of NMR condyles compared to C57BL/6 mouse specimens (Figure 2e,f). Taken together, these results demonstrate that the ECM in NMR cartilage is stiffer and NMR chondrocytes are more protected from damage than in C57BL/6 mice. These findings suggest that the NMR may be protected from OA. | NMR is resistant to post-traumatic OA To test whether the NMR is resistant to OA, we employed the MLI model of post-traumatic OA in both NMR and mouse. By surgical transection of the medial collateral ligament and removal of a part of the anterior horn of the medial meniscus, the knee joint becomes destabilized and post-traumatic OA ensues. The development of OA, evidenced by articular cartilage degeneration, is depicted by representative histologic images and staged via OARSI scoring. As shown in Figure 3, 12 weeks after MLI, mouse cartilage exhibited degenerative changes as typically seen in the MLI model at 12 weeks post-injury: loss of the uncalcified layer, depletion of proteoglycan staining, and fibrillation and thinning of cartilage. Remarkably, NMR cartilage showed no signs of degenerative changes in the MLI group. This result shows that the NMR is extremely resistant to OA, which is one of the most prevalent age-related diseases. | DISCUSSION In the present study, we investigated the biomechanics of NMR cartilage and the resistance of NMRs to the development of OA.
We demonstrated that NMR cartilage is composed of higher molecular weight HA; that NMR cartilage is intrinsically stiffer; that NMR chondrocytes are less vulnerable to traumatic injury; and that NMRs are more resistant to OA than mice. These results suggest that high molecular weight HA in cartilage may be a key factor in preventing cartilage degradation in OA. There are two main reasons NMR cells secrete higher molecular weight HA than mouse and human cells (Tian et al., 2013). NMRs have two unique amino acid changes in the hyaluronan synthase HAS2 gene and show increased HAS2 protein levels compared with human and mouse cells. In addition, NMR cells have low hyaluronidase activity. Because of these combined effects of increased HA anabolism and decreased HA catabolism, the molecular weight of HA in NMR tissues is higher than in any other mammal examined (Tian et al., 2013). In the present study, we confirmed that NMR cartilage is also composed of much higher molecular weight HA than mouse cartilage (Figure 1). Using the recently developed method to study the biomechanics of whole-mount distal femoral cartilage, we demonstrated that NMR cartilage is more than threefold stiffer than the cartilage of C57BL/6 mice (Figure 2). This difference in cartilage material stiffness led to smaller cartilage deformation in NMRs under 0.5 N static loading. Since the extent of tissue deformation scales with the extent of cell injury/death (Chen et al., 2003; Levin et al., 2005), the intrinsically stiffer cartilage of NMRs significantly reduced the vulnerability of chondrocytes to traumatic cell death, exhibiting a chondroprotective effect against mechanical injury compared to mice (Figure 2e,f). Importantly, the health of articular chondrocytes is crucial for maintaining cartilage homeostasis and thus may play a role in preventing injury-induced OA. In addition, the stiffness of human cartilage was reported to decrease with the progression of OA (Kleemann et al., 2005), suggesting that the stiff NMR cartilage may be protective against OA. By employing an MLI injury model, we demonstrated that NMRs are extremely resistant to post-traumatic OA. NMRs presented no signs of cartilage degradation, while mice subjected to the same injury presented severe OA phenotypes including significant cartilage loss. Importantly, NMRs are highly active animals, and the lack of cartilage degradation cannot be explained by a lack of movement. In our facility, NMRs move around in habitats composed of multiple chambers connected by tubes. Following MLI injury, the experimental animals were returned to their native colonies and continued their active lifestyle. Since mice were housed in standard cages, their movements were more limited than those of NMRs, which would give them an advantage in recovering from the injury. Therefore, the observed resistance of NMRs to OA is best explained by the decreased cell vulnerability and the increased mechanical strength of NMR cartilage due to the presence of high molecular weight HA. Another contributing factor is likely the NMR high molecular weight HA in the synovial fluid, as the lubrication properties of HA increase with its molecular weight (Kwiecinski et al., 2011), further reducing the vulnerability of in situ chondrocytes. Cellular senescence has been implicated in the pathogenesis of osteoarthritis (Deng et al., 2019; Jeon et al., 2017) by promoting inflammation.
Although NMR cells undergo senescence in response to stress, this response is attenuated (Zhao et al., 2018). Reduced senescence in NMR joints may be another factor protecting from OA. As HMW-HA has cytoprotective properties, it may be acting to attenuate senescence. To unequivocally prove that NMR high molecular weight HA is responsible for OA resistance, it would be important to generate a transgenic mouse model expressing NMR-size HA in cartilage and test whether it confers mechanical advantages and post-traumatic OA resistance. Overexpression of the NMR version of HAS2 in chondrocytes would suffice to test this. However, since mouse tissues rapidly degrade HA, achieving the same molecular weight and abundance of HA in the mouse as observed in the NMR would also require inhibiting hyaluronidase genetically or pharmacologically. Our study suggests that increasing HMW-HA in the joints may have a therapeutic effect. Although intra-articular injections of HA have been used in the clinic, more efficient and less invasive methods may be desirable. These may include gene therapy vectors expressing the HAS2 enzyme delivered directly to the joint, or systemic inhibitors of hyaluronidases that would slow down the degradation of HMW-HA, leading to its accumulation in both cartilage and synovial fluid. In summary, our study demonstrates that NMRs are resistant to yet another age-related disease, OA. Our results show that NMR cartilage is composed of very high molecular weight HA and is significantly stiffer than mouse cartilage, which is composed of lower molecular weight HA. These results suggest that very high molecular weight HA in cartilage protects NMRs from OA. Intra-articular injections of HA have been used to treat OA in human patients; however, the HA used in these treatments is typically 1 MDa, which is shorter than the HA in NMR cartilage. Our work may encourage further investigation into the beneficial effects of very high molecular weight HA, both in cartilage and synovial fluid, for maintaining cartilage biomechanics, and into potential therapeutic interventions to delay or prevent OA progression.

ACKNOWLEDGEMENTS
This work was supported by grants from the US National Institutes of Health to MRB, MZ, A.S., and V.G. and NIH T32 to T.T.

CONFLICT OF INTEREST
Authors declare no conflict of interest. wrote the manuscript with input from all authors.

DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available in the main text and supplementary material (Figures S1-S3) of this article.
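As a brief illustration of the two-group comparisons in the Statistical analysis subsection above (Student's t test on deformation, moduli, cell-death areas, and cartilage loss), the same test can be run outside GraphPad Prism; below is a minimal Python sketch using SciPy, where the numeric arrays are hypothetical placeholders rather than the study's data.

```python
# Minimal sketch of the mouse-vs-NMR comparison described in the
# Statistical analysis subsection (two-sample Student's t test).
# The arrays below are hypothetical placeholder values, not the study's data.
import numpy as np
from scipy import stats

mouse_moduli = np.array([1.1, 0.9, 1.3, 1.0, 1.2])  # hypothetical Young's moduli
nmr_moduli = np.array([3.4, 3.9, 3.1, 3.6, 3.8])    # hypothetical Young's moduli

# Classic equal-variance two-sample Student's t test.
t_stat, p_value = stats.ttest_ind(mouse_moduli, nmr_moduli)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```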
Comparison of Red versus Blue Laser Light for Accurate 3D Measurement of Highly Specular Surfaces in Ambient Lighting Conditions

Inspection or quality control is an essential stage of the production line. For some products, accurate three-dimensional (3D) reconstructions are necessary for inspection [1]. The type of surface of the product plays a critical part in choosing a suitable 3D reconstruction method. The inspection of highly specular surfaces is still a limitation of most state-of-the-art 3D measurement techniques, and most available commercial solutions cannot inspect specular surfaces in ambient lighting conditions. This research paper focuses on a simple and accurate 3D measurement technique using a laser and stereo cameras for the inspection of reflective-surface objects. In this technique, a single laser line is projected onto the surface, and its stereo images are captured and processed for 3D reconstruction. The method overcomes the limitations of traditional methods and works robustly in ambient lighting conditions. As our experiments are performed in ambient lighting conditions, it is essential to project the right type of laser light on the object. Two different colours (red and blue) of laser light are considered. Here, we reconstructed 3D profiles of three differently shaped objects and estimated their sizes using these two lights. This article compares the output 3D profiles obtained using red laser light with those obtained using blue laser light. The results are quantitatively evaluated against the ground-truth 3D models of the acquired objects for accuracy evaluation. We also discuss the dependency on the specularity of the surface.

Introduction
For customer satisfaction, reliable inspection and quality control of the product is necessary. It also assures confidence to the manufacturer and reduces manufacturing cost by eliminating scrap losses. The quality results help to simulate failure modes and verify strength criteria to validate functional product design [4]. Like the manufacturing process, the quality checking process also needs to be fast, accurate, simple, cost-effective and automatic. "Machine vision is the technology and methods used to provide imaging-based automatic inspection and analysis [3]." This research has been carried out in collaboration with Facteon Intelligent Technology Limited. Facteon manufactures different parts of consumer appliances such as drums, doors and panels, cabinets and cases, water heater cases and refrigeration foaming lines [5]. The base material of most of the products is stainless steel, which makes the surface highly specular in the presence of light. As seen in Fig. 1, slight variations in the viewing plane, the angle of view, or the camera position can significantly affect the apparent colour. In general, the more direct the reflection, the brighter the colour, and vice versa. Also, the ambient lighting conditions of the working environment cause significant difficulty for quality inspection. A significant number of 3D shape measurement techniques have been proposed in the last few decades. Time-of-flight, stereo vision, laser range scanning and structured lighting are some examples of the non-contact techniques used for high-speed inspection of objects [6]. These techniques provide accurate results for non-reflective surfaces. Structured lighting and stereo vision are considered the most effective techniques for specular surfaces.
However, the shape or curvature of the specular surface will cause reflections in ambient lighting conditions. Even with structured lighting or stereo vision techniques, it is challenging to observe every small feature of the object in the region of reflection [7]. Therefore, we use a method which combines the concepts of sheet-of-light and stereo vision for the inspection of highly specular surfaces. Here, a narrow band of light is projected onto a 3D surface. "The projection produces a distorted line of illumination, which represents the profile of an object [2]." Stereo cameras capture the image of the distorted laser line in a calibrated environment. An algorithm is developed to detect the laser line accurately in both photos. After accurately detecting the laser line in both images, stereo matching is performed for 3D reconstruction of the laser profile in the World Coordinate System (WCS). The output accuracy mainly depends on the detection of the projected laser line. To get accurate results in ambient lighting conditions, it is important that the projected narrow band of laser light is sharp and resembles the shape of the product. Here, we have used red- and blue-light lasers as light sources. A laser emits coherent light; as a result, the laser beam stays narrow and focused over a great distance [8]. The effect of the projected narrow band of light on the accuracy of the output 3D profile is studied here. The wavelength of a red-light laser diode is generally 638 nm, 650 nm or 670 nm. On the other hand, the wavelength of a blue-light laser diode is normally 450 nm, 473 nm or 488 nm [12]. All these wavelengths fall within the visible region of the electromagnetic radiation spectrum [13]. Visible-beam lasers are classified into four classes based on their maximum output power: Class 2, Class 3R, Class 3B and Class 4 [14]. Figure 2 shows that the eye injury hazard increases as the laser's power increases [15]. As the experiments are performed in an open working environment, we have used Class 2 lasers for this research. In this paper, we projected a narrow band of red and blue laser light onto three differently shaped objects. The first object is the drum of a washing machine manufactured by Facteon. The drum is made of stainless steel, which makes its surface highly specular. The other two items are a cube and a prism, wrapped in aluminium foil to create the effect of reflective surfaces. The 3D profiles of each object are created for red and blue laser light using the above-specified technique. For each object, the output 3D profile obtained using red laser light is compared with the output 3D profile obtained using blue laser light. Later, they are also compared with the ground-truth three-dimensional model of the acquired object for accuracy evaluation. The remainder of this paper is structured as follows: Sect. 2 briefly describes the reviewed literature and compares commercial solutions for red vs blue laser light. Section 3 explains our 3D measurement technique in detail and illustrates the dependency of the type of laser light on the specularity of the surface. All steps of our approach, with experimental results, are shown in Sect. 4. Section 5 concludes the paper.

Commercial Solutions
In this section, the commercial solutions which use red-light lasers are compared with those which use blue-light lasers. Table 1 compares products which are based on the concept of sheet-of-light.
A trade-off between the accuracy of the output and the field of view (FOV) covered by the system is seen in all these solutions. Also, the FOV covered by these products is very small at higher resolutions; therefore, multiple laser profilers are required to inspect large objects, which increases the cost of the inspection process. Another disadvantage is that most of the solutions do not work in the ambient lighting conditions of the working environment. Moreover, the resolution of the red-light laser is low compared to the blue-light laser, and some of the red-light laser profilers do not work for highly specular surfaces. Therefore, blue-light laser profilers are considered a better choice for the inspection of small specular objects.

Methodology
The flow chart in Fig. 3 represents our suggested approach for the inspection of reflective surfaces. A narrow band of light is projected on the surface of the object. The stereo cameras capture images of the product in the calibrated environment. These captured images are first rectified using the calibration parameters. To generate the Region of Interest (ROI) automatically, we have used the concept of two-dimensional (2D) shape matching. In Fig. 4, the red curve depicts the intensity distribution of the projected laser line for the first row. As we can see, the intensity distribution of the projected laser profile resembles a bell-shaped curve. The first step of the detection algorithm is to smooth the curve using a Gaussian function [17]. The blue curve shows the smoothed function. The next step is to find the local maxima of the blue curve [18]. Also, we find the location of the pixel with the highest grey-value intensity in the red curve. Now, we compare the pixel location of each local maximum with the location of the highest grey-value pixel, and the local maximum nearest to the highest grey-value pixel is taken as the detected point of the laser line. In the case of multiple candidates, an algorithm is developed to choose the most suitable one; this algorithm also considers the grey-value intensity at each local maximum for accurate decision making. However, we cannot repeat the same process for each row. If the ROI contains highlights caused by ambient light, the distribution of intensity would be affected [2]. As we can see in Fig. 5, there are two possible circumstances. In the first case, the highlight is separate from the projected laser line. In the second case, the highlight is merged with the laser line, making the intensity distribution a wide bell-shaped curve. In Fig. 5a, the first peak is the peak of the highlight, and the same method would mistake the highlight's peak intensity for the projected laser profile. Therefore, the location of the detected laser point in the previous row is taken as a reference for the next row. "If the location of the detected laser profile is (x, y), then in the next row, we search pixel locations (x+1, y-10) to (x+1, y+10) for finding the pixel with the highest grey value [2]." Now, we repeat the process used for the first line to detect the laser line. The problem of detecting the laser line when it is merged with the highlight is solved in the next stage of the experiment. Here, we compare the detected laser line in both images. Typically, the highlights caused by ambient lighting conditions will not be visible in both photos.
Therefore, the highlights which are present in the left image will not be present in the right image, and vice versa. For the case where the laser line is merged with the highlight in one image, we would still be able to detect the laser line accurately in the other image. By comparing, we can identify the regions where the output is inaccurate because of the reflection. Another critical factor of the detection process is that the reflected region should not be mistaken for a part of the projected laser profile because of its high intensities. The red-light laser penetrates deeper into the target surface than the blue-light laser. Therefore, the red-light laser looks blurry and diffused and merges with the reflected region. On the other hand, the blue-light laser produces a much more focused laser band when projected onto the surface [19]. Figure 6 shows the images of the projected red-light and blue-light lasers on the washing machine drum in ambient lighting conditions. Here, both lasers have the same maximum output power. As seen in Fig. 6, the red-light laser has low intensities compared to the blue-light laser. Therefore, when trying to detect the red-light laser in the highlighted region, the reflected region is detected as a part of the laser line. In contrast, the projected blue-light laser is detected accurately, even in the presence of the reflected area. This is one of the advantages of using a blue-light laser instead of a red-light laser for 3D reconstruction of highly specular surfaces.
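To make the row-wise detection concrete, here is a minimal Python sketch of the procedure described in the Methodology section: Gaussian smoothing, local-maxima extraction, selection of the maximum nearest the brightest pixel, and a ±10-pixel search window carried from row to row. This is an illustrative reimplementation under stated assumptions, not the authors' HALCON code; function names and tolerances are hypothetical.

```python
# Minimal sketch of the row-wise laser-peak detection described above.
# Not the authors' HALCON implementation; names and tolerances are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import argrelextrema

def detect_peak_in_row(row, prev_col=None, window=10, sigma=3.0):
    """Return the column of the laser peak in one image row.

    row      -- 1D array of grey values for this row
    prev_col -- peak column found in the previous row (limits the search)
    """
    if prev_col is not None:
        lo = max(0, prev_col - window)
        hi = min(len(row), prev_col + window + 1)
    else:
        lo, hi = 0, len(row)
    segment = row[lo:hi].astype(float)

    smoothed = gaussian_filter1d(segment, sigma)     # bell-shaped profile
    maxima = argrelextrema(smoothed, np.greater)[0]  # local maxima
    if len(maxima) == 0:
        return lo + int(np.argmax(segment))          # fall back to global max

    brightest = int(np.argmax(segment))              # highest raw grey value
    # Pick the local maximum nearest the brightest pixel, as in the paper.
    best = maxima[np.argmin(np.abs(maxima - brightest))]
    return lo + int(best)

def detect_laser_line(image):
    """Trace the laser line down the image, row by row."""
    cols, prev = [], None
    for row in image:
        prev = detect_peak_in_row(row, prev_col=prev)
        cols.append(prev)
    return np.array(cols)
```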
Experiments and Results
The setup of the experiment is shown in Fig. 7. In this experiment, we are using red- and blue-light lasers of Class 2M. The maximum output power of both lasers is 20 mW, and both single-line lasers have the same fan angle of 45 degrees. The position of the laser is one of the most critical parameters of this technique. To understand why, we projected a narrow band of a horizontal laser line onto a flat surface. Figure 8 shows the captured image of the projected laser line. As seen in Fig. 8, the intensity of the laser decreases as the distance from the centre of the image increases. Also, the laser line diffuses and causes reflection when it is projected directly at the centre of the image. However, if the laser line is projected too far from the centre of the image, its intensity is low and it merges with the background. Therefore, we need to choose the position of the laser in such a way that it does not cause any reflection and does not merge with the background in either image. For the cameras, we are using two Genie Nano M4020 monochrome cameras, which have a resolution of 4112 × 3008. The focal lengths of both cameras are identical. Also, the stereo cameras are placed in canonical stereo geometry, which means their optic axes are parallel. The baseline distance, which is the distance between the optical axes of the two cameras, is 130 mm for this experiment [16]. The setup is the same for both lasers. The experiment is first performed using the red-light laser and afterwards using the blue-light laser. Moreover, HALCON software is used to perform the image processing tasks. The stereo cameras are calibrated using a HALCON calibration plate, which has a pattern of hexagonally arranged black marks printed on a white background. A narrow band of the red-light laser line is projected on the object after calibrating the stereo cameras. Figure 9 shows images of the differently shaped objects with a projected laser band. Stereo cameras capture the images of the object with a projected laser line. The next step is to rectify the captured stereo images. After rectification, we use the fundamentals of 2D shape matching to obtain the Region of Interest (ROI) automatically. Now, we apply the algorithm to detect the projected laser line in both images. The left and right ROI images with detected laser profiles are shown in Fig. 10. Stereo matching is performed only on the detected laser profiles to calculate their disparity. We can reconstruct the projected laser line in the World Coordinate System (WCS) using the calculated disparity values and the calibration parameters. "For a full 3D reconstruction of the object, the object is rotated at specific intervals. At each interval, the projected laser profile is reconstructed. By merging these reconstructed laser profiles, we can reconstruct the shape of the object in 3D for accurate measurements [2]." For each object, we repeat the whole process, replacing the red-light laser with the blue-light laser; no changes are made to the setup for the blue-light laser. Figure 11 shows the left and right ROI images with detected laser profiles for the blue-light laser. In Fig. 12, we compare the reconstructed 3D profiles obtained with the red-light laser against those obtained with the blue-light laser. The accuracy of the output mainly depends on the detection of the laser profile. Table 2 shows the number of scans required to reconstruct each object using the red- and blue-light lasers. Here, "false positive" specifies how many times the reflection was falsely detected as part of the laser line. From the deviation results, we can tell that the detected red-light laser line is quite noisy. On the other hand, the detected blue-light laser line is quite sharp, and it captures all the small features of the object accurately.

Conclusion
To conclude, the blue-light laser is proven to be more accurate than the red-light laser for 3D shape measurement of highly specular surfaces under identical conditions. The projected beam of the red-light laser diffuses and merges with the reflections caused by ambient light. The blue-light laser, by contrast, does not penetrate the surface and provides a sharp, narrow beam when projected onto a highly specular surface. Therefore, we can accurately detect the blue-light laser even in the presence of reflection, and the accuracy of the output is improved by using it. Moreover, the proposed method is proven to be a simple, fast, feasible, accurate and cost-effective solution for the inspection of reflective objects, even in ambient lighting conditions. Unlike commercial laser profilers, there is no trade-off between the field of view and the accuracy of the output.
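For completeness, the triangulation step behind the reconstruction described above can be illustrated for the canonical stereo geometry used here (rectified images, parallel optic axes, 130 mm baseline). This is a minimal sketch assuming an ideal pinhole model; the focal length and principal point below are placeholders, and a real setup would use the intrinsics recovered from the HALCON calibration plate.

```python
# Minimal sketch of depth-from-disparity triangulation for canonical
# stereo geometry (rectified images, parallel optic axes). The intrinsics
# below are placeholders; the real values come from camera calibration.
import numpy as np

BASELINE_MM = 130.0      # distance between the optical axes, as in the setup
FOCAL_PX = 8000.0        # hypothetical focal length in pixels
CX, CY = 2056.0, 1504.0  # hypothetical principal point (image centre)

def triangulate(u_left, v, disparity):
    """Map a detected laser pixel and its disparity to 3D coordinates (mm)."""
    d = np.asarray(disparity, dtype=float)
    z = FOCAL_PX * BASELINE_MM / d                    # depth along the optic axis
    x = (np.asarray(u_left, dtype=float) - CX) * z / FOCAL_PX
    y = (np.asarray(v, dtype=float) - CY) * z / FOCAL_PX
    return np.stack([x, y, z], axis=-1)

# Example: a laser point at column 2300, row 1500 in the left image with a
# disparity of 260 px lies roughly 4 m away for these placeholder values.
print(triangulate(2300.0, 1500.0, 260.0))
```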
miRNAs Associated with Nasopharyngeal Carcinoma: New Prognostic Biomarkers and Therapeutic Targets

Background: Most of the studies included in this analysis highlighted the role of miRNAs in the prognosis of nasopharyngeal carcinoma (NPC) patients. Our aim in this bioinformatics study was to synthesize relevant data associated with NPC to identify new prognostic markers.
Methods: miRNA and mRNA data related to NPC were downloaded from the Gene Expression Omnibus (GEO) database. A variety of online analytical tools (starBase, TargetScanHuman7.2, Metascape, Cytoscape_v3.6.0, and GEPIA) were used to analyze the downloaded data to identify biomarkers with extremely high sensitivity and further research value.
Results: Changes in the expression levels of 13 miRNAs played crucial roles in the overall survival of NPC patients (miR-375 was down-regulated; miR-96-5p, miR-155-5p, miR-320a, miR-378a-3p, miR-15b-5p, miR-21-5p, miR-25-3p, miR-93-5p, miR-493-3p, miR-493-5p, miR-494-3p, and let-7i-5p were up-regulated). Additionally, eight central genes (CDK1, CCNB1, CCNA2, TOP2A, AURKA, MAD2L1, CDC6, and CHEK1) were identified as target genes for further NPC therapy.
Conclusions: Our findings further elucidate the underlying relationship between miRNAs and prognosis in NPC. The identified miRNAs are closely related to NPC prognosis and have extremely high research value for future medical treatment.

Introduction
Nasopharyngeal carcinoma (NPC), which has a high mortality rate, remains a medical challenge in Southwest China. In 2018, there were 129,000 new cases of NPC worldwide. However, due to insufficient reliable evidence in the literature, several issues regarding the pathogenesis and clinical management of NPC remain unresolved 1. Approximately one-quarter of patients with NPC are in the advanced stage 2. Although radiotherapy remains the most essential treatment for NPC, the recurrence rate of the disease is as high as 82% due to radiotherapy resistance 3,4. In recent years, there has been abundant research on radioresistance, drug resistance, and angiogenesis to uncover the underlying mechanisms of miRNA involvement in NPC progression 5. Changes in radiotherapy sensitivity and the prognosis of NPC patients can also occur due to changes in miRNA expression. By influencing signaling pathways, miRNAs can participate in the radiotherapy resistance of NPC cells. For example, NPC cell proliferation and tumor progression are regulated by miR-375, which is up-regulated through the promotion of the pyruvate dehydrogenase kinase 1 (PDK1)/phosphatidylinositol (3,4,5)-trisphosphate (PIP3)/protein kinase B (Akt) axis 6. In contrast, the down-regulated miR-429 functions as a tumor suppressor 7. For these reasons, new strategies to treat NPC need to be explored. There is sufficient evidence closely relating miRNAs to the prognosis of NPC patients. Gene chips are a highly reliable technology that can quickly detect differentially expressed genes (DEGs), as demonstrated by more than 10 years of research 8.
The technology simplifies the identification of specific items in a database and opens various possibilities for further research. Some authors have conducted in-depth research on miRNAs related to NPC prognosis using a bioinformatics approach 9. However, the research outcomes were limited. Therefore, we aimed to conduct more detailed research with broader applications. The Gene Expression Omnibus (GEO) can provide the various data required for our research, including the respective expression data for genes and miRNAs. The expression data for patients with NPC were downloaded from this database for further research, which yielded substantial results. We found miRNAs that could serve as prognostic biomarkers for NPC, the key genes they target, and the network of connections between them, and we subsequently visualized these connections. The results may help establish a clearer regulatory network of miRNAs that target mRNAs and clarify the role of these networks in NPC prognosis.

Microarray data information
We obtained two public datasets (GSE70970 and GSE12452) comprising NPC and normal tissues from GEO. The microarray data for GSE70970 and GSE12452 consisted of 246 NPC tissues with 17 normal tissues and 31 NPC tissues with 10 normal tissues, respectively. The experimental platforms used for GSE70970 and GSE12452 were as follows.

Data processing of DEGs
DEGs between NPC specimens and normal specimens were selected via the GEO2R online tool 10 based on the following standard: |log fold change (FC)| > 1.5 and an adjusted P-value of < 0.05. The raw data were processed in Venn software to detect commonly expressed genes present in both GSE70970 and GSE12452. The criterion for the down-regulation of a miRNA was logFC < 0, and the criterion for up-regulation was logFC > 0.

miRNA target prediction
In order to verify the target genes of the miRNAs and the accuracy of the results, we combined starBase (http://starbase.sysu.edu.cn/) 11 with TargetScanHuman7.2 (http://www.targetscan.org/) 12 for accurate identification of the target genes. We then performed miRNA target prediction and analyzed the mRNA gene variance. The results of these procedures jointly revealed that the target genes were related to the prognosis of NPC. To better understand the organic capabilities of the selected genes, we used an online tool to perform Gene Ontology (GO) and pathway enrichment analyses. GO analysis is a generally accepted method for defining genes and their RNA or protein products to identify distinct molecular biological functions through high-throughput screening for tumor-specific transcripts 13. Metascape (https://metascape.org) is an online tool that produces results with high credibility. Metascape has a wealth of functions, including comprehensive gene list annotation, analysis resources, and visual enrichment analysis 14. Metascape provides a biologist-oriented resource for the analysis of system-level datasets, and we used these functions to perform enrichment and pathway analyses of the DEGs (P < 0.05).

Protein interaction network
We imported the predicted target genes into the STRING search tool (http://string-db.org) to evaluate the reciprocity within the protein-protein interaction (PPI) network for the genes. There were 589 nodes and 5,095 edges in the resulting PPI network. We then selected the top 50 nodes (ranked by degree) from the above results and visualized them with Cytoscape software 15, using a confidence threshold value of 0.4 (intermediate fiducial interval).
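The two filtering steps just described, the GEO2R thresholds (|logFC| > 1.5, adjusted P < 0.05) and the degree ranking of the STRING network, are simple enough to sketch in code. The sketch below assumes a GEO2R result table and a STRING edge list exported as tab-separated files; the file names and column labels are hypothetical and would need to match the actual exports.

```python
# Minimal sketch of the DEG filtering and PPI degree ranking described above.
# File names and column labels are hypothetical; adapt to the actual exports.
import pandas as pd
import networkx as nx

# 1) DEG selection with the GEO2R thresholds used in this study.
geo2r = pd.read_csv("geo2r_results.tsv", sep="\t")  # columns: ID, logFC, adj.P.Val
degs = geo2r[(geo2r["logFC"].abs() > 1.5) & (geo2r["adj.P.Val"] < 0.05)]
up = degs[degs["logFC"] > 0]    # up-regulated (logFC > 0)
down = degs[degs["logFC"] < 0]  # down-regulated (logFC < 0)

# 2) Degree ranking of the STRING PPI network (edge list: protein1, protein2).
edges = pd.read_csv("string_edges.tsv", sep="\t")
graph = nx.from_pandas_edgelist(edges, "protein1", "protein2")
degree = sorted(graph.degree, key=lambda kv: kv[1], reverse=True)

top50 = degree[:50]                        # nodes visualized in Cytoscape
hubs = [n for n, d in degree if d >= 110]  # central-gene threshold used here
print(len(up), len(down), hubs[:10])
```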
Survival analysis and central gene expression
To identify the central genes among the predicted target genes, we used CentiScaPe (an application in Cytoscape_v3.6.0 software) and prudently selected central genes with a rigid threshold of ≥ 110 as high-connectivity nodes for subsequent analysis 16. We established a network of targeting miRNAs and regulated mRNAs with Cytoscape software and performed a visual analysis. Subsequently, the online analysis tool ONCOMIR (http://www.oncomir.org/) 17 was used to evaluate the prognostic value of the central genes. Genes meeting the P < 0.05 criterion were significantly correlated with survival and were considered prognostic genes. We obtained the expression data for the central genes in normal tissues and tumor tissues through the Oncomine database. Immunohistochemistry staining results obtained from the Human Protein Atlas confirmed that the expression levels of the central genes were closely related to NPC prognosis.

Identification of differentially expressed miRNAs in NPC
We downloaded the GSE12452 dataset from the GEO database as described above. This dataset contains data for 41 NPC-related mRNA samples (10 normal and 31 tumors). The dataset was examined with the GEO2R online tool to identify differentially expressed mRNAs, with P < 0.05 and |logFC| > 1.5 specified as the standard (Fig. 2).

miRNA target prediction and study of the functions
The online target gene prediction tools starBase and TargetScanHuman 7.2 were employed to complete our target gene prediction, resulting in a total of 1976 genes. We employed a Venn diagram, taking into consideration the accuracy of the biological analysis, to evaluate genes closely related to the identified miRNAs and mRNAs, and finally selected 589 target genes. To investigate the effects of these genes at the molecular level, we used Metascape to perform GO enrichment analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis on the genes to predict their biological functions and the related pathways among distinct DEGs. Figure 3A shows that the functions of the DEGs were related to cellular processes, regulation of biological processes, and cellular component organization or biogenesis. The DEGs were significantly enriched in the cell cycle, non-integrin membrane-extracellular matrix (ECM) interactions, transcriptional regulation of E2F6, interleukin-4 and interleukin-13 signaling, as well as signaling by the KIT stem cell factor (KIT-SCF), as shown in Fig. 3B. Furthermore, to perform PPI enrichment analysis and confirm the connections of the network modules, we employed MCODE. Pathway and process enrichment analysis was applied independently to each MCODE module. The results provided evidence of a close correlation between the biological functions of the selected genes and the "cell cycle," "non-integrin membrane-ECM interaction," "cilium organization," "regulation of DNA recombination," "DNA replication," "response to radiation," "organelle fission," and "cell division" (Figs. 4-6).

PPI network and verification of central genes
The STRING online tool was utilized to further validate the interactions between our target genes (Fig. 7A). We also used Cytoscape software to visualize the top 50 results, providing a direct view of the most critical results from the study as a PPI network ranked by degree (Fig. 7B). There were 583 nodes and 5,095 edges between proteins in the network.
We calculated the connectivity between the nodes using CentiScaPe software to identify highly connected genes relevant to the disease. From the results, eight nodes (with degree ≥ 110) were selected as central genes (CDK1, CCNB1, CCNA2, TOP2A, AURKA, MAD2L1, CDC6, and CHEK1). During the retrospective analysis of the miRNA-mRNA regulatory network, most of the genes mentioned above were found to be regulated by the differentially expressed miRNAs related to NPC. The eight central genes were regulated by nine miRNAs (miR-96-5p and miR-375 regulated the expression of CDK1; miR-493-3p regulated the expression of CCNB1; miR-320a regulated the expression of CCNA2; miR-21-5p and miR-96-5p regulated the expression of TOP2A; miR-25-3p and let-7i-5p regulated the expression of CDC6; miR-493-3p regulated the expression of MAD2L1; miR-493-3p and miR-493-5p regulated the expression of AURKA; and miR-378a-3p and let-7i-5p regulated the expression of CHEK1).

Survival analysis for central genes and miRNAs
We used the online tool ONCOMIR to further assess the clinical characteristics of the 13 miRNAs and their correlations with survival, based on previous research. The results demonstrated that the expression of the 13 miRNAs was crucial for the survival of patients with NPC (Fig. 8). Moreover, the P values for the eight central genes were far less than 0.05 in the survival analysis. Significantly elevated expression of the central genes was observed in NPC as compared to normal tissues in the analysis using the Oncomine database (Fig. 9). This indicates that a significant increase in the 10-year survival rate of NPC patients is related to low-level expression of the eight central genes (Fig. 10). Moreover, immunohistochemistry staining data acquired from the Human Protein Atlas database also confirmed the findings regarding the expression of the central genes (Fig. 9).

Discussion
Some uncertainties remain in the research and exploration of NPC treatment, especially oncogenic mutation patterns in NPC 1,2,18. This study was performed using tumor-related data from GEO to identify NPC-related miRNAs and mRNA regulatory molecules, with the aim of achieving a breakthrough in the research process for NPC gene therapy. In recent years, a growing number of studies have demonstrated the potential of miRNAs in NPC research. This study was conducted using bioinformatics methods based on two datasets (GSE70970 and GSE12452) from the GEO database. Through a tool named GEO2R, with |logFC| > 1.5 and adjusted P-value < 0.05 as the research standard, we demonstrated that one miRNA (miR-375) was up-regulated and 12 were down-regulated (miR-96-5p, miR-155-5p, miR-320a, miR-378a-3p, miR-15b-5p, miR-21-5p, miR-25-3p, miR-93-5p, miR-493-3p, miR-493-5p, miR-494-3p, and let-7i-5p) in NPC. The identification of these miRNAs and mRNA targets that are closely related to the prognosis of NPC lays a solid foundation for future research.
Although there are no reports relating miR-155, miR-320a, miR-15b-5p, miR-21-5p, miR-23-3p, miR-93-5p, miR-493-3p, miR-494-3p, and let-7i-5p to NPC, they have been reported as tumor inhibitors in other carcinomas such as lung cancer and hepatocellular carcinoma 19-24. This indicates that they also have considerable potential for further research in NPC. To gain a deeper understanding of the uncharted territory of miRNAs, we constructed a PPI network using Metascape and performed target gene prediction and functional analysis of 344 generated target genes. These target genes were found in previous studies to possess various functions related to tumor progression, such as the cell cycle, non-integrin membrane-ECM interactions, cilium organization, DNA recombination, DNA replication, and response to radiation. The main pathological type of NPC is squamous cell carcinoma, and cell cycle genes have been reported to affect the differentiation of squamous cells 25,26. ECM-related genes play an indispensable role in the tumor microenvironment, and their potential as target genes and biomarkers has been described in many studies 27. The susceptibility to NPC in the Chinese population is closely related to the interleukin (IL)-13 gene 28, while IL-4 has been shown to affect the progression of NPC by affecting the signal transducer and activator of transcription (STAT) pathway 29. There is much evidence to support the indispensable role of KIT-SCF in other cancers 30,31. An analysis was conducted in the form of a PPI network for the in-depth detection and study of target genes. We identified eight genes as central genes based on connectivity scores ≥ 110 and conducted follow-up survival analysis on these genes. The results showed that the central genes (CDK1, CCNB1, CCNA2, TOP2A, AURKA, MAD2L1, CDC6, and CHEK1) played a remarkable role in the 10-year survival rates of patients with NPC. miRNAs are non-coding small RNA molecules that can inhibit transcription and translation and cleave target mRNAs to accelerate their degradation. In his review, Syafirah suggested that a single miRNA marker is not adequate for NPC prognosis; therefore, studying manifold miRNA biomarker profiles can help improve the reliability of prognostic research on NPC 32. Our study has advanced the existing knowledge on combined applications of miRNAs and mRNAs, and the results of our research support the hypothesis we put forward. MiR-96-5p and miR-375 regulate the expression of CDK1, miR-493-3p regulates the expression of CCNB1, miR-320a regulates the expression of CCNA2, miR-21-5p and miR-96-5p regulate the expression of TOP2A, miR-25-3p and let-7i-5p regulate the expression of CDC6, miR-493-3p regulates the expression of MAD2L1, miR-493-3p and miR-493-5p regulate the expression of AURKA, and miR-378a-3p and let-7i-5p regulate the expression of CHEK1. There have been quite a few studies on the abovementioned central genes in the past. CDK1 can combine with CCNB1 to actuate the G2-M transition and bind to other cyclins to further adjust and control the G1 process and the G1-S transition 33. In a previous study, CDK1 was confirmed as a direct target of miR-96-5p, which reportedly serves as a tumor inhibitor in NPC 34. As shown in some studies, CDK1 may be related to radiosensitivity in tumor cells 35. CCNA2 was reported to be a downstream target of miR-29c-3p and is mainly enriched in the cell cycle 36.
A study on promoter methylation revealed that CCNA2 might be a tumor suppressor gene in NPC 37. TOP2A is a subtype of TOP2, which plays an important role in DNA synthesis and transcription 38. CCNB1 is an important component of the cell cycle pathway and one of the hub genes that substantially influences cancer development. Some studies have revealed the potential role of CHK1-related pathways and CDC6 in reversing radioresistance 39,40. Some previous bioinformatics studies related to NPC showed that AURKA, CCNB1, and MAD2L1 are genes with considerable value for further research 41. However, to our knowledge, there have been no reports about the abovementioned genes (CCNB1, CDC6, TOP2A, CHK1, and AURKA) in the field of NPC research to date. These genes are expected to become new research targets in the future.

Conclusion
Drawing conclusions from the results of the present study, we methodically analyzed mRNAs and miRNAs related to NPC based on data from GEO and a systematic bioinformatics prediction approach. Our research deepens the comprehension of NPC at the molecular biology level and lays the foundation for future studies regarding the prognosis of patients with NPC. This study is expected to become a cornerstone for future studies on NPC. The datasets generated and analyzed during the present study are available from the corresponding author on reasonable request.
None

Author contributions
JYX and SL designed/performed most of the investigation; XHL performed the data analysis and wrote the manuscript; CSZ provided pathological assistance; WW and LCF contributed to the interpretation of the data and analyses. All of the authors have read and approved the manuscript.

Conflict of Interest
The authors declare that they have no conflict of interest.

Figure 1: Volcano plot of differentially expressed mRNAs. The red dots represent up-regulated mRNAs, and the blue dots represent down-regulated mRNAs.
Figure caption: Validation of the expression of central genes at the transcriptional and translational levels using the Oncomine database and the Human Protein Atlas database (immunohistochemistry).
Identifying Methylation Signatures and Rules for COVID-19 With Machine Learning Methods

The occurrence of coronavirus disease 2019 (COVID-19) has become a serious challenge to global public health. Definitive and effective treatments for COVID-19 are still lacking, and targeted antiviral drugs are not available. In addition, viruses can regulate host innate immunity and antiviral processes through the epigenome to promote viral self-replication and disease progression. In this study, we first analyzed a methylation dataset of COVID-19 using the Monte Carlo feature selection method to obtain a feature list. This feature list was subjected to the incremental feature selection method combined with a decision tree algorithm to extract key biomarkers and to build effective classification models and classification rules that can remarkably distinguish patients with or without COVID-19. EPSTI1, NACAP1, SHROOM3, C19ORF35, and MX1, as the essential features, play important roles in the infection and immune response to the novel coronavirus. The six significant rules extracted from the optimal classifier quantitatively explained the expression pattern of COVID-19. Therefore, these findings validate that our method can distinguish COVID-19 at the methylation level and provide guidance for the diagnosis and treatment of COVID-19.

INTRODUCTION
Coronavirus disease 2019 (COVID-19) was announced as a "public health emergency of international concern" by the World Health Organization (WHO) on January 30, 2020 and was assessed as a global pandemic on March 11, 2020 (Rodríguez-Morales et al., 2020; Eurosurveillance Editorial Team, 2020). The causative agent of COVID-19 is a new type of coronavirus whose complete gene sequence is approximately 79.5% similar to that of the severe acute respiratory syndrome coronavirus SARS-CoV; it was therefore named SARS-CoV-2 by the International Committee on Taxonomy of Viruses (Zhou et al., 2020; Zhu et al., 2020). SARS-CoV-2 is a group 2B β-coronavirus, a linear single-stranded positive-sense RNA virus. It is similar to other coronaviruses and consists of four structural proteins, namely, the spike protein, envelope protein, membrane/matrix protein, and nucleocapsid protein. COVID-19 has had a huge impact on global public health. According to the WHO, SARS-CoV-2 had caused 156,496,592 infections and 3,264,143 deaths worldwide as of May 8, 2021, and a total of 1,171,658,745 vaccine doses had been administered worldwide as of May 5, 2021 (World Health Organization, 2020). However, definitive and effective treatments for COVID-19 are still lacking, and no antiviral drug has been confirmed by a rigorous "randomized, double-blind, placebo-controlled" study. As early as 1975, researchers (Holliday and Pugh, 1975; Riggs, 1975) found that in vertebrates, cytosine methylation at CpG sites can be used as a genetic marker and can be passed on to the next generation through cell division. In plants and mammals, methylation on the 5th carbon atom of cytosine residues is the most widely studied epigenetic modification. In mammals, cytosine methylation mostly occurs on CG sequences; plants have CHG and CHH methylation (H = A, C, or T) in addition to CG methylation. DNA methylation is relatively stable and persists through DNA replication. With the development of whole-genome methylation sequencing technology, the position of DNA methylation can be determined and its relationship with gene regulation can be explored.
DNA methylation of the promoter region can suppress gene expression by preventing transcription factor accessibility. Moreover, DNA methylation of the gene body can affect chromatin structure, alternative splicing, and transcription efficiency (Lorincz et al., 2004). In mammals, such as Homo sapiens and Mus musculus, DNA methylation is necessary to maintain normal embryonic development (Yin et al., 2012; Guo et al., 2014), and abnormal methylation has remarkable effects on diseases (Robertson, 2005). In addition, methylation plays an important role in regulating the expression of tissue-specific genes and developmental stage-dependent genes (Gehring and Henikoff, 2007). Strong evidence shows that epigenetic markers, including histone modifications, DNA methylation, chromatin remodeling, and non-coding RNAs, affect gene expression profiles and increase individual vulnerability to virus infections (Fang et al., 2012; Menachery et al., 2014). Meanwhile, viruses have developed complex, highly evolved, and coordinated processes that can regulate the host's epigenome and control the host's innate immune and antiviral defense processes, thus promoting powerful replication of the virus and the onset of disease (Schäfer and Baric, 2017). Circulating blood DNA methylation profiles are altered in patients with severe diseases, including severe sepsis and pediatric critical illness (Binnie et al., 2020; Güiza et al., 2020). In this study, we obtained methylation data from 102 SARS-CoV-2-positive patients and 26 SARS-CoV-2-negative patients. Machine learning algorithms, such as Monte Carlo feature selection (MCFS) (Dramiński et al., 2007) and decision tree (DT) (Safavian and Landgrebe, 1991), were applied to identify methylation features and decision rules that clearly distinguish the different cases and to build classification models with excellent performance, providing insight into the diagnosis, susceptibility, and potential pathogenesis of COVID-19.

Datasets
We downloaded the methylation data of 128 samples from Gene Expression Omnibus with accession number GSE174818 (Balnis et al., 2021), which contains 102 samples from patients with COVID-19 and 26 samples from patients without COVID-19. For each sample, 865,807 methylation sites were identified by the Illumina Human Methylation EPIC platform.

Monte Carlo Feature Selection
For the investigated methylation data, the features (methylation sites) far outnumbered the samples. Evidently, not all features are related to COVID-19, so it is necessary to analyze all features and extract the essential ones. As different feature selection methods may produce quite different results, selecting a proper method is essential. To our knowledge, MCFS is good at dealing with data containing few samples and large numbers of features; thus, it was adopted in this study. The MCFS method (Dramiński et al., 2007) is an effective and broadly adopted feature selection method that is composed of various DTs and builds various bootstrap sets with subsets of randomly selected features. First, m bootstrap sets and t feature subsets are created from the primary dataset. Then, one tree is constructed for each pairing of a bootstrap set and a feature subset; overall, m × t DTs are created. The relative importance (RI) score for each feature can be calculated based on the resultant DTs.
The RI score is calculated from the occurrences of a target feature in the grown DTs as follows:

$$RI_f = \sum_{\tau=1}^{m \cdot t} (wAcc)^{u} \sum_{n_f(\tau)} IG\big(n_f(\tau)\big) \left( \frac{\mathrm{no.in}\ n_f(\tau)}{\mathrm{no.in}\ \tau} \right)^{v}$$

where f indicates a feature; wAcc means the weighted accuracy of the DT τ; IG(n_f(τ)) refers to the information gain of node n_f(τ); no.in n_f(τ) and no.in τ denote the number of samples of n_f(τ) and τ, respectively; and u and v are weighting factors, which are set to 1. More significant features have higher RI values. Therefore, the features were sorted in decreasing order of their RI values in the feature list produced by MCFS. The MCFS program used in this study was downloaded from http://www.ipipan.eu/staff/m.draminski/mcfs.html. For convenience, the program was run using the default parameters, with u and v set to 1.

Incremental Feature Selection
Although the MCFS method can rank features by their importance, it cannot determine which features are essential. Therefore, the incremental feature selection (IFS) method (Liu and Setiono, 1998) was used to determine the optimal number of essential features required for the classification algorithm. First, IFS generates a series of feature subsets, based on a step size, from the feature list produced by the MCFS method described above. For example, when the step size is 5, the first feature subset includes the top 5 features, the second feature subset includes the top 10 features, and so on. Afterward, for each feature subset, IFS trains the classifier on the training samples restricted to those features. The best subset is determined from the evaluation metrics obtained by assessing each classifier through 10-fold cross-validation (Kohavi, 1995; Chen et al., 2021; Liu et al., 2021; Li X. et al., 2022; Tang and Chen, 2022; Yang and Chen, 2022).

Decision Tree
DT is one of the most classic machine learning algorithms (Safavian and Landgrebe, 1991). Although it is not very powerful, and often much weaker than stronger machine learning algorithms, it has its merits. In fact, DT is a white-box model, meaning users can understand its classification principle; this cannot be achieved with black-box models, which are generally more powerful than DT. In the field of biomedical research, this merit is quite helpful, as investigators want not only to build efficient models but also to obtain helpful clues for understanding the complicated underlying mechanisms. Accordingly, DT is widely accepted in the field of biomedical research (Zhang et al., 2021a; Zhang et al., 2021b; Huang et al., 2021; Li Z. et al., 2022; Chen et al., 2022; Ding et al., 2022). Generally, DT uses the IF-THEN format to accomplish classification or regression tasks through a tree structure. It often yields satisfactory performance at a low computational cost. In this work, we applied the Scikit-learn module in Python to build the DT classifier.
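To make the IFS loop concrete, here is a minimal scikit-learn sketch of the procedure just described: feature subsets grown in steps of 5 along the MCFS ranking, a decision tree evaluated by 10-fold cross-validation on each subset, and the subset with the best F1-measure kept. It is an illustrative reimplementation, not the authors' exact code; X, y, and ranked_idx are assumed inputs (the methylation matrix, labels, and MCFS feature order).

```python
# Minimal sketch of IFS with a decision tree, following the description above.
# X (samples x features), y (labels) and ranked_idx (MCFS feature order) are
# assumed inputs; this is not the authors' exact implementation.
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def incremental_feature_selection(X, y, ranked_idx, step=5, max_features=10000):
    best_k, best_f1 = None, -1.0
    for k in range(step, min(max_features, len(ranked_idx)) + 1, step):
        cols = ranked_idx[:k]                  # top-k features in MCFS order
        clf = DecisionTreeClassifier(random_state=0)
        # 10-fold cross-validated F1, the key measurement used in the study.
        f1 = cross_val_score(clf, X[:, cols], y, cv=10, scoring="f1").mean()
        if f1 > best_f1:
            best_k, best_f1 = k, f1
    return best_k, best_f1
```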
Synthetic Minority Oversampling Technique
As described in the Datasets section, a considerable imbalance between the sample sizes of the two classes was observed. In this case, a classifier with excellent performance is difficult to build, because the predictions become biased toward the class with the largest sample size. The synthetic minority oversampling technique (SMOTE) (Chawla et al., 2002) was employed in the present work to address this problem. This method adds new synthetic samples to the minority class until it contains the same number of samples as the majority class. In detail, x is a randomly selected sample in the minority class, and some samples of the same class that are closest to x are identified. Next, a sample y is randomly selected from these closest samples, and a novel sample is produced at a randomly selected point between x and y in the feature space. The newly produced sample is closely associated with x and y; thus, it has a high probability of belonging to the same class as x and y and is therefore assigned to that class. The above procedure is repeated until the minority class has the same number of samples as the majority class. In this study, we employed the SMOTE implementation acquired from https://github.com/scikit-learn-contrib/imbalanced-learn and directly used the default parameters.

Performance Measurement
For the binary model used in this study, the predicted results can be counted in a confusion matrix, which contains four entries: true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN). From these entries, several measurements can be calculated. In this study, we adopted the following measurements: sensitivity (SN, also called recall), specificity (SP), prediction accuracy (ACC), Matthews correlation coefficient (MCC), precision and F1-measure (Sasaki, 2007; Powers, 2011; Zhao et al., 2018; Wu and Chen, 2022). They can be computed as follows:

SN = TP / (TP + FN)
SP = TN / (TN + FP)
ACC = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
F1 = 2 × Precision × SN / (Precision + SN)
MCC = (TP × TN − FP × FN) / sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN))

Among the above measurements, we selected the F1-measure as the key measurement, as it better reflects the stability of the model; a higher F1-measure indicates a more robust classification model.
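A short sketch of how the class balancing and the measurements above fit together, using the imbalanced-learn SMOTE implementation (with default parameters, as in this study) and scikit-learn's metric functions; the train/test handling is simplified for illustration only.

```python
# Minimal sketch: SMOTE balancing plus the measurements defined above.
# Train/test handling is simplified for illustration only.
from imblearn.over_sampling import SMOTE
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import (accuracy_score, f1_score, matthews_corrcoef,
                             precision_score, recall_score)

def evaluate_balanced(X_train, y_train, X_test, y_test):
    # Oversample the minority class until both classes are the same size.
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train, y_train)
    clf = DecisionTreeClassifier(random_state=0).fit(X_bal, y_bal)
    y_pred = clf.predict(X_test)
    return {
        "SN (recall)": recall_score(y_test, y_pred),
        # Specificity is the recall of the negative class (labels 0/1 assumed).
        "SP": recall_score(y_test, y_pred, pos_label=0),
        "ACC": accuracy_score(y_test, y_pred),
        "MCC": matthews_corrcoef(y_test, y_pred),
        "precision": precision_score(y_test, y_pred),
        "F1": f1_score(y_test, y_pred),
    }
```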
As shown in Figure 1, we applied an analysis flow to extract key features and build the classification model and rules. The results are summarized in the following sections.

Results of MCFS Method on Methylation Profiles
We employed the MCFS method to assess the importance of each feature and select key sites from the COVID-19 methylation dataset. The features were ranked in decreasing order of RI score, and the results are presented in Supplementary Table S1.

Results of IFS Method With DT
After the MCFS analysis, we fed the obtained feature list into the IFS with the DT algorithm. The step size of the IFS was set to 5. Since the list was very large, it would take a long time to consider all possible feature subsets; furthermore, not all methylation features are related to COVID-19. Thus, only the top 10,000 methylation features in the list were considered, that is, 2,000 feature subsets were investigated. A DT model was constructed on each feature subset and evaluated by 10-fold cross-validation. The obtained evaluation metrics, including SN, SP, ACC, MCC, precision and F1-measure, are listed in Supplementary Table S2. To display the DT models on different feature subsets, we plotted an IFS curve using the number of features as the X-axis and the F1-measure as the Y-axis, as shown in Figure 2. It can be observed that the highest F1-measure was 0.990, obtained using the top 50 features in the list. The other five measurements of this model are illustrated in Figure 3. The ACC and MCC were 0.984 and 0.954, respectively; SN, SP and precision were 0.980, 1.000 and 1.000, respectively. This result indicates that the constructed optimal DT model has near-perfect performance and proves the effectiveness of the analysis method.

Comparison of DT Models With Informative Features
In this study, the IFS method was adopted to extract the best features for DT, and the best DT model was constructed with these features. In fact, MCFS can yield essential features, called informative features, by analyzing only the methylation data. With these informative features, a DT model can also be built, and it is interesting to compare the performance of these two models. For the methylation data, 257 informative features were obtained by MCFS. The DT model with these features was evaluated by 10-fold cross-validation. The F1-measure was 0.944, much lower than that of the best DT model (0.990). The other five measurements are provided in Figure 3; each was lower than that produced by the best DT model. This indicates the superiority of the best DT model: the employment of the IFS method helps to build a more efficient model.

Classification Rules
As described in the previous section, DT yielded the highest F1-measure on the COVID-19 methylation dataset when the top 50 features were used. Therefore, we applied DT to all samples using these 50 features to obtain six rules, which are provided in Table 1. Three rules were related to COVID-19 and three to non-COVID-19. These rules clearly express the expression patterns of these features and are described in detail in the Discussion section.

FIGURE 1 | Flowchart of the computational method in this study. A systematic analysis process that integrates feature selection, DT algorithms, and rule learning was applied to identify COVID-19 methylation site features. The optimal classifier, methylation sites, and rules were determined based on the performance of the DT model and the importance of the features in each model.

DISCUSSION
(Menachery et al., 2018). Therefore, we aimed to explore how DNA methylation influences SARS-CoV-2 infection. Meanwhile, our research proposes a novel and creative pattern with high distinguishing power for confirmed and suspected COVID-19 cases through MCFS. Although the real-time polymerase chain reaction test of sputum is the gold standard for the diagnosis of COVID-19, it takes a long time to confirm the diagnosis because of the high level of false negatives. Therefore, researchers have explored various methods to better identify SARS-CoV-2 infection. (Hemdan et al., 2020) constructed a novel deep learning classifier to diagnose COVID-19 through X-ray images, which are cheaper, more convenient, and more accessible compared with computed tomography. (Siddiqui et al., 2020) focused on the correlation of temperature with suspected, confirmed, and death cases using machine learning and found that temperature presents diverse trends in most cities and cannot be a decisive factor across different cases or situations. We focused on several top features and decision rules because they have a crucial impact on the classification, and we discuss them further against the wider published literature to show that our findings are reliable and convincing. Epithelial stromal interaction 1 (EPSTI1, probeID: cg03753191) is an interferon (IFN)-responsive gene that was originally isolated from mixed cultured human breast cancer cells and fibroblasts (Nielsen et al., 2002). This gene is located on chromosome 13q13.3; is 104.2 kb in length; contains 11 exons; and is involved in tumor cell metastasis, epithelial-mesenchymal transition, chronic inflammation, tissue reconstruction, embryonic development, and other biological processes (De Neergaard et al., 2010). EPSTI1 plays an important role in the regulation of cell apoptosis.
Capdevila-Busquets et al. (2015) confirmed by in vitro experiments that EPSTI1 can inhibit breast cancer cell apoptosis by interacting with caspase 8. In addition, EPSTI1 has an antiviral effect against hepatitis C virus (HCV) by affecting the life cycle, replication, assembly, and release of HCV. Meng et al. (2015) confirmed that EPSTI1 can promote the expression of protein kinase R (PKR)-dependent genes, including IFNβ, IFIT1, OAS1, and RNase L, by activating the PKR promoter, thereby exerting an antiviral effect during HCV infection. Without IFN treatment, EPSTI1 overexpression effectively suppressed HCV replication, whereas the lack of EPSTI1 enhanced viral activities. Recent research has discovered that EPSTI1 expression influences the behavior of immune cells. Kim et al. (2018) found that EPSTI1 expression is remarkably upregulated after macrophage activation with IFNγ and lipopolysaccharide. The proportion of M2-type macrophages is increased in EPSTI1-deficient bone marrow-derived macrophages. In EPSTI1-knockout mice, the number of M1 macrophages in the peritoneal cavity was significantly reduced. These findings demonstrate an important regulatory role of EPSTI1 in macrophage polarization. Therefore, EPSTI1 methylation may participate in the process of SARS-CoV-2 infection and affect inflammatory and immune function by regulating EPSTI1 expression. NACAP1 (probeID: cg15959262) is a pseudogene of nascent polypeptide-associated complex alpha (NACA). Phosphatase and tensin homolog (PTEN) pseudogene 1 (PTENP1) was the first pseudogene revealed to contain microRNA response elements (MREs), which also exist in its corresponding protein-coding gene, PTEN (Poliseno et al., 2010). An increasing number of pseudogenes have been found to show a similar phenomenon; that is, pseudogenes and their corresponding protein-coding genes function as competitive endogenous RNAs that bind the same microRNAs (Lujambio and Lowe, 2012; Karreth et al., 2015). NACA encodes the alpha chain of the nascent polypeptide-associated complex (NAC), which performs multiple functions, including protecting nascent peptides and regulating the translocation of new peptides into the endoplasmic reticulum and mitochondria (Rospert et al., 2002). The alpha chain of NAC alone acts as a transcriptional co-activator in developmental regulation (Yotov et al., 1998). Furthermore, NACA can regulate the conformation of the Fas-associated death domain protein oligomer, which is an important mediator in the signal transduction pathway and can be activated by several members of the tumor necrosis factor (TNF) receptor family (Liguoro et al., 2003). NACA is also related to neurodegenerative diseases: patients with Alzheimer's disease and Down's syndrome have lower NACA expression levels in their brain cells (Kim et al., 2002). More importantly, the inhibition of NACA can induce the proliferation and differentiation of CD8+ T cells and enhance their cytotoxicity. Al-Shanti and Aldahoodi (2006) used antisense technology to reduce the level of NACA mRNA and found that CD8+ T cells differentiate and activate to a higher degree in the presence of the antisense oligonucleotides. Compared with the control group, the cytotoxicity of CD8+ T cells against target cells was enhanced. SHROOM3 (probeID: cg17439158), a member of the Shroom family, encodes an actin-binding protein that is important in epithelial cell shape and tissue morphogenesis (Haigo et al., 2003; Hildebrand, 2005).
Shroom3 overexpression in epithelial cells induces Rho kinase (Rock) recruitment and increases myosin 2 (Myo2) accumulation through its phosphorylation and activation. The activation of the Rock/Myo2 signaling pathway leads to local contraction of the actomyosin network on the apical surface of the cell, which results in changes in cell morphology. Recent research has proved the indispensable role of SHROOM3 in glomerular filtration barrier integrity (Yeo et al., 2015). Forced Shroom3 expression in fawn-hooded hypertensive rats and in endogenous shroom3-knockdown zebrafish improved kidney glomerular function. Moreover, multiple genome-wide association studies and in vivo experiments have strongly demonstrated the correlation between SHROOM3 and congenital kidney disease (Khalili et al., 2016). C19ORF35 (probeID: cg08399733), also named PEAK3, is a member of the New Kinase Family 3 (NKF3) that can regulate cytoskeleton stability and cell motility by binding the adaptor protein CrkII (Lopez et al., 2019). C19ORF35 is associated with cancer progression: its overexpression has been detected in various cancers, including pancreatic, breast, and colon cancers (Wang et al., 2010; Kelber et al., 2012; Fujimura et al., 2014). C19orf35 methylation is also related to early carcinogenesis. According to a DNA methylation sequencing study of 12 patients with early gastric cancers (EGCs), C19orf35 is remarkably hypomethylated in the diffuse type of EGC tissue compared with adjacent non-tumor mucosal tissue (Chong et al., 2014). IFN-induced with helicase C domain 1 (IFIH1, probeID: cg21060789) and IFN-induced protein 44-like (IFI44L, probeID: cg13452062) are IFN-stimulated genes. IFIH1, also known as melanoma differentiation-associated gene-5, is a cytoplasmic RNA receptor protein composed of 1025 amino acids. IFIH1 recognizes double-stranded RNA longer than 1 kb. It is an important member of the retinoic acid-inducible gene I (RIG-I)-like receptor family, which can activate the type I IFN signaling pathway and participates in the pathogenesis of a variety of autoimmune diseases. IFIH1 and IFN-β interact to activate the body's anti-tumor immune response. IFIH1 can promote the type I IFN response and increase the secretion of TNF-α and IFN-β. The upregulation of IFIH1 expression may increase the effectiveness of IFN therapy (Pappas et al., 2009). IFN-β can also stimulate the upregulation of IFIH1 and RIG-I, mediate the innate immune response, kill tumor cells with low neurotoxicity, and therefore inhibit tumor growth (Wu et al., 2017; Bufalieri et al., 2020). IFI44L is a paralog of IFI44 and functions as a regulator of cell apoptosis, virus infection, and the innate immune response. The DNA methylation level of the IFI44L promoter may be related to kidney damage in patients with systemic lupus erythematosus (SLE). Zhao et al. (2016) found that the DNA methylation level of the IFI44L promoter in patients with SLE was remarkably lower than that of the normal control group. In addition, the DNA methylation level of the IFI44L promoter in patients with SLE and renal involvement was also remarkably lower than that of patients with SLE without renal involvement. IFI44L participates in the antiviral process of the IFN-mediated innate immune response and is a confirmed marker of early viral infection. IFN is the earliest discovered cytokine that can inhibit viral infection and replication, and it is activated in the early stage of viral infection (within a few minutes to a few hours) (Zaas et al., 2009).
Henrickson et al. (2018) pointed out that when influenza virus and respiratory syncytial virus infections occur, IFI44L acts as an IFN-stimulated regulatory gene, and its expression level increases. Therefore, the mRNA expression of IFI44L can be used as an early indicator of virus infection. According to Kaforou et al. (2017), in 111 children under 60 days of age with bacterial infection, the detection sensitivity of IFI44L mRNA expression is 88.8% (95% CI, 80.3-94.5%), and the specificity is 93.7% (95% CI, 87.4-97.4%). Therefore, aberrant IFI44L methylation may occur during SARS-CoV-2 infection and lead to abnormal IFI44L expression. MX1 (probeID: cg26312951) belongs to the human myxovirus resistance genes (MX), whose biological functions include GTP binding and GTPase activity. The two kinds of MX proteins, namely MX1 and MX2, differ greatly in virus specificity and mechanism of action. MX1 has antiviral activity, induced by type I and type II IFNs, against a variety of RNA viruses and certain DNA viruses, including negative-strand RNA viruses and hepatitis B virus. MX1 is enriched in the IFN-γ and Toll-like receptor signaling pathways (Haller et al., 2015). A 2020 study detailed the expression of MX1 in 403 patients with COVID-19 and 50 patients without COVID-19 (Bizzotto et al., 2020). The expression of MX1, MX2, ACE2, and BSG/CD147 can cluster individuals with and without COVID-19 in principal component analysis, which indicates that the expression levels of MX1 and MX2 differ remarkably between patients with and without COVID-19. MX1 can act directly on the ribonucleoprotein complex of the virus and therefore has a wide range of antiviral activity, a feature that has been proven to apply to both RNA and DNA viruses. Verhelst et al. (2012) reported that Mx1 inhibits influenza virus by interfering with the assembly of functional viral ribonucleoprotein complexes, and that high MX1 expression was associated with a better prognosis during the 2009 influenza A (H1N1) pandemic. It is worth noting that GTPase activity is also positively correlated with MX1's antiviral function. Unlike MX1, the antiviral function of MX2 is limited to certain viruses, such as HIV. Although the expression of both MX1 and MX2 in patients with COVID-19 was significantly higher than in non-COVID-19 groups, MX1 shows a greater positive correlation with COVID-19 status and may be more specific than MX2 in the response to SARS-CoV-2. Therefore, MX1 is a key responder to SARS-CoV-2 infection. Collectively, the top identified discriminative feature genes and the derived rules play a crucial role in virus infection and the IFN-mediated immune response. This consistency demonstrates that our method is reliable and convincing. Our newly presented computational approach based on methylation profiles also provides a new perspective for exploring the mechanism of COVID-19. Furthermore, it is a new method for distinguishing confirmed and suspected COVID-19 cases and has applicable clinical value in the differential diagnosis of patients with confirmed and suspected COVID-19. CONCLUSION The current study applied computational methods to extract the best biological features and decision rules from COVID-19 methylation profiles. This study has shown that the extracted optimal methylation site signatures and rules are supported by previous work and are reliable and valid for distinguishing COVID-19.
This study provides a new set of potential biomarkers/rules that can be used to differentiate patients with COVID-19 at the methylation level. These findings enhance our understanding of COVID-19 at the methylation level and could offer guidance for future studies on COVID-19. DATA AVAILABILITY STATEMENT Publicly available datasets were analyzed in this study. These data can be found here: https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE174818. AUTHOR CONTRIBUTIONS TH and Y-DC designed the study. SD, LC, and KF performed the experiments. ZL, ZM, and HL analyzed the results. ZL, ZM, and SD wrote the manuscript. All authors contributed to the research and reviewed the manuscript.
2022-05-10T13:25:21.170Z
2022-05-10T00:00:00.000
{ "year": 2022, "sha1": "49d5760237404eeac758c6908a425d85f36b9882", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "49d5760237404eeac758c6908a425d85f36b9882", "s2fieldsofstudy": [ "Computer Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55459644
pes2o/s2orc
v3-fos-license
Construction of EFL Student Teachers' Beliefs about Method: Insights from Postmethod
Student teachers' beliefs and their teaching behaviors are interactive and closely related. Student teachers' adoption of teaching methods in micro-teaching or the teaching practicum is largely hidden behind their beliefs. In this paper, starting with the origin and changes of methods in the language teaching method era, the author explains certain terms such as method and postmethod, outlines the theoretical basis of this study, which comprises the postmethod condition and postmethod pedagogy, and proposes the idea that postmethod is the inheritance, transcendence, and development of method. Finally, the author discusses ways to help student teachers construct their own appropriate teaching beliefs about method. Introduction Since the People's Republic of China was founded, the Ministry of Education had published more than ten versions of the National English Syllabus/Curriculum by 2011, all of which reflect the historical development of beliefs concerning method (BM) in China. Influenced by language teaching innovations in Western countries, many scholars, among whom Zhang Shiyi is the most famous, advocated adopting the Direct Method in China from 1912 to 1949. After the founding of new China, owing to the special relationship with the former Soviet Union, Russian became the main foreign language taught, and Grammar-Translation dominated the language classrooms from 1950 to 1956. By the 1960s, the Audio-lingual Method was advocated for language classrooms. The long period of language teaching history from 1949 to 1976 can be broadly divided into three stages: the development of Russian language teaching (1949-1956), the initial recovery of English language teaching (1957-1965), and the destruction of English language teaching (1966-1976). Since then, English language teaching has developed rapidly; Situational Language Teaching and the Audio-lingual Method were mainly advocated in English language teaching in China, assisted by the Direct Method and the Inductive Method. At the end of the twentieth century, the Communicative Language Approach was widely used in language classrooms, followed by Task-based Language Teaching in the new century. Comparing the previous syllabi/curricula with the National English Curriculum for Compulsory Education & Common Senior Middle Schools (2011), the former versions emphasize the use of specific methods, whereas the latter lays much more emphasis on teaching beliefs, most of which fall within the theoretical framework of postmethod. In fact, teachers' adoption of teaching methods is largely hidden behind their BM. English student teachers' BM has been explored in recent years; however, most of this research is from the theorists' point of view. In this paper, the author intends to explore ways to help student teachers construct their beliefs about method from the perspective of postmethod, which not only helps student teachers construct their own BM from teaching practice such as micro-teaching, the teaching practicum, or classroom observation, but also helps them modify their previous BM.
Method Different people hold different views on "method"; there is no uniform definition in the field of English language teaching at present. No single English term corresponds exactly to the Chinese word for "method". The Modern Chinese Dictionary (Chinese Academy of Social Science, 2005) defines "method" as "the way or procedure used to solve issues of thoughts, speeches, and actions, etc." "Method" in English usually refers to method, approach, or technique. Obviously, the three concepts are easily confused. The three-level hierarchy (approach, method, and technique) identified by Anthony (1963) clarifies the confusion (see Figure 1). An approach is a set of correlative assumptions dealing with the nature of language teaching and learning. An approach is axiomatic. A method is an overall plan for the orderly presentation of language material, no part of which contradicts, and all of which is based upon, the selected approach. A method is procedural. A technique is implementational: that which actually takes place in a classroom. It is a particular trick, stratagem, or contrivance used to accomplish an immediate objective. Techniques must be consistent with a method and therefore in harmony with an approach as well. In other words, different approaches and methods lead to different techniques. Figure 1. Levels of method. Following Anthony, Jack Richards and Theodore Rodgers (1982: 153-68) reconceptualized "method" and proposed that it is made up of three levels, "approach, design, and procedure", which correspond to Anthony's "approach, method, and technique", respectively. "Method" is thus an umbrella concept covering the three levels of approach, design, and procedure (see Figure 2). An approach defines assumptions, beliefs, and theories about the nature of language and language learning. Design specifies the relationship of these theories to classroom materials and activities. Procedures are the techniques and practices derived from one's approach and design. Figure 2. The three levels of approach, design, and procedure. Postmethod At the end of the 1980s, with criticism of the concept of method itself in the field of language teaching, Kumaravadivelu (1994: 29) first proposed the concept of postmethod and redefined "method" as something that "consists of a single set of theoretical principles derived from feeder disciplines and a single set of classroom procedures directed at classroom teachers. The disciplines can be linguistics, psychology and sociolinguistics, etc., and we can use our own teaching procedures derived from our own teaching experience." As for postmethod, he regards it as a concept that signifies a search for an alternative to method rather than an alternative method. Postmethod is not a single concrete teaching method, but a pedagogy involving teaching strategies, teaching materials, curriculum assessment, and the politics, history, and personal experiences that influence foreign language learning.
Postmethod Condition Laying the foundation for postmethod pedagogy, the postmethod condition comprises three interrelated attributes. First, the postmethod condition signifies a search for an "alternative to method" rather than "alternative methods" (Kumaravadivelu, 1994). While alternative methods are primarily products of top-down processes, alternatives to method are mainly products of bottom-up processes (Kumaravadivelu, 2003). Products of top-down processes usually refer to theories constructed by theorists on the basis of one or more theories, while products of bottom-up processes are theories created by practitioners. Whereas "method" gives theorists the right to construct professional theories of pedagogy, the postmethod condition gives practitioners the authority to construct their personal theories of practice. Hunt (1980) also holds this view: "Whether implicit or explicit, theories of teaching and learning should be based on classroom experience: practice into theory". Palmer (1997) states: "Good teaching cannot be reduced to technique; good teaching comes from the identity and integrity of the teacher". Theory is normally separated from practice, and practitioners are isolated from theorists. The relationship between theorizers and practitioners should therefore be reconfigured. What practitioners do is construct their personal theories of practice based on classroom experiences. In any case, bridging the gap between researchers and practitioners is of great importance. Fitzgerald (2003) points out that theory and practice should inform each other; that is, theory should inform practice, and it is also undoubtedly the case that practice should inform theory. Secondly, the postmethod condition signifies teacher autonomy (Kumaravadivelu, 1994). Autonomy implies the general idea that the individual should "freely direct the course of his or her own life" (Young, 1986). As for teacher autonomy, there is no uniform definition at present. MacGrath (1995) defines it as the capacity to self-direct one's teaching, Benson (2000) defines it as the freedom to self-direct one's teaching, and Smith (2003) argues that the concept of teacher autonomy has different dimensions relating to professional action and professional development (qtd. in Lamb et al., 2008) (see Table 1). The conventional concept of "method" typically prescribes what and how teachers should teach, leaving little room for the teacher's own personal initiative and teaching style (Richards & Rodgers, 2008). However, the postmethod condition recognizes the teacher's potential to know not only how to teach but also how to act autonomously within the academic and administrative constraints imposed by institutions, curricula, and textbooks (Kumaravadivelu, 2003). The third attribute of the postmethod condition is principled pragmatism (Kumaravadivelu, 1994). Principled pragmatism means that attitudes and actions are influenced by practical day-to-day consequences (Wenger, 2007). According to Widdowson (1990), the relationship between theory and practice, ideas and their actualization, can only be realized within the domain of application, that is, through the immediate activity of teaching. Thus, principled pragmatism focuses on how teachers shape and manage the process of specific English language learning through self-reflection and critical assessment. Conducting classroom observations, writing journals, and peer consultation are good ways to reflect on and evaluate teaching.
As Kumaravadivelu (1994) points out, one of the ways in which teachers can follow principled pragmatism is to develop a sense of plausibility. A sense of plausibility (Prabhu, 1990) is "teachers' subjective understanding of the teaching they do. Teachers need to operate with some personal conceptualization of how their teaching leads to desired learning". In addition, such subjective understanding of language teaching is often derived from our own experiences as teachers or learners. The above three attributes of the postmethod condition lay a solid foundation for the construction of the three parameters of postmethod pedagogy. Postmethod Pedagogy Based on the postmethod condition, Kumaravadivelu proposes a postmethod pedagogy: a three-dimensional system composed of the parameters of particularity, practicality, and possibility. The first parameter of postmethod pedagogy is the pedagogy of particularity. That is to say, language pedagogy must be sensitive to a particular group of teachers teaching a particular group of learners pursuing a particular set of goals within a particular institutional context embedded in a particular sociocultural milieu (Kumaravadivelu, 2001). Context, which has been neglected in previous methodologies, is the crucial aspect of language pedagogy. The key aspects of context are students' learning needs, wants, styles, and strategies, as well as the coursebook, local conditions, the classroom culture, school culture, national culture, and so on (Bax, 2003). Compared with methods and approaches, context has priority. Only by analyzing and understanding contexts carefully and systematically can teachers employ teaching methods and approaches autonomously. The pedagogy of practicality, the second parameter of postmethod pedagogy, pertains to a much larger issue that has a direct impact on the practice of classroom teaching, namely, the relationship between theory and practice (Kumaravadivelu, 2001). Theory and practice do not stand in a simple relation of guiding and being guided; they mutually inform each other. A pedagogy of practicality, as Kumaravadivelu (1999) visualizes it, seeks to overcome some of the deficiencies inherent in the theory-versus-practice and theorists'-theory-versus-teachers'-theory dichotomies by encouraging and enabling teachers themselves to theorize from their practice and practice what they theorize. That is, the pedagogy of practicality aims for the teacher to construct a theory of practice. The third parameter of postmethod pedagogy is the pedagogy of possibility. First, this parameter is concerned with students' and teachers' subject positions (class, gender, race, and ethnicity) and the experiences that they bring to pedagogical settings (Kumaravadivelu, 2001). The past experiences that tend to influence ESL/EFL learning and teaching can be shaped not only by the learning/teaching environment but also by the social, political, and economic environment. Secondly, this parameter is concerned with individual identity. Language education is a process in which participants seek and construct their own identities. As Weedon (1987) puts it, "Language is the place where our sense of ourselves, our subjectivity, is constructed". Thus, teachers' and learners' identities should be preserved and protected in the classroom. Thirdly, language teachers can ill afford to ignore the sociocultural reality that influences identity formation in the classroom, nor can they afford to separate learners' linguistic needs from their social needs (Kumaravadivelu, 2001).
From the above, the three parameters of postmethod pedagogy are interrelated, interacting, and interwoven, and their boundaries are blurred. As the figure shows, the characteristics of these parameters overlap (see Figure 3). EFL/ESL Teaching Methods/Approaches Training It is important to train student teachers in the specific methods constructed in the method era. Although we are in the postmethod era, method and postmethod do not conflict: postmethod is simply an alternative to method. Knowledge of methods is crucial to the development and growth of student teachers. Training in the techniques and procedures of a specific method is probably essential for student teachers' entrance into teaching, because it provides them with the confidence they will need to face learners and supplies techniques and strategies for presenting lessons (Richards, 2001). As Adamson (2004) mentions, methods are still useful props for teachers in constructing their own pedagogy. Ancker (2001) says, "I would not want to impose a method on anybody, but it seems to me the more methods we have, the more we see the variety of human experience, the more we have a bigger palette from which to paint our picture. We have more choices". During methods training, we should move away from the transmission model of training delivered as experts' lectures. Following the belief in "theorizing from practice" in postmethod theory, we can train student teachers on the basis of their own classroom experiences. When training teachers in specific methods, it is unnecessary to argue over which method is best. Every method has its own advantages. What student teachers should do in their teaching practice is select different methods in light of different learners, teaching objectives, and learning stages. While selecting methods, the student teacher should take these elements into consideration. For example, we can use TPR more when teaching first-year middle school students than when teaching third-year middle school students, as the potential for losing face becomes greater the older the learners get (Harmer, 1998). Besides, student teachers should select methods in light of their own characteristics; otherwise, the result will be Dongshi xiao pin (blind, counterproductive imitation). Prerequisites of Implementing the Postmethod Theory The author argues that there are at least four prerequisites for student teachers to implement postmethod theory in their classrooms. First, teaching experience is the precondition for implementing postmethod beliefs. The postmethod theory has many advantages; however, its philosophy is rather idealistic and abstract, and it is hard for student teachers to grasp. To better carry out these teaching beliefs, a great amount of practical experience is fundamental and necessary, and applying these beliefs calls for a great deal of micro-teaching practice, teaching practicum, or classroom observation. Secondly, understanding various methods or approaches is the basis for implementing postmethod beliefs. The postmethod theory provides us with an alternative to method, yet relevant research reveals that student teachers lack knowledge of methodology; that is, they are not familiar with the alternative methods. To better search for an alternative to method, student teachers should understand some of the alternative methods created by theorists on the basis of language (learning) theories. Knowledge of methodologies will help them form their own teaching beliefs.
Thirdly, constructing appropriate beliefs about methods guarantees the implementation of postmethod beliefs. Postmethod is a great challenge to most student teachers. They might not know how to apply the beliefs even though they often consider the theory or teaching beliefs important. Only when they hold the right beliefs about methods can they understand the real significance of the postmethod theory. Thus, appropriate beliefs about methods are the key to putting these beliefs into practice. Finally, context-based research is a key step in ensuring the successful implementation of postmethod beliefs. Context-based research is the research activity in which student teachers use research methods to find solutions to the main problems and practical needs in their teaching. During context-based research, student teachers can conduct applied research or action research to practice their postmethod beliefs. The lesson example, which can be used to analyze the application of postmethod beliefs, is a good choice. A Training Model for Beliefs about Methods (BM) The author argues that it is very difficult for student teachers to construct their own teacher beliefs about methods (BM), so it is necessary to help them establish proper BM in teacher training courses. To achieve this goal, we can use lesson examples. After listening to a lesson example, teachers can be instructed to judge whether the selection or use of a certain method is appropriate, and to reflect on their teaching behaviors, teaching strategies, and BM. They can then modify the method used to obtain a better one. Finally, their own teaching BM can be formed. The training model (see Figure 4) shows how to help student teachers build their own teaching BM. The dimension affecting the formation of BM is not simply the method itself; it contains many other elements, such as the teachers' roles, the learners' roles, and ELT pedagogy. Richards and Rodgers (2001) say that a method is a way of teaching a language that is based on systematic principles and procedures. These systematic principles and procedures have been developed from different views of the nature of language, the nature of language learning, goals in teaching, the teachers' roles, the learners' roles, and so on. "Method" in "teacher beliefs about methods" is at the level of methodology, which relates to beliefs themselves (ibid.); it is not at the level of method, approach, technique, or skill. Teacher beliefs about methods determine the implementation of classroom teaching. Figure 4 shows that there are four main ways student teachers can follow to form appropriate BM: learning about teaching skills, critical reflection, action research, and theorizing from practice. All these ways can also be used to minimize the discrepancy between what student teachers believe and the ways they act in the classroom. Conclusion When it comes to "method", what occurs to a student teacher's mind is the set of techniques she/he uses in the classroom.
Based on the analysis of "method", the meaning of the term in a student teacher's mind is very different from that in a theorist's mind. The methods created by theorists are one-size-fits-all, cookie-cutter approaches that cannot meet the varied needs and situations of language teaching. One limitation of the very concept of "method" is that it neglects various complicated factors in language teaching, such as society, politics, teachers' needs, students' needs, and culture. By contrast, postmethod is greatly concerned with these factors, which strongly influence English language teaching and learning. Postmethod provides an alternative to methods; it offers useful inspiration to student teachers, especially when they feel confused or at a loss in the face of the currently popular and complicated teaching methods. It also empowers student teachers to theorize from their practice and to be more autonomous. This research offers teacher educators ways to help student teachers build their own teacher beliefs about methods.
Figure 4. A training model for beliefs about method.
Table 1. Dimensions of teacher autonomy
In relation to professional action:
A. Self-directed professional action, i.e., "self-directed teaching"
B. Capacity for self-directed professional action, i.e., "teacher autonomy (capacity to self-direct one's teaching)"
C. Freedom from control over professional action, i.e., "teacher autonomy (freedom to self-direct one's teaching)"
In relation to professional development:
A. Self-directed professional development, i.e., "self-directed teacher-learning"
B. Capacity for self-directed professional development, i.e., "teacher-learner autonomy (capacity to self-direct one's learning as a teacher)"
C. Freedom from control over professional development, i.e., "teacher-learner autonomy (freedom to self-direct one's learning as a teacher)"
2018-12-11T16:39:59.678Z
2017-12-10T00:00:00.000
{ "year": 2017, "sha1": "7543b26022b3011c49e82d21f74a6fc940f1912e", "oa_license": "CCBY", "oa_url": "https://www.ccsenet.org/journal/index.php/elt/article/download/72360/39593", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7543b26022b3011c49e82d21f74a6fc940f1912e", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Psychology" ] }
52228709
pes2o/s2orc
v3-fos-license
Crystallographic Features of Microstructure in Maraging Steel Fabricated by Selective Laser Melting
This study characterizes the microstructure and its associated crystallographic features in bulk maraging steels fabricated by selective laser melting (SLM) combined with a powder bed technique. The fabricated sample exhibited characteristic melt pools, i.e., regions that had locally melted and rapidly solidified. A major part of these melt pools corresponded to the ferrite (α) matrix, which exhibited a lath martensite structure with a high density of dislocations. A number of fine retained austenite (γ) grains with a <001> orientation along the build direction were often localized around the melt pool boundaries. The orientation relationship of these fine γ grains with respect to the adjacent α grains in the martensite structure was (111)γ//(011)α and [-101]γ//[-1-11]α (Kurdjumov-Sachs orientation relationship). Using the obtained results, we inferred the microstructure development of maraging steels during the SLM process. The results indicate that new and diverse high-strength materials can be developed for industrial molds and dies. Introduction Metal additive manufacturing based on three-dimensional computer-aided design (CAD) models is a promising technology for fabricating metal products with arbitrary complex geometries within short time-frames [1-4]. A popular additive manufacturing process for metals is powder bed fusion (PBF) [2], in which the powder particles of metals (alloys) are melted and fused using either laser or electron beams. PBF technologies include the commonly used selective laser melting (SLM), selective laser sintering, selective heat sintering, and electron beam melting [3,4]. The SLM process has recently been applied to various steel powders [3,4], yielding geometrically complex components of maraging steels [5]. Maraging steels, with high strength and adequate toughness [6], are extensively applied as tool steels in the mold- and die-making industries. When applied to maraging steels, SLM could enable efficient manufacturing of an extensive variety of high-performance molds and dies for pressing or forging complex-shaped metal products. Maraging steel is also favorable for the SLM process in terms of microstructure development. Local heating by laser-beam irradiation produces melt pools, which is followed by rapid solidification at an extremely high cooling rate [7,8]. During the cooling process, the locally irradiated areas are rapidly quenched from the austenitic region to temperatures lower than the martensite start temperature, resulting in the formation of a martensite structure that strengthens the material. The strength level of SLM-fabricated maraging steels is approximately 1 GPa [9-11]. Subsequent aging at elevated temperatures (~460 °C) promotes the precipitation of fine intermetallic phases within the martensite structure, which further strengthens the material [9-11]. However, most previous studies have investigated the effect of laser scanning conditions and aging treatments on the mechanical properties (strength and fatigue) of SLM-fabricated maraging steels [11-14]. Although the crystallographic orientation relationship has been determined in as-quenched maraging steels [15], the microscopic features of the SLM-generated martensite structure remain unclear.
The current study characterizes the microstructural and crystallographic features of the martensite structure in the SLM-fabricated maraging steel using electron microscopy and electron backscatter diffraction (EBSD). Based on the results, the current study further discusses the development of the microstructure in maraging steel during the SLM process. Table 1 presents the nominal and measured compositions of the maraging steel powder (depicted in Figure 1) and the fabricated bulk sample, analyzed by inductively coupled plasma-atomic emission spectrometry (ICP-AES). An SEM image of the studied powder is shown in Figure 1. The proportions of the major alloy elements were observed to be almost identical in the initial powder and the SLM-fabricated bulk samples. The SLM processing was conducted at room temperature using a 3D Systems ProX 200 (3D SYSTEMS, Rock Hill, SC, USA) additive-manufacturing system equipped with a Yb-fiber laser operating at 255 W (Figure 2a). The hexagonal grid laser-scanning pattern applied in this study is depicted in Figure 2b. The fabrication parameters were as follows: laser spot size = approximately 100 µm, scan speed = 2.083 m/s, bedded-powder layer thickness = 30 µm, hatch spacing between adjacent laser-scanning tracks = 50 µm, and rotation angle between the bedded-powder layers = 90°. The optimization of the laser scanning parameters will be described in forthcoming papers. Oxidation during the SLM process was prevented using high-purity Ar gas. Hereafter, the directions normal and parallel to the bedded-powder layer are designated as the Z and X/Y directions, respectively. For optical microscopy and scanning electron microscopy (SEM), the built bulk samples were cut from the base plate, mechanically polished, and then etched with a nital solution at room temperature. The microstructures were observed using an SEM operating at 20 kV. The orientation was analyzed by EBSD with a 0.3 µm step size. The thin-foil sample for transmission electron microscopy (TEM) was ion-polished with an ion slicer (JEOL, Akishima, Japan) at 6.0 V. The TEM observation was performed using a JEM-2100 Plus (JEOL, Akishima, Japan) operating at 200 kV. The microstructure of the SLM-fabricated sample (Figure 3a,b) comprises semi-cylindrical melt pools corresponding to the locally melted and rapidly solidified regions exposed to scanning laser irradiation [16-18]. The melt pools were approximately 50-100 µm high (Figure 3a) and approximately 50 µm wide (Figure 3b). They contained elongated cellular structures with a mean spacing of approximately 500 nm (Figure 3c,d), as reported in the literature [10,11,19-21]. The elongation direction of the cellular structure appears to be independent of the presence of the melt pool boundaries.
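For orientation, these scan parameters imply a volumetric energy density of roughly 82 J/mm^3 via the commonly used estimate E = P/(v h t). This figure of merit is not reported in the paper, so the short calculation below is purely illustrative.

```python
# Illustrative calculation (not from the paper): volumetric energy density
# E = P / (v * h * t) for the SLM parameters quoted above.
P = 255.0    # laser power, W
v = 2.083    # scan speed, m/s
h = 50e-6    # hatch spacing, m
t = 30e-6    # layer thickness, m

E = P / (v * h * t)                    # J/m^3
print(f"E = {E / 1e9:.1f} J/mm^3")     # ~81.6 J/mm^3
```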
Fine grains with relatively granular morphologies were locally observed at the boundaries between the melt pools (indicated by arrows in Figure 3d). These morphologies differ from the cellular morphologies observed inside the melt pools. Figure 4 presents a TEM bright-field image showing the dislocation substructure in the SLM-fabricated maraging steel. The TEM observation reveals a lath structure with a high density of dislocations. The mean lath width is approximately 200 nm. The electron diffraction pattern obtained from the observed area indicates the diffuse orientation spread inside the lath structure. These features correspond well to previous studies on the microstructural characterization of as-quenched maraging steels [15]. At the resolution level of conventional TEM, no precipitates larger than 10 nm were observed inside the lath structure, which is consistent with a previous result of atom-probe tomography [19-21]. Figure 5 presents the EBSD result of the SLM-fabricated sample. The EBSD analysis of the melt-pool microstructure revealed that a number of fine austenite (γ) grains were distributed in the ferrite (α) matrix (Figure 5a,b).
The retained γ phases often appeared along the grain boundaries in the α-Fe matrix. This result strongly agrees with the results of previous studies [9,10,21]. Notably, the fine γ grains were often localized at the melt pool boundaries. The microstructural morphologies observed in the α phase (Figure 5c) corresponded well with the lath martensite structure characterized by EBSD analyses [15,22], indicating that a martensite structure developed in the SLM-fabricated maraging steels. Many of the fine γ grains were oriented at <001> along the Z direction (Figure 5d), forming a {001} texture of the retained γ phase (Figure 5e). Note that no particular crystallographic textures were observed in the α phase (Figure 5c). Figure 6a,b presents the EBSD color maps of the α and γ phases. The corresponding stereographic projections depicting the low-index orientations obtained from the EBSD analyses are displayed in panels c-f. The EBSD analysis revealed the orientation relationship between the fine retained γ grains oriented at <001> along the Z direction (Figure 6b) and the adjacent α grains in the martensite structure (Figure 6a). As exhibited in the obtained stereographic projections (Figure 6c,d), the fine γ grain has an orientation relation of (111)γ//(011)α and [-101]γ//[-1-11]α with respect to the adjacent α grain (indicated by αA in Figure 6a). The stereographic projections (Figure 6e,f) show that the fine γ grain also has a different variant of (111)γ//(011)α and [-101]γ//[-1-11]α with respect to another adjacent α grain (indicated by αB in Figure 6a). The determined orientation relationship corresponds to the Kurdjumov-Sachs (K-S) orientation relationship between lath martensite and austenite [22] and is consistent with the crystallographic features of the lath martensite structure in conventionally quenched maraging steels [15]. Discussion In general, the characteristic microstructures of SLM-fabricated metals and alloys develop through local melting and rapid solidification during the SLM process [7,8].
To assess the phase transformation path of the maraging steel during solidification in the SLM process, the Fe-Ni-Co-Mo-Ti system was assessed via thermodynamic equilibrium calculations based on the CALPHAD approach [23], using an existing thermodynamic database for Fe-based multi-component systems (PanIron) [24]. Figure 7 shows the composition of the studied maraging steel on a vertical section representing Fe-18Ni-9Co-5Mo-1Ti (wt.%) of the Fe-Ni-Co-Mo-Ti system. The ICP-AES-analyzed composition (Table 1) was used as the alloy composition. At the studied composition, the initial solid phase is γ (fcc) below 1440 °C; a γ single-phase region forms below the solidus temperature of approximately 1400 °C and extends over a wide temperature range from 800 °C to 1400 °C. Below 800 °C, the µ-Fe7Mo6 phase appears, and the α (bcc) phase then forms below 650 °C. The thermodynamic calculation thus indicates that the γ phase forms first in the liquid (L) phase. This assessment implies that the γ grains solidify toward the hottest point of the melt pool, considering the <001> preferential solidification direction of the fcc solid phase [25], as in other fcc metals (Ni [26,27] or Al alloys [16-18]). This result is consistent with the observed retained γ grains oriented at <001> along the Z direction (Figure 5).
Although the calculated phase diagram predicts the formation of intermetallic phases (µ-Fe7Mo6 [28] and η-Ni3Ti [29]) below 800 °C (Figure 7), the martensite transformation could occur inside the initially solidified γ phase during the subsequent cooling (after solidification) because of the sluggish kinetics of α-phase formation in maraging steels [30]. Figure 7. The composition of the studied maraging steel on a vertical section of Fe-9Co-5Mo-1Ti (wt.%) in the Fe-Ni-Co-Mo-Ti phase diagram calculated using the reported thermodynamic database [24]. Based on the aforementioned experimental and calculated results, we can infer the development process of the microstructure in maraging steels during the SLM process. Figure 8 is a schematic of this development process. The laser-beam irradiation locally heats and melts the bedded alloy powder layer, forming the melt pools. The primary solid phase (the γ phase at the studied composition, as indicated in Figure 7) forms at the interface between the solid and liquid phases and grows toward the center of the melt pool (the hottest point under the laser irradiation) during solidification (Figure 8a), as reported in the literature [16,17]. The γ grains solidify along the preferential <001> solidification direction, as reported for other fcc metals (Ni [26,27] or Al alloys [16-18]) fabricated by the SLM process. The preferential solidification direction can explain the observed {001} texture of the retained γ phase (Figure 5e). During the rapid cooling process, the melt pool rapidly solidifies into a number of {001}-oriented γ grains (Figure 8b). At temperatures lower than the martensite start temperature (approximately 200 °C at the studied composition [30]), the solidified melt pool transforms into martensite (α phase) with the K-S orientation relationship (determined in Figure 6), resulting in the development of the lath martensite structure (Figure 8c). The higher cooling rate around the liquid-solid interfaces in the irradiated regions would enhance the formation of the retained γ phase at the melt pool boundaries rather than inside the melt pools. The present study revealed a number of fine retained γ grains in the maraging steels fabricated by SLM; retained austenite is known to improve the ductility of steels with a martensite structure. The tensile ductility of SLM-fabricated alloys (Al, Ni, and Co alloys [31]) depends on the building direction, which is attributed to preferential fracture along the melt pool boundaries [16,17,32]. However, in the SLM-fabricated maraging steel, the tensile ductility is apparently independent of direction [33], because the retained γ phase localized at the melt pool boundaries could suppress preferential fracture along these boundaries. Consequently, controlling the retained γ phase inside the lath martensite structure could improve the mechanical performance of SLM-fabricated maraging steel. The proposed mechanism of microstructure development provides new insights into microstructure control by subsequent heat treatments. Furthermore, it has been reported, interestingly, that introducing carbides into the alloy powder enhances the formation of the γ phase during the SLM process [34]. To control the microstructure (in particular, the distribution of the γ phase) by laser-scanning strategies during the SLM process and by subsequent heat treatments, future work must fundamentally investigate the austenite reversion of the SLM-fabricated maraging steel at elevated temperatures.
To control the microstructure (in particular the distribution of γ Based on the aforementioned experimental and calculated results, we can infer the development process of the microstructure in maraging steels during the SLM process. Figure 8 is a schematic of this development process. The laser-beam irradiation locally heats up and melts the bedded alloy powder layer, forming the melt pools. The primary solid phase (γ phase in the studied composition as indicated in Figure 7) is observed to form at the interface between the solid and liquid phases and grows to the center of the melt pool (the hottest point under the laser irradiation) in solidification (Figure 8a), as reported in the literature [16,17]. The γ grains solidify along the preferential <001> solidification direction, as reported in other fcc metals (Ni [26,27] or Al alloys [16][17][18]) that have been fabricated using the SLM process. The preferential solidification direction can explain the observed {001} texture of the retained γ phase (Figure 8e). During the rapid cooling process, the melt pool can rapidly solidify to a number of {001} oriented γ grains (Figure 8b). At temperatures that were lower than the initial martensite temperature (approximately 200 • C in the studied composition [30]), the solidified melt pool transformed into martensite (α phase) with a K-S orientation relation (determined in Figure 6), resulting in the development of the lath martensite structure (Figure 8c). The higher cooling rate around the liquid-solid interfaces in the irradiated regions would enhance the formation of the retained γ phase at the melt pool boundaries rather than inside the melt pools. The present study revealed a number of fine retained γ phases in the maraging steels fabricated by SLM, which are renowned to improve the ductility of steels with the martensite structure. The tensile ductility of SLM-fabricated alloys (Al, Ni, and Co alloys [31]) depends on the building direction, which is responsible for preferential fracturing along the melt pool boundaries [16,17,32]. However, in the SLM-fabricated maraging steel, the tensile ductility is apparently independent of direction [33] because the retained γ phase that is localized at the melt pool boundaries could suppress the preferential fracture along the melt pool boundaries. Consequently, controlling the retained γ phase inside the lath martensite structure could improve the mechanical performance of SLM-fabricated maraging steel. The proposed mechanism of microstructure development provides new insights about microstructure control by subsequent heat treatments. Furthermore, it has been interestingly reported that introducing carbides in the alloy powder enhances the formation of γ phase during the SLM process [34]. To control the microstructure (in particular the distribution of γ phase) by laser-scanning strategies during the SLM process and subsequent heat-treatments, it must await our future works to fundamentally investigate the austenite reversion of the SLM-fabricated maraging steel at elevated temperatures. phase) by laser-scanning strategies during the SLM process and subsequent heat-treatments, it must await our future works to fundamentally investigate the austenite reversion of the SLM-fabricated maraging steel at elevated temperatures. Figure 8. Schematics showing (a-c) the formation process of microstructure in the maraging steel during the selective laser melting process, together with (d) a schematic thermal history of the sample by local laser heating. 
Summary

In this study, we have characterized the microstructure and associated crystallographic features of bulk maraging steels fabricated by selective laser melting (SLM) combined with a powder bed technique. The fabricated sample exhibited characteristic melt pools in which the regions had locally melted and rapidly solidified. A major part of these melt pools corresponded to the α-matrix, which exhibited a lath martensite structure with a high density of dislocations. A number of fine retained γ grains with a <001> orientation along the build direction were often localized around the melt pool boundaries. The orientation relationship of these fine γ grains with respect to the adjacent α grains in the martensite structure was determined as (111)γ//(011)α and [-101]γ//[-1-11]α (Kurdjumov-Sachs orientation relationship). Utilizing the obtained results, we inferred the microstructure development of maraging steels during the SLM process. These results can provide new insights for controlling the microstructure of SLM-fabricated maraging steels through the SLM process and subsequent heat treatments, for developing materials for industrial molds and dies.

Acknowledgments: The support of "Knowledge Hub Aichi," a Priority Research Project of the Aichi Prefectural Government, Japan, is gratefully acknowledged. We are grateful for the technical support in preparing samples provided by Dr. Y. Ishida.

Conflicts of Interest: The authors declare no conflict of interest.
Orbital order-disorder transition in La(1-x)Nd(x)MnO(3) (x = 0.0-1.0) and La(1-x-y)Nd(x)Sr(y)MnO(3) (x = 0.1; y = 0.05, 0.1)

The nature of the orbital order-disorder transition has been studied in the La(1-x)Nd(x)MnO(3) (x = 0.0-1.0) series, which covers the entire range between the two end points, LaMnO(3) and NdMnO(3), as well as in La(0.85)Nd(0.1)Sr(0.05)MnO(3) and La(0.8)Nd(0.1)Sr(0.1)MnO(3). It has been observed that the first-order nature of the transition gives way to higher order with the increase in "x" in the case of pure manganites. The latent heat (L) associated with the transition first drops with a steeper slope within x = 0.0-0.3 and then gradually over the range 0.3 < x < 0.9. This drop could possibly be due to the evolution of a finer orbital domain structure with "x". In the case of Sr-doped samples, the transition appears to be of higher-order nature even for a doping level of 5 at%. In both cases, of course, the transition temperature T_JT rises systematically with the drop in average A-site radius <r_A> or the rise in average Mn-O-Mn bond bending <cos²φ>, while no apparent correlation could be observed with the doping-induced disorder σ². The cooperative nature of the orbital order therefore appears to be robust.

The issue of the orbital order-disorder transition in pure and doped manganites has attracted a great deal of attention in recent times. 1 It has been widely noted that there is a close interplay among the charge, spin, and orbital degrees of freedom in rare-earth perovskite manganites, which gives rise to inter-dependence among orders in the different degrees of freedom. Recent studies 2 have shown that lattice distortion plays an important role in governing this interplay. More specifically, the interplay between the cooperative Jahn-Teller (JT) distortion and the GdFeO3-type distortion in several transition metal oxide systems leads to a change in the JT distortion from a- to d-type. 3 The nature of the static orbital order therefore depends strongly on such interplay and can vary from conventional antiferro C-type to CE-type and even to a geometrically frustrated type for 90° metal-oxygen-metal bonds. 4 The nature of the order-disorder transition, too, appears to depend on the coupling between the JT order and the lattice, or on the interplay between the JT and GdFeO3-type distortions. Recent works 5,6 have pointed out that while the transition in the case of pure LaMnO3 is first order in nature, it becomes second order in the case of PrMnO3 or NdMnO3, where the average A-site radius <r_A> is smaller. In the case of Ba-doped systems, the transition becomes second order beyond a very low doping level (2.5 at%). 7 The reason behind such a cross-over in the nature of the transition is not quite clear at this moment. In the case of Ba-doped systems, the higher-order transition is proposed to be due to a drop in the anharmonic coupling between the JT distortion and the volume strain. 8 Strong coupling results in a large volume contraction at the transition, as observed in LaMnO3, and hence a first-order transition. Weak coupling, on the other hand, results in a higher-order transition. In the case of single-valent systems, enhanced Mn-O-Mn bond bending due to a smaller ion at the A-site possibly results in a finer orbital domain structure. Such a system undergoes a broader transition, reminiscent of higher order, near T_JT.
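As a quick aside for readers who want to reproduce the <r_A> and σ² bookkeeping used in what follows, a minimal sketch is given below. The nine-coordinate Shannon radii and the helper name a_site_stats are our illustrative assumptions; the paper does not state which coordination numbers or radii it adopted.

```python
# Sketch: average A-site radius <r_A> and size variance sigma^2 for
# La(1-x-y)Nd(x)Sr(y)MnO3. The Shannon ionic radii below (IX coordination,
# in Angstrom) are an assumed choice for illustration only.
RADII = {"La": 1.216, "Nd": 1.163, "Sr": 1.31}

def a_site_stats(x_nd, y_sr=0.0):
    """Return (<r_A>, sigma^2) for the composition La(1-x-y)Nd(x)Sr(y)MnO3."""
    fractions = {"La": 1.0 - x_nd - y_sr, "Nd": x_nd, "Sr": y_sr}
    r_avg = sum(f * RADII[ion] for ion, f in fractions.items())
    # size variance: sigma^2 = sum_i f_i * r_i^2 - <r_A>^2
    sigma2 = sum(f * RADII[ion] ** 2 for ion, f in fractions.items()) - r_avg ** 2
    return r_avg, sigma2

for x in (0.0, 0.3, 0.5, 1.0):
    r_avg, sigma2 = a_site_stats(x)
    print(f"x = {x:.1f}: <r_A> = {r_avg:.4f} A, sigma^2 = {sigma2:.2e} A^2")
```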
Given such a rich background, it is pertinent to raise a few important points which have, to the best of our knowledge, not been addressed so far in the published literature: (i) how and when the orbital order-disorder transition turns higher order as a function of the rise in <cos²φ> (which quantifies the 180°−φ bending angle of the Mn-O-Mn bonds) or the drop in <r_A>; (ii) whether the cross-over from first- to second-order transition follows similar patterns in both cases: with the drop in <r_A> in pure manganites and with the rise in "Sr" doping level in doped ones; (iii) the nature of the transition and the drop in transition temperature (T_JT) with Sr-doping level in a smaller-<r_A> system; (iv) whether doping-induced disorder has any role to play in governing T_JT. In this paper, we address these issues by studying the orbital order-disorder transition in the series La1-xNdxMnO3 (x = 0.0-1.0), which covers the entire range between LaMnO3 and NdMnO3. In addition, we have also studied the role of Mn-O-Mn bond bending in governing the drop in T_JT with Sr-doping level. We have employed resistivity, differential thermal analysis (DTA) and differential scanning calorimetry (DSC) to observe the transition and its nature in all such systems. We found that, although the latent heat of transition does drop with Nd-substitution level (x) as expected, the pattern of the drop varies with 'x': it drops sharply within x = 0.0-0.3 and then gradually to zero across 0.3 < x < 0.9. On the other hand, Sr-doping gives rise to a higher-order transition even below a doping level of 5-10 at%. T_JT, in the case of pure manganites, rises almost linearly with 'x', having no apparent dependence on the variance σ², while its drop, in the case of Sr-doped systems, appears to be influenced by bond bending due to the smaller average A-site radius <r_A>. In both cases, there is no apparent signature of phase segregation, which points out the robustness of the cooperative nature of the orbital order network.

The experiments have been carried out on high-quality single-phase bulk polycrystalline samples prepared using powder obtained from a solution chemistry technique: autoignition of a gel formed following boiling of a mixed aqueous metal nitrate solution in the presence of citric acid. The heat treatment has been carried out in an argon atmosphere to maintain the oxygen stoichiometry. The actual Mn4+ concentration has been estimated by a redox titration technique 9 and is found to vary between 2-10% in the samples belonging to the La1-xNdxMnO3 series. The samples have been further characterized by scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDX), and X-ray diffraction (XRD). The four-probe resistivity has been measured across the temperature range 300-1200 K under vacuum (~10^-3 torr), while DTA (Shimadzu, DTA-50) and DSC (Pyris Diamond DSC, Perkin-Elmer) studies have been carried out under an inert atmosphere (nitrogen) across the temperature ranges 300-1273 K and 300-973 K, respectively. High-quality platinum paste is used for making the contacts for the resistivity measurement. In Fig. 1, we show the lattice parameters and lattice volume, estimated from room-temperature XRD, for the undoped samples. For all the samples, the room-temperature phase appears to be the orbital-ordered orthorhombic O' phase with c/√2 < a < b (space group Pbnm). The lattice volume collapses with the increase in 'Nd' substitution, as expected. The collapse is sharp within x = 0.0-0.3 and gradual beyond that.
This pattern appears to be similar to the pattern of the drop in latent heat and could possibly be correlated with it. The room-temperature orthorhombic lattice distortion D increases systematically with the drop in average A-site radius <r_A>. We have also estimated the disorder σ² using the standard relation σ² = Σᵢ xᵢrᵢ² − <r_A>², where xᵢ and rᵢ are the fractional occupancies and ionic radii of the A-site ions. These parameters are listed in Table-I for all the compositions. In Fig. 2, we show the resistivity vs. temperature curves for the pure manganites. The activation energies E_a estimated from these data are also listed in Table-I. There is an overall trend of increase in E_a with 'x', which tallies with the pattern reported in Ref. 5. In one or two cases, however, the resistivity as well as E_a appear to be anomalous, which could be due to variations in sintering and hence in the microstructure of the samples. In Fig. 3, we show the corresponding data in the presence of "Sr"; the Mn4+ concentration in these cases is found to be within 14-17%, and is thus close to the boundary of the insulator-metal transition; the order-disorder transition appears to be of higher-order nature. In Fig. 4, we show the resistivity vs. temperature patterns. T_JT could be clearly identified from the resistivity pattern. However, no peak could be observed in the DTA/DSC patterns, which signals a higher-order transition at T_JT. In the inset of Fig. 4, we show the drop in T_JT with Sr-doping level in the presence and absence of Nd at the A-site. This particular result demonstrates that the drop in T_JT due to the rise in "Sr" can be, at least, partially compensated by reducing <r_A>. This has also been observed previously when the drop in T_JT with doping level 'x' was studied in La1-xSrxMnO3 and La1-xCaxMnO3; the pattern turns out to be broader in the latter case. 10 These results again point out that the cooperative structure of the orbital order network is robust and does not collapse because of high σ². However, one significant observation is the cross-over in the nature of the transition. In the case of Sr-doped samples, the cross-over appears to take place within an even smaller doping regime (<5 at%). Earlier, it has been shown 7 for the Ba-doped samples that the cross-over takes place beyond 2.5 at%. This is certainly in contrast to the observation made in the case of the undoped manganite series and is another important result of this paper. The reason behind the cross-over from first- to higher-order transition is still not well established. Recently, it has been proposed 8 that the anharmonic coupling between the JT distortion and the volume strain has a role to play in determining the volume collapse or the nature of the transition near T_JT. Below a certain value of the coupling parameter, the nature of the transition changes from first to higher order. On the other hand, local measurements of the orbital order network using a coherent X-ray beam 11 indicate the presence of an orbital domain structure and a reduction in the domain size with the increase in lattice distortion: from 4000-7000 Å in LaMnO3 to roughly 320 Å in Pr0.6Ca0.4MnO3. It has also been pointed out that with the increase in Mn-O-Mn bond bending a frustrated structure evolves. 4 In a recent report, observation of frustration of the e_g¹ level due to a smaller ion at the A-site, and a consequent drop in the Néel point T_N, has been presented. 12 The mechanism of the drop in coupling between the JT distortion and the volume strain could be useful in describing the cross-over in 'Ba'- or 'Sr'-doped systems, where one observes a rise in the orbitally disordered metallic ferromagnetic phase.
However, it is not immediately clear whether it holds good for the pure manganites, where the lattice distortion systematically increases and static orbital order is found to prevail. In fact, in "Sr"-doped systems, the possibility of "orbital liquid" phase formation due to quantum fluctuations has been highlighted. 13 We thank A. Kumar for assistance in chemical analysis and P. Choudhury for helpful discussions. This work is carried out under the CSIR networked program "custom-tailored special materials" (CMM 0022).
Structural and Vibrational Investigations of Mixtures of Cocoa Butter (CB), Cocoa Butter Equivalent (CBE) and Anhydrous Milk Fat (AMF) to Understand the Fat Bloom Process

Some studies have found that the proportions of cocoa butter (CB), cocoa butter equivalent (CBE) and anhydrous milk fat (AMF) tend to influence the blooming delay when these are mixed. The goal of our research is to determine the effects of the proportions of CB, CBE and AMF on the structural organization of the final mixtures. X-ray diffraction, DSC, MIR and Raman spectroscopy were used to analyze the structural features and the vibrational modes of four mixtures: CB + 0.5AMF, CB + AMF, CB + 0.5AMF + CBE and CB + AMF + CBE. At room temperature, the triglyceride ingredients of CB, CBE and AMF do not fully exhibit the known crystalline forms V or VI, unlike a recent CB sample; part of these triglycerides is in form IV instead. The presence of the latter seems to be a key parameter that favors the deceleration of the transformation to form VI, which is responsible for the development of fat bloom.

Introduction

The fatty blooming of chocolate is characterized by the loss of the initial gloss of its surface, giving it a more or less white appearance. For milk chocolate, this phenomenon is directly linked to the physico-chemical structure of cocoa butter (CB), cocoa butter equivalent (CBE) and milk fatty acids (anhydrous milk fat: AMF) [1-6].

Sonvaï and Rousseau [1] have shown that the proportions of triglycerides from cocoa butter (CB), cocoa butter equivalent (CBE) and anhydrous milk fats (AMF) in chocolate strongly influence the speed of appearance of blooming. These authors have shown that the time in weeks before the appearance of polymorphic crystals in form VI, responsible for the blooming of chocolate, depends on the proportion by mass of cocoa butter (CB), cocoa butter equivalent (CBE) and milk fatty acids (AMF) in the product: for the mixtures CB + 0.5AMF and CB + AMF, the blooming of chocolate appears after two weeks; for the mixture CB + 0.5AMF + CBE, after 20 weeks; while for the mixture CB + AMF + CBE, it appears after 30 weeks.

We notice that products containing cocoa butter equivalent (CBE) take much longer to bloom than products not containing it. The "CB + AMF" mixture, not containing CBE, takes only two weeks before blooming, compared to thirty weeks (i.e., fifteen times more) for the "CB + AMF + CBE" mixture, or twenty weeks for the "CB + 0.5AMF + CBE" mixture. CBE therefore appears to delay very significantly the transition of cocoa butter crystals from form V to form VI. In addition, it can be noted that a suitable amount of milk fatty acids (AMF) in a product containing CBE further delays the appearance of crystals in form VI.

Indeed, the "CB + AMF + CBE" mixture, containing twice as much milk fatty acids as the "CB + 0.5AMF + CBE" mixture, takes 10 more weeks to bloom, that is to say, an additional delay in the transition of 50%. The more milk powder a milk "chocolate" contains, in the presence of cocoa butter equivalent, the longer it would take to bloom. Adding milk fatty acid to delay blooming has also been shown for dark chocolate. Indeed, a few years earlier, it had already been demonstrated that the addition of 1 to 2% of milk fatty acid in dark chocolate delayed the blooming of the product [7]. This could be explained by the presence of triglycerides in the fatty acids of milk but not in cocoa butter. Another, more recent, study, conducted by Bisvas et al.
[8], also demonstrated that the presence of cocoa butter substitute (from palm oil, noted CBS) in the composition of a dark chocolate delayed the blooming of the product. Indeed, they showed that after two weeks of storage at 29 °C ± 1 °C, a dark chocolate containing no cocoa butter substitute (CBS) bloomed, unlike a dark chocolate containing 20% CBS. However, their experiments also showed that a chocolate containing only 5% CBS bloomed just like the chocolate not containing CBS. The presence of CBS in dark chocolate therefore makes it possible to delay blooming, provided that its proportion in the product is sufficient. Da Silva et al. [9] also concluded that when the chocolate was subjected to temperature cycling, the resistance of CBS and CBE to the formation of fat bloom became more evident.

ATR-FTIR and Raman spectroscopy are non-destructive vibrational spectroscopy techniques which give sensitive information about the molecular structure of solid and liquid TGs and its conformational dependence [6,10-12]. Some bands of the Raman spectra are particularly interesting for investigating the polymorphic structure of AMF. The bands in the spectral region 3200-2700 cm⁻¹ correspond to the ν(C-H) stretching modes. The ν(C=O) ester carbonyl stretching region appears at 1800-1700 cm⁻¹, and the ν(C=C) stretching region (olefinic band) near 1660 cm⁻¹.

After separately studying the cocoa butter, the cocoa butter equivalent and the fatty acids of milk [13-15], we present a structural and vibrational study, by X-ray diffraction, DSC, MIR and Raman spectroscopies, of the four mixtures studied by Sonvaï and Rousseau. The objective is to better understand the phenomenon of fatty blooming of chocolate observed by Sonvaï and Rousseau for the same four mixtures. For this, we looked for markers of differentiation between these samples through structural studies as a function of temperature and through vibrational studies.

Mixtures Preparation

Cocoa butter (CB) came from the Ivory Coast. CB, CBE and AMF were purchased from the industry Cadbury (Canada). From mass spectrometry experiments by two different techniques, ESI-HRMS and MALDI-HRMS [6], we can conclude that our samples of CB and CBE are identical with respect to the types of triglycerides in them; but regarding the three main triglyceride components, POS, SOS and POP, the proportions are different: in CB, POS dominates in quantity compared to the other two (practically 50% of the total), while for CBE, POP plays this role (practically 46% of the total), with a slight increase of SOS.

Cocoa butter, cocoa butter equivalent and milk fatty acids are solid fats at room temperature. They therefore had to be melted at T = 60 °C to make the mixtures. The samples were then stored in a refrigerator at T = 4 °C until used for the experiments at room temperature. Using a weighing scale accurate to one hundredth of a gram, the required mass of each sample was weighed. The samples were then melted using a heating magnetic stirrer, and mixed with the magnetic stirrer to obtain a homogeneous liquid. The cooling then took place naturally in the open air. In Table 1, the mixtures are presented by mass.
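As a small arithmetic aside, the mixture labels can be translated into composition by mass directly, assuming (consistently with the one-third AMF content quoted later for CB + 0.5AMF and CB + AMF + CBE) that the coefficients in the labels are mass ratios; the sketch below is ours and is not a substitute for Table 1.

```python
# Sketch: mass fractions implied by the mixture labels, read as mass ratios
# (an assumption consistent with the AMF contents quoted in the text:
# 33% for CB + 0.5AMF and CB + AMF + CBE, 50% for CB + AMF,
# and 20% for CB + 0.5AMF + CBE).
MIXTURES = {
    "CB + 0.5AMF":       {"CB": 1.0, "AMF": 0.5},
    "CB + AMF":          {"CB": 1.0, "AMF": 1.0},
    "CB + 0.5AMF + CBE": {"CB": 1.0, "AMF": 0.5, "CBE": 1.0},
    "CB + AMF + CBE":    {"CB": 1.0, "AMF": 1.0, "CBE": 1.0},
}

for name, parts in MIXTURES.items():
    total = sum(parts.values())
    pretty = ", ".join(f"{k}: {100 * v / total:.1f}%" for k, v in parts.items())
    print(f"{name:>18} -> {pretty}")
```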
SAXS-WAXS Experiments

X-ray diffraction patterns were acquired using a microfocus X-ray tube (IµS, Incoatec, Geesthacht, Germany), selecting the Cu Kα radiation. It was used with an intensity of 1000 µA and a voltage of 50 kV. The incident beam was focused at the detector with multilayer Montel optics and a 2D Kratky block collimator. Small-angle (SAXS) and wide-angle (WAXS) X-ray scattering analyses were performed simultaneously using two position-sensitive linear detectors (Vantec-1, Bruker, Billerica, MA, USA) set perpendicular to the incident beam direction, up to 7° (2θ) and at 19° to 28° (2θ) from it, respectively. The direct beam was stopped with a W-filter. The scattered intensity was reported as a function of the scattering vector q = 4π sin θ/λ, where θ is half the scattering angle and λ is the wavelength of the radiation. The repeat distances d, characteristic of the structural arrangements, were given by q (Å⁻¹) = 2π/d (Å). Silver behenate and tristearin (β form) were used as standards to calibrate the SAXS and WAXS detectors, respectively.

All samples (~10 mg) were introduced into thin-walled glass capillaries (GLAS, Müller, Berlin, Germany) of 1.5 mm external diameter, which were then placed in a specially designed temperature-controlled sample holder (Microcalix, Setaram, Lyon, France). For static measurements at 20 °C, the acquisition time was 10 min. For measurements with temperature, samples were heated at 2 °C/min and the acquisition time was 1 min, leading to frame recording every 2 °C.

Differential Scanning Calorimetry (DSC)

DSC experiments were carried out on a Netzsch DSC 204 F1 Phoenix® heat-flux differential calorimeter at a heating rate of 2 °C/min under a constant argon flow of 200 mL/min. Each sample was heated from room temperature to T = 60 °C. Samples were weighed in aluminum sample pans covered with a pierced lid. An empty aluminum sample pan with a pierced lid was used as a reference. Three temperatures could be measured: T_onset, T_max and T_offset, which correspond respectively to the beginning, the top and the end of thermal events.

MIR Spectroscopy

The MIR measurements were carried out at the Walloon Agricultural Research Center (CRA-W), Belgium. The apparatus consists of an FT-MIR Vertex 70 spectrometer (Bruker Optics, Ettlingen, Germany) equipped with a Golden Gate ATR (Attenuated Total Reflectance) accessory. This ATR consists of a monolithic diamond crystal. The incident beam contains radiation from 4000 to 600 cm⁻¹, which corresponds to the mid-infrared. The incident light penetrates the sample to a depth of 7 µm maximum, after having passed through the diamond. The reflected beam emerges through the diamond before reaching the detector. The spectral resolution was set at 1 cm⁻¹ and the number of co-added spectra was set to 128 scans. The measurements were carried out at room temperature. A spectrum of the ambient air was used as background. The ATR-FTIR spectra underwent special processing in order to allow comparison of the intensity ratios between the samples: before normalizing the spectra on the peak at 1729 cm⁻¹, which corresponds to the most intense peak in the spectral region 2000-600 cm⁻¹, we subtracted the baseline for each spectrum.

Raman Spectroscopy

The measurements were carried out at the Walloon Agricultural Research Center (CRA-W), Belgium.
Raman spectra were acquired using a SENTERRA II Bruker Raman spectrometer. This fully automated instrument combines excellent sensitivity and a high resolution of 1.5 cm⁻¹. The experiments were carried out with a laser of wavelength λ0 = 532 nm, of maximum power Pmax = 25 mW, an acquisition time of 100 s, and the addition of two spectra. This instrument makes it possible to obtain spectra ranging from 50 to 3470 cm⁻¹. The Raman spectra underwent special processing in order to allow comparison of the intensity ratios between the samples: before normalizing the spectra on the peak at 2885 cm⁻¹, which corresponds to the most intense peak in the spectral region 3200-50 cm⁻¹, or on the peak at 1743 cm⁻¹, which corresponds to the most intense peak in the spectral region 1800-1600 cm⁻¹, we subtracted the baseline for each spectrum.

Statistical Analysis

Regarding the determination of the Raman and ATR spectra, we chose as a research protocol to record five different spectra for each sample in order to check the reproducibility of the results. In micro-Raman, we recorded five spectra at five different positions of the incident laser for each sample, while in ATR-FTIR we carried out the same experiment five times by sampling the same material five times. Whether with the ATR or the Raman spectra, we observed no difference between the spectra for the same sample, confirming the reproducibility of our results.

In order to refine our study, we modeled the MIR and Raman spectra with Lorentzian functions to determine the values of the wavenumbers as well as the areas of the associated peaks. The frequency values of the modes are obtained by the modeling carried out with the software ORIGIN 5.0 Professional (from OriginLab, Northampton, MA, USA). The modeling method was proposed by Bresson et al. [16]. Statistical analysis of the data was performed by analysis of variance (ANOVA) with the software ORIGIN 5.0. The level of significance was defined as p ≤ 0.05.

We used as peak function the following Lorentz function: y = y0 + (2A/π) · w/[4(x − xc)² + w²], where xc represents the value of the fitted mode wavenumber, A the area of the peak, and w the width at mid-height. We took an iteration number equal to 100. The error was estimated to be ±0.5 cm⁻¹.

Polymorphic Discrimination at 20 °C

Figure 1 exhibits the intensity in the SAXS (Small-Angle X-ray Scattering) (Figure 1a) and WAXS (Wide-Angle X-ray Scattering) (Figure 1b) configurations at room temperature for the "pure" compounds, CB, CBE and AMF, and for the mixtures CB + 0.5AMF, CB + AMF, CB + 0.5AMF + CBE and CB + AMF + CBE. Peaks at small angles are assigned to long d-spacings, reflecting the lamellar structure of the TGs; peaks at wide angles correspond to short d-spacings, defining distances between chains of the TGs.
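To make the fitting protocol described in the Statistical Analysis subsection concrete, here is a minimal sketch using Python's scipy in place of the ORIGIN software used by the authors; the synthetic ν(C=O) band, the noise level, and the starting guesses are all made up for the demonstration.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentz(x, y0, xc, A, w):
    """Lorentzian with offset y0, center xc, area A and FWHM w:
    y = y0 + (2A/pi) * w / (4(x - xc)^2 + w^2)."""
    return y0 + (2.0 * A / np.pi) * w / (4.0 * (x - xc) ** 2 + w ** 2)

# Synthetic nu(C=O) band near 1743 cm^-1 (values invented for the demo)
x = np.linspace(1710.0, 1770.0, 300)
rng = np.random.default_rng(0)
y = lorentz(x, 0.02, 1743.0, 15.0, 8.0) + rng.normal(0.0, 0.01, x.size)

popt, pcov = curve_fit(lorentz, x, y, p0=[0.0, 1740.0, 10.0, 10.0])
y0, xc, A, w = popt
print(f"center = {xc:.1f} cm^-1, area = {A:.1f}, FWHM = {w:.1f} cm^-1")
```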
Based on studies of CB, CBE and AMF [7,13,14], the diffraction peaks can be assigned: for SAXS, the peak at 63.5 Å corresponds to the first order of the triple-chain-length structure 3L001, the peak at 32.1 Å to the second order of the same structure (3L002), and the peak centered at 43.0 Å to the first order of a double-chain-length structure 2L001. For WAXS, we can identify the CB V(β1) form from the last six peaks (4.03, 3.90, 3.79, 3.70 and 3.59 Å). The presence of the peaks at 4.03 and 3.79 Å indicates that form V is involved, and not form VI, according to the literature. Indeed, during the preparation of the mixtures, the pure compounds were melted up to T = 60 °C: the samples' fusion erased their polymorphic history. It is therefore not surprising that some TGs in the mixture are still in form V, especially when the sample has been stored under optimal conditions (at a temperature of 4 °C), as Bresson et al. [13] showed in establishing the protocol for obtaining isolated cocoa butter polymorphs. The peak at 4.27 Å was present in the diffractogram of CBE at room temperature and was assigned to the 2L-β' structure of POP triglycerides [14,17].

In addition, we notice a strong variation in the intensity of the peak at 43 Å, compared to the other two peaks at 63 Å and 33 Å, depending on the type of mixture, or compared to CB and CBE. This leads us to consider a new parameter ρ, which takes into account the intensity of the I43 peak at 43 Å relative to the I63 peak in SAXS: ρ = I43/I63 = I(2L001)/I(3L001). From the study of this parameter, we will be able to propose an approximation of the proportion of TGs in β' form relative to the whole mixture. At a temperature of 20 °C, this parameter takes the values ρ_CB = 0.127 and ρ_CBE = 0.729 for CB and CBE, respectively. These results seem to indicate that CBE contains 83% more TGs in IV or β' form than CB, since (ρ_CBE − ρ_CB)/ρ_CBE = 83%.

Concerning AMF, SAXS shows the typical 2L long spacing around 41 Å (q = 0.154 Å⁻¹, with a third order at 0.453 Å⁻¹); the WAXS data are noisy, certainly because the solid fraction is low at room temperature, and it is not possible to see the weak β' signature described in the literature [15].

Regarding the SAXS, it is interesting to note that the 2L structure seems to be imposed by the CB, as the position of its third order is at 0.431 Å⁻¹ instead of 0.453 Å⁻¹ for AMF. This is less discriminant for the first orders, because the peak positions are very close, but the peak at 0.145 Å⁻¹ is similar to that of CB alone (0.146 Å⁻¹) and slightly different from that of AMF alone (0.154 Å⁻¹).

At T = 20 °C, the parameter ρ takes the value ρ_CB+0.5AMF = 0.938 for this mixture. Compared to CB and CBE, this parameter is larger: it increases by 87% between CB and CB + 0.5AMF, whereas by mass the TGs in β' form have increased by 50%, and by 22% between CBE and CB + 0.5AMF.

After increasing the amount of AMF in the CB + AMF mixture, it can be seen that the peak positions are practically the same as for the CB + 0.5AMF mixture. This is also true for the 2L structure, which keeps the "positions of CB" even though the proportion of AMF is greater. In addition, the intensity of the 2L001 peak compared to the 3L001 and 3L002 peaks is much greater for the "CB + AMF" mixture than for the "CB + 0.5AMF" mixture.
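The percentage comparisons above follow directly from the definition of ρ; the short check below re-derives them from the ρ values quoted in the text (the helper name relative_increase is ours).

```python
# Back-of-the-envelope check of the rho = I43/I63 = I(2L001)/I(3L001)
# comparisons, using the values quoted in the text at T = 20 C.
RHO = {"CB": 0.127, "CBE": 0.729, "CB + 0.5AMF": 0.938}

def relative_increase(rho_low, rho_high):
    """Excess of rho_high over rho_low, expressed relative to rho_high."""
    return (rho_high - rho_low) / rho_high

print(f"CBE vs CB:        {100 * relative_increase(RHO['CB'], RHO['CBE']):.0f}%")
# -> 83%, as quoted in the text
print(f"CB+0.5AMF vs CB:  {100 * relative_increase(RHO['CB'], RHO['CB + 0.5AMF']):.0f}%")
# -> ~86-87% (the text quotes 87%)
print(f"CB+0.5AMF vs CBE: {100 * relative_increase(RHO['CBE'], RHO['CB + 0.5AMF']):.0f}%")
# -> 22%, as quoted in the text
```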
At T = 20 °C, we measure a value of the parameter ρ for CB + AMF of 1.531, while ρ_CB+0.5AMF (T = 20 °C) = 0.936, i.e., a 38% increase of this ratio. We have to compare this parameter with the proportion by mass of TGs in IV or β' form: between CB + 0.5AMF and CB + AMF, these TGs have increased by 33%. It would seem once again that the parameter ρ, linked to the SAXS X-ray results, follows the same evolution as the parameter µ linked to the mass distribution of TGs according to their polymorphic forms.

We find the same peaks as for the last two mixtures presented. The peak assignment is therefore the same: the peaks at 64.8 Å, 44.2 Å and 32.9 Å correspond respectively to the structures 3L001, 2L001 and 3L002, and the peaks observed in WAXS correspond to the β1 form and to the β form.

At T = 20 °C, we measure a value of the parameter ρ for CB + 0.5AMF + CBE of 0.694, while ρ_CBE (T = 20 °C) = 0.729, i.e., very similar values. Regarding the CB + AMF + CBE mixture, the parameter ρ reaches 1.233, while ρ_CB+0.5AMF+CBE (T = 20 °C) = 0.694, i.e., a 44% increase of this ratio. For this mixture, the proportion by mass of TGs from AMF has increased by 40% compared to CB + 0.5AMF + CBE. It would seem that the ρ parameter is directly related to the mass distribution of TGs in form IV at the temperature T = 20 °C.

Behavior with Temperature

Figures 2 and 3 deal with the thermal behavior of the mixtures. More precisely, DSC curves obtained during first heating from room temperature to 50 °C are plotted in Figure 2. For all samples, the curve is not flat at the start of the experiment, which indicates that the samples are not completely solid at room temperature. Indeed, the mixtures have a pasty structure and are quite flexible at room temperature, unlike the pure compounds, which seemed solid to the touch.
Several events are clearly visible on the DSC traces, with a main peak between 30.8 °C and 34 °C (position of the peak maximum) and shoulders before it (as is the case for the CB + 0.5AMF, CB + 0.5AMF + CBE and CB + AMF + CBE mixtures) or after it (as in the case of the CB + AMF mixture). This suggests the presence of a second endothermic event of lower intensity. We deduce the coexistence of two distinct crystal structures at room temperature: one relating to the 2L-β' form and the other to the 3L-β form. This is confirmed by the SAXS patterns with temperature. For all the mixtures, the 3L structure (peaks at 64 and 32 Å) finishes melting first, followed by the 2L organizations (peak at 44 Å) due to AMF and/or CBE.

Bresson et al. [6] showed that CBE exhibits the same vibrational behavior in MIR spectroscopy as CB. It is questionable whether the presence of CBE, which delays chocolate blooming, manifests itself in the MIR spectra. We can distinguish different regions in the spectra shown in Figure 4a. Here, we will focus on the two following regions:
• the spectral region 3200-2700 cm⁻¹, corresponding to the ν(C-H) stretching modes (Figure 4b);
• the spectral region 1800-1700 cm⁻¹, corresponding to the ν(C=O) ester carbonyl stretching region (Figure 4c).

If CBE has the same vibrational behavior in MIR spectroscopy as CB in the mixtures, the MIR spectra of the "CB + 0.5AMF" and "CB + AMF + CBE" mixtures should be identical. Indeed, both of these two mixtures contain one third (1/3) of AMF and two thirds (2/3) of cocoa butter (only CB, or a mixture of CB and CBE).
In the spectral region 1800-1700 cm⁻¹ (Figure 4c), it actually seems that there is no notable difference between the spectra of the "CB + 0.5AMF" and "CB + AMF + CBE" mixtures. The models of the MIR spectra of the four mixtures in this spectral region are presented in Figure 4b. In Figure 4a, the spectra obtained for CB, CBE and AMF at room temperature are presented for comparison [6]. We observe four components in total, whose values of σ remain almost stable from one mixture to another: 1729 cm⁻¹, 1735.5 cm⁻¹, 1743 cm⁻¹ and 1751 cm⁻¹.

Moreover, it should be noted that the pure compounds also presented these four components for the ν(C=O) vibrations (see Figure 5). The values of the peaks of the four mixtures, as well as those of the compounds alone, are grouped together in Table 2. The values of the peaks in cm⁻¹ of the mixtures and of the pure compounds are almost identical, except for the component at 1735.5 cm⁻¹, which for AMF is located at 1739.8 cm⁻¹. In Table 2, we present the values of the areas of the modes for each component. It can be seen that the areas for the "CB + 0.5AMF" and "CB + AMF + CBE" mixtures are similar. In conclusion, it would seem that the presence of CBE instead of CB is not visible in MIR spectroscopy in the 1800-1700 cm⁻¹ area. In Figure 5b, it can be seen that some components have larger areas in some mixtures than in others. It is therefore interesting to comment on the evolution of the intensity and the area of each peak from one mixture to another, using the values of the areas of each component (Table 3). For example, the peak area at 1743 cm⁻¹ is the highest for the "CB + AMF" mixture, which contains the most AMF (50%) (area = 26.2 au), and the lowest for the "CB + 0.5AMF + CBE" mixture, which contains the least (20% of AMF) (area = 6.1 au). The "CB + 0.5AMF" and "CB + AMF + CBE" mixtures, which both contain 33% of AMF, have areas of intermediate and almost identical values (area = 13.5 au for CB + AMF + CBE and area = 14.8 au for CB + 0.5AMF). It would therefore seem that the peak at 1743 cm⁻¹ mainly reflects the ν(C=O) vibrations of the TGs of AMF. In conclusion, CBE TGs have the same vibrational behavior in MIR spectroscopy as CB TGs in mixtures. This is not very surprising, since the study of CB alone and CBE alone did not reveal any vibrational differences in MIR [14]. On the other hand, the detailed study of the MIR spectra in the spectral zone 1800-1700 cm⁻¹ allows us to identify the major contributor among the elements of the mixtures for three vibrational modes: the peaks at 1751 and 1743 cm⁻¹ are predominantly sensitive to
AMF, while the peak at 1735.5 cm⁻¹ is sensitive to CB and CBE.

Raman Investigations

Bresson et al. [14] showed that the MIR spectroscopy study of CB and CBE failed to find vibrational modes that could differentiate CB from CBE, unlike Raman spectroscopy. In addition, A. Lambert et al. showed a vibrational behavior for AMF quite different from that of CB and CBE in Raman spectroscopy at room temperature [15]. Raman spectra at room temperature for the four mixtures are represented in Figure 7a-c, respectively, in the spectral zones 3100-600, 3100-2700 and 1770-1710 cm⁻¹.

The spectral region 1800-1700 cm⁻¹, corresponding to the stretching vibration of the carbon-oxygen double bonds, noted ν(C=O), was found to be very rich for the three pure compounds [14,15]. It is therefore interesting to study this zone for the mixtures. For this, we carried out modeling by Lorentzian functions for the four mixtures (see Figure 8b). In order to analyze this spectral region more easily, the Raman spectra of the pure compounds at room temperature are also presented (see Figure 8a). The modeling makes it possible to determine the value in cm⁻¹ of the center of the peaks (Table 4) as well as the areas of each peak (Table 5).

The component at 1735 cm⁻¹ does not exist in AMF, whereas it exists in all the other samples [13]. It can thus be used as a witness of the contribution of AMF in the mixtures. In Table 5, it can be seen that between the "CB + 0.5AMF" and "CB + AMF" mixtures, the variations in the area ratios are very significant. The modes at 1743 and 1730 cm⁻¹ are common to all mixtures. Between these two mixtures, AMF's contribution is doubled.
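For completeness, the pre-processing applied to all spectra (baseline subtraction, then normalization on a reference peak, as described in the Methods) can be sketched as follows; the straight end-point baseline is our simplifying assumption, since the paper does not specify the baseline model it used.

```python
import numpy as np

def normalize_spectrum(wavenumber, intensity, ref_peak):
    """Subtract a straight baseline through the two end points of the
    spectrum, then scale so that the intensity at the point closest to
    ref_peak (e.g. 2885 or 1743 cm^-1 for Raman, 1729 cm^-1 for ATR-FTIR)
    equals 1. Assumes wavenumber is sorted in increasing order."""
    baseline = np.interp(wavenumber,
                         [wavenumber[0], wavenumber[-1]],
                         [intensity[0], intensity[-1]])
    corrected = intensity - baseline
    ref_idx = np.argmin(np.abs(wavenumber - ref_peak))
    return corrected / corrected[ref_idx]
```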
Figure 5. The MIR carbonyl stretching region (1770-1710 cm⁻¹) fitted by Lorentzian curves of CB, CBE and AMF (a) and of the four mixtures (b): CB + 0.5AMF, CB + AMF, CB + 0.5AMF + CBE and CB + AMF + CBE, at room temperature. The solid lines represent the components fitted by Lorentzian functions.

Figure 6. The MIR C-H stretching region (3050-2750 cm⁻¹) fitted by Lorentzian curves of CB + 0.5AMF + CBE at room temperature. The solid lines represent the components fitted by Lorentzian functions.

Figure 8. The Raman carbonyl stretching region (1770-1710 cm⁻¹) fitted by Lorentzian curves of CB, CBE and AMF (a) and of the four mixtures (b): CB + 0.5AMF, CB + AMF, CB + 0.5AMF + CBE and CB + AMF + CBE, at room temperature. The solid lines represent the components fitted by Lorentzian functions.

Table 1. Composition of the mixtures of cocoa butter (CB), cocoa butter equivalent (CBE) and milk fatty acids (AMF).

Table 2. Areas of the components of the MIR spectra for the C=O carbonyl group for CB, CBE, AMF and the mixtures.

Table 4. Values of the ν(C=O) (cm⁻¹) peaks of the Raman spectra of CB, CBE, AMF and the mixtures at room temperature. * values obtained by Lorentzian modeling.

Table 5. Evolution of the area ratios of the components of the Raman spectra for the carbonyl group C=O for the four mixtures.
Observation of the efficacy of parathyroidectomy for secondary hyperparathyroidism in hemodialysis patients: a retrospective study

Purpose: Parathyroidectomy (PTX) is commonly performed as a treatment for secondary hyperparathyroidism (SHPT) in patients with end-stage renal disease (ESRD). We aimed to evaluate the efficacy of PTX in patients with SHPT who underwent hemodialysis. Methods: This retrospective study analyzed the clinical treatment of 80 hemodialysis patients with SHPT who underwent either total PTX with forearm autotransplantation (TPTX + AT) or subtotal parathyroidectomy (SPTX). We compared the changes in biochemical indices before and after surgery, as well as the attenuation of intact parathyroid hormone (iPTH), in the TPTX + AT and SPTX groups. We also evaluated clinical symptoms and quality of life using the Visual Analog Scale (VAS) and the Short Form-36 Questionnaire (SF-36) before and at 3, 6, and 12 months after surgery. Results: Serum iPTH and serum phosphorus levels decreased significantly after surgery in the 80 patients with SHPT (P < 0.05). Within one month of surgery, there was a difference in iPTH levels between the TPTX + AT and SPTX groups, but there was no difference over time. Patients experienced significant improvement in their clinical symptoms of restless legs syndrome, skin itching, bone pain, and joint pain at 1 week after the operation (P < 0.001). Quality of life significantly improved after surgery, as assessed by SF-36 scores (P < 0.05). Hypocalcemia was the most common postoperative complication, occurring in 35% of patients. Within the first 12 months after surgery, 5 patients had a recurrence. Conclusion: PTX is effective in rapidly reducing iPTH levels, improving calcium and phosphorus metabolism disorders, and enhancing patients' quality of life by safely and effectively relieving clinical symptoms.

Introduction

Chronic kidney disease (CKD) is frequently complicated by secondary hyperparathyroidism (SHPT), which is characterized by abnormal metabolism of calcium and phosphorus that triggers compensatory hyperplasia of the parathyroid gland and increased secretion of intact parathyroid hormone (iPTH) throughout the gland. Patients suffering from SHPT are prone to various complications, including hyperparathyroidism, hypercalcemia or hypocalcemia, persistent hyperphosphatemia, and disturbances in the functions of the bone (e.g., bone pain, deformities) [1], the neurological system (e.g., insomnia) [2], the hematological system (e.g., anemia and coagulation dysfunction) [3], and the cardiovascular system (e.g., arterial sclerosis) [4]. Additionally, patients may experience skin itching and joint pain, which can significantly impair their quality of life. Moreover, long-term chronic electrolyte and biochemical disorders can increase the incidence of cardiovascular events and all-cause mortality [5,6].
At present, the primary treatments for SHPT include medical and surgical interventions, alongside emerging therapies such as photodynamic therapy [7]. Although low-phosphate diets, hemodialysis, and drug treatment can reduce iPTH levels to a certain extent in early- and middle-stage patients, drug treatment has little effect on improving high iPTH, ion metabolism disorders, and symptoms in patients with refractory or late-stage SHPT [8]. Since Stanbury first described SHPT in 1960, parathyroidectomy (PTX) has been an effective treatment [9]. The 2017 clinical practice guidelines for chronic kidney disease-mineral and bone disorder (CKD-MBD) recommend PTX as the preferred therapy for refractory SHPT. This surgical intervention is recommended over medical treatments due to its superior effectiveness in controlling iPTH levels and addressing the underlying complications of SHPT [10]. There are three main types of PTX: subtotal parathyroidectomy (SPTX), total parathyroidectomy (TPTX), and total parathyroidectomy with autotransplantation (TPTX + AT) [11,12]. According to reports, PTX can increase the survival rate of dialysis patients by 15-57% and can improve hypercalcemia, hyperphosphatemia, tissue calcification, bone mineral density, and health-related quality of life (QOL) [13]. QOL is a significant indicator used to assess the health status of SHPT patients. It allows a better evaluation of their disease prognosis and of the effectiveness of PTX treatment, and it is closely associated with long-term survival. The Short Form-36 Questionnaire (SF-36) may be a reliable instrument for evaluating quality of life in individuals with SHPT who have undergone parathyroid surgery, as well as a powerful predictor of morbidity and unfavorable outcomes in dialysis patients [14,15]. In addition, the Visual Analog Scale (VAS) is a commonly used subjective evaluation method for quantifying sensations or experiences such as pain, emotion, and satisfaction. However, to date, no studies have combined these two scales to offer a more thorough evaluation of the effectiveness of PTX. Herein, this retrospective study examines blood biochemical indices, quality of life, and clinical symptoms in hemodialysis patients both before and after PTX treatment over an extended duration, presenting a comprehensive evaluation of the efficacy of PTX.

Study design and patients

This retrospective single-center study reviewed the electronic medical records of 160 uremic patients who underwent long-term hemodialysis and parathyroidectomy (PTX) at our hospital between October 2014 and April 2022. After the exclusion of 80 individuals for noncompliance, 80 participants were included in the study; 60 patients underwent TPTX + AT surgery, while 20 underwent SPTX surgery. The review and grouping of the study are shown in Fig.
1. The inclusion criteria were as follows: (1) age ≥ 18 years; (2) maintenance hemodialysis at least twice weekly for more than 3 months; (3) meeting the diagnostic criteria for refractory secondary hyperparathyroidism (SHPT): serum intact parathyroid hormone (iPTH) > 600-800 ng/L accompanied by a high calcium level or hyperphosphatemia, severe bone pain, itchy skin, extraosseous calcification and deformity, failure of medical treatment, and imaging examination identifying at least one enlarged parathyroid gland; and (4) successful PTX surgery. A decline of more than 70% in intraoperatively monitored iPTH, relative to the preoperative level, was the criterion for a successful operation [16]. Exclusion criteria included significant cardiovascular disease, active inflammation or infection, malignancy, and treatment with steroids and/or immunosuppressants. The study was approved by the institutional ethics committee.

Surgical methods and perioperative management

All patients underwent a comprehensive preoperative evaluation upon admission to exclude any surgical contraindications. The evaluation included a series of tests such as complete blood cell counts, serum electrolytes, chest X-rays, electrocardiograms, coagulation function screening, and parathyroid ultrasound. TPTX + AT and SPTX were the surgical procedures employed by experienced attending surgeons in our hospital. Hemodialysis was performed within 24 h postoperation using a high-calcium dialysate of 1.75 mmol/L. To avoid postoperative bleeding, heparin-free dialysis was used for a week after surgery. The primary aim of hemodialysis is to remove toxins in a timely manner and maintain electrolyte stability; thereafter, patients received hemodialysis three times a week at our hospital's hemodialysis center. Intravenous calcium gluconate at 8-16 g/24 h was administered for 2-3 weeks postsurgery, with routine administration of calcitriol at 0.25-2.5 µg/d, to maintain the total serum calcium level between 1.8 and 2.2 mmol/L.

The health status of patients was evaluated using the Chinese version of the Medical Outcomes Study 36-item Short Form health survey (SF-36), a widely validated and commonly used generic tool belonging to the international quality of life assessment program. It is easy to use, acceptable to patients, and fulfils stringent criteria of reliability and validity [17]. The SF-36 has been translated and applied in over 50 countries, and it measures eight domains: physical function (PF), role-functioning physical (RP), bodily pain (BP), general health (GH), vitality (VT), social functioning (SF), role-functioning emotional (RE), and mental health (MH), with scores ranging from 0 to 100. The physical health summary score (PCS) and the mental health summary score (MCS) were also calculated. Higher scores indicate better self-perceived health and a higher quality of life, while a change of at least 2 points in the summary scores is a clinically significant indicator of a change in health status. Furthermore, the Visual Analog Scale (VAS) was employed to appraise the primary symptoms of restless legs syndrome, skin itching, bone pain, joint pain, and insomnia, with each item scored on a range of 0-10. Higher VAS scores indicated more severe symptoms. All the aforementioned scores were assessed by specialized physicians during patient hospitalization, outpatient dialysis, or follow-up visits.
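As a small illustration, the ≥ 2-point rule for a clinically significant change in the summary scores can be applied to the 6- and 12-month PCS/MCS values reported in the Results below; the helper name is ours.

```python
# Sketch: flagging a clinically significant change in an SF-36 summary
# score, using the >= 2-point criterion stated above. The values are the
# 6- and 12-month PCS/MCS means reported in the Results.
def clinically_significant(before, after, threshold=2.0):
    return abs(after - before) >= threshold

scores = {"PCS": (82.24, 86.98), "MCS": (77.94, 82.23)}
for name, (m6, m12) in scores.items():
    print(f"{name}: 6->12 month change = {m12 - m6:+.2f} "
          f"(clinically significant: {clinically_significant(m6, m12)})")
```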
If the total serum calcium concentration was lower than 1.875 mmol/L at 72 h after surgery, it was defined as severe postoperative hypocalcemia. The albumin-corrected serum calcium was calculated as follows: corrected value (mmol/L) = total calcium concentration + 0.8 × (40 − serum albumin). Postoperative hyperkalemia was defined as a postoperative serum potassium concentration exceeding 5.5 mmol/L. Recurrent secondary hyperparathyroidism (SHPT) was diagnosed if iPTH ≥ 300 pg/mL, or 9 times higher than the upper limit of normal, and symptoms such as bone pain and itching appeared 6 months postsurgery. If the iPTH level was below 10 pg/mL after 1 year of follow-up, hypoparathyroidism was diagnosed. Graft survival was evaluated based on the normalization of blood iPTH levels within three months.

Statistical analysis

Statistical analysis was conducted on a sample of n = 80 patients using SPSS version 26 for Windows. The mean (standard deviation) and median (interquartile range) were used to report continuous variables with normal and nonnormal distributions, respectively. Paired t-tests and nonparametric tests (Wilcoxon signed-rank test) were used to compare continuous variable data. To compare the variations in metrics between the TPTX + AT and SPTX groups, we employed Student's t-test or the Mann-Whitney U test. The chi-square test was employed to compare postoperative complication rates. Statistical significance was defined as p < 0.05.

Pathological findings of the parathyroid glands

The pathological analysis of 320 excised parathyroid glands from 80 patients revealed the presence of parathyroid adenoma (4 cases) and parathyroid nodular hyperplasia (316 cases). Among our patients, 2 underwent removal of 2 glands, 3 had 3 glands removed, 69 had all 4 glands removed, 5 had 5 glands removed, and 1 had all six glands removed. Notably, further imaging or laboratory studies confirmed residual parathyroid glands in patients who had less than complete gland removal.

Preoperative demographic and laboratory findings

The 80 patients who underwent surgery before April 2022 are summarized in Table 1. Of these patients, 45% (36) were female, with a mean (SD) age of 46.32 (11.81) years and a mean dialysis duration of 8.55 (4.22) years. The most common primary disease was chronic glomerulonephritis, accounting for approximately 38.75% (31 cases), followed by diabetic nephropathy, accounting for approximately 22.5% (18 cases).

Comparison of blood biochemical indicators before and after surgery

As summarized in Table 2, there was no significant difference in hemoglobin levels between preoperative and postoperative measurements at the different time points. Alkaline phosphatase (ALP) levels increased significantly in the first week after surgery compared to preoperative levels (P < 0.05) but did not show a significant difference from preoperative levels at 1 month postoperatively. By approximately 3 months after surgery, the median ALP levels had returned to normal. Immediately after the surgery, all participants showed a substantial reduction in their levels of iPTH, serum calcium, and phosphorus compared to the corresponding preoperative levels. On the third day after the operation, a significant reduction in serum calcium levels was observed, with a mean value of 2.09 ± 0.25 mmol/L (Table 3).

Comparison of the effects of TPTX + AT and SPTX on laboratory indicators

The follow-up data showed that iPTH and blood phosphorus levels were significantly lower at each postoperative time point than preoperatively in both groups, as shown in Fig.
Comparison of the effects of TPTX + AT and SPTX on laboratory indicators
In both groups, the follow-up iPTH and blood phosphorus levels were significantly lower at each postoperative time point than preoperatively, as shown in Fig. 2. There were differences in iPTH between the two groups in the short term (within 1 month after surgery, *p < 0.05), but no significant differences were seen in the long term (3 months after surgery and later). There was no significant difference in calcium and phosphorus values between the two groups during the follow-up period (Figs. 3 and 4).

Improvement of symptoms after surgical treatment
One week after the operation, patients showed significant improvement in restless leg syndrome, pruritus, bone pain, and arthralgia (P < 0.05). However, improvement in insomnia was not observed until three months after surgery. At the six-month and twelve-month follow-up assessments, patients showed a significant improvement in all symptoms, including restless leg syndrome, pruritus, bone pain, arthralgia, and insomnia (P < 0.05), compared with the preoperative period (Table 4).

Improvement of quality of life after surgical treatment
Comparison of QOL scores at different periods (Table 5): the quality-of-life scores in all eight dimensions were significantly higher at 6 months and 12 months after the operation than before the operation (P < 0.05). The PCS score increased to 82.24 (6.59) points at 6 months after the operation and 86.98 (5.37) points at 12 months after the operation. The MCS score increased to 77.94 (8.27) points at 6 months after the operation and 82.23 (7.28) points at 12 months after the operation.

Postoperative complications and recurrence
Regarding postoperative complications, hypocalcemia (35%) and hyperkalemia (22.5%) were relatively common (Table 6). Hypocalcemia had the highest incidence, with symptoms such as twitching at the corners of the mouth or numbness of the limbs resulting from a decrease in ionized calcium in the blood. However, even at a level as low as 1.24 mmol/L, no severe convulsions were observed, and most patients showed significant improvement or disappearance of symptoms after receiving sufficient intravenous and oral calcium supplementation. Although the incidence of hyperkalemia was also high, no patients developed malignant arrhythmias, and only 2 patients had atrial fibrillation, which converted to sinus rhythm after treatment. Postoperative bleeding was mostly minor bleeding at the surgical site or wound infection, without any cases of asphyxia or hypovolemic shock. Two cases of bleeding were from the gastrointestinal tract and unrelated to the surgery itself, possibly triggered by anticoagulant therapy and the predisposition to gastrointestinal bleeding in patients on long-term dialysis. There were no patients with vocal cord paralysis, and no patient died during the perioperative period. Three of the patients who underwent TPTX + AT experienced recurrence within 1 year, as did 2 of the patients who underwent SPTX. In some patients, blood iPTH did not drop by 50% within 5 min of complete excision of all parathyroid tissue; 3 patients did not achieve a 70% decrease within 10 min, and 9 patients did not achieve an 80% decrease within 30 min. One patient who met the criteria for all 3 decline thresholds had a recurrence within 1 year, whereas among the 10 patients who did not meet the criteria (a patient was considered not to meet them if any one of the time thresholds was missed), 4 had a recurrence within one year.
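The three intraoperative decline thresholds just described lend themselves to a compact check; this is a hedged sketch with illustrative values, not study data:

```python
# Intraoperative iPTH decline thresholds described above:
# >= 50% at 5 min, >= 70% at 10 min, >= 80% at 30 min after excision.
# A patient fails the criteria if any one threshold is missed.

THRESHOLDS = {5: 0.50, 10: 0.70, 30: 0.80}  # minute -> required decline

def meets_all_thresholds(ipth_pre: float, ipth_at: dict) -> bool:
    for minute, required in THRESHOLDS.items():
        decline = 1.0 - ipth_at[minute] / ipth_pre
        if decline < required:
            return False
    return True

# Illustrative values in pg/mL:
print(meets_all_thresholds(1800.0, {5: 700.0, 10: 400.0, 30: 250.0}))  # True
```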
Discussion
Our retrospective study collected data from 80 patients who underwent PTX, analyzing changes in hemoglobin (Hb), alkaline phosphatase (ALP), iPTH, serum calcium, and serum phosphorus levels, as well as health-related quality of life and clinical symptom improvement, before and after surgery. The main strength of this study is the comprehensive analysis of the effects of parathyroidectomy on hemodialysis patients with secondary hyperparathyroidism over a relatively long period of time (1 year), from both objective indicators (iPTH, serum calcium, and so on) and the patients' subjective perspective (SF-36 score, VAS score). Additionally, we confirmed that the intraoperative fast parathormone assay was predictive of the recurrence of hyperparathyroidism. We found that parathyroidectomy did not significantly affect hemoglobin levels in end-stage renal disease patients, possibly due to irreversible damage to the renal interstitium and a consequent decrease in erythropoietin production. ALP is a group of glycoproteins that exhibit phosphatase activity under alkaline conditions, catalyzing the hydrolysis of various phosphates. It is a reliable indicator of bone turnover, promoting bone mineralization and reflecting the overall bone metabolism status of patients [18]. Our study demonstrated that postoperative ALP levels displayed a significant increase from 234.8 (130.

Given the short half-life of iPTH, which is only 2 to 4 min, a retrospective analysis found that the median half-life of iPTH during surgery was 3 min and 9 s [20]. Therefore, there may be a rapid decrease in serum iPTH during the PTX period. On the other hand, the speed of the iPTH decrease may indicate whether all parathyroid tissues have been removed. According to Hiramitsu et al., a serum iPTH decrease of 70% from preoperative levels within 10 min after excision indicates sufficient removal of parathyroid tissues [21]. Serum iPTH should be tested more frequently, at 5, 10, and 30 min after parathyroidectomy, to detect residual or undiscovered parathyroid tissue during surgery. If the decrease in serum iPTH does not reach 70%, it is likely that not all parathyroid tissue has been removed. To exclude ectopic parathyroid tissues, adipose tissue near the thymus and the carotid artery sheath should be removed. However, Conzo et al. suggested that an immediate postoperative iPTH level of 26.52 pmol/L can predict the success of surgery, even if the decrease does not reach 80% within 20 min [22]. In this study, 5 patients had recurrence during the 1-year follow-up, and in some of them the recurrence may be due to parathyroid tissue left undiscovered during surgery. Patients who did not meet the standard had a higher rate of recurrence during the follow-up period than those who met it. It has been reported that the intraoperative fast parathormone assay might predict the persistence and recurrence of hyperparathyroidism [12,23]. We tracked intraoperative iPTH levels in our SHPT patients and observed the same pattern.
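To make the kinetics above concrete: with a median half-life of about 3 min 9 s, roughly 11% of the baseline iPTH should remain 10 min after complete excision, an expected decline of about 89%, which is why a decline of less than 70% at 10 min raises suspicion of residual tissue. A minimal sketch of this arithmetic, assuming idealized first-order decay:

```python
# First-order decay sketch: fraction of baseline iPTH expected to remain
# t minutes after complete excision, given the ~3 min 9 s median half-life
# cited above. Assumes pure exponential clearance (an idealization).

def remaining_fraction(t_min: float, half_life_min: float = 3.15) -> float:
    return 0.5 ** (t_min / half_life_min)

print(round(remaining_fraction(10.0), 3))  # ~0.111, i.e. an ~89% decline
```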
In our study, hypocalcemia (35%) was the most common postoperative complication. This could be related to the high incidence of postoperative hungry bone syndrome, in which more free calcium enters the bone, resulting in hypocalcemia. Hypocalcemia was also found to be the most common complication in the study by Zhao et al. A meta-analysis showed that preoperative serum calcium, ALP, and iPTH levels were significant predictors of hypocalcemia in patients with SHPT after PTX [26]. In addition, the total weight of parathyroid tissue removed during surgery has also been identified as a risk factor for postoperative hypocalcemia [24]. No patient in our study had nerve damage, possibly because our operations were performed by the same team of well-trained surgeons, who always exposed the recurrent laryngeal nerve and properly drained the surgical site after surgery, leading to a low incidence of postoperative issues. Postoperative recurrence was easier to manage in patients receiving TPTX + AT than in those receiving SPTX, because it could be addressed by surgery on the more practical forearm instead of a second neck operation, which carries higher risks and complexity. In our study, one patient underwent reoperation on the forearm graft for recurrence, whereas recurrent SPTX patients did not undergo reoperation because of the risks associated with neck surgery and the complexity of a second operation. As Lau et al. described, the long-term probability of significant hypocalcemia with TPTX + AT was higher than that with SPTX [13]. However, a few TPTX + AT patients may experience long-term serum iPTH values that are lower than normal (close to 0) due to graft survival issues, and preventing this requires low-temperature storage of the removed tissue during surgery. Unfortunately, our hospital lacks such equipment, so only symptomatic calcium supplementation can be offered to alleviate the impact on patients with persistently low iPTH after surgery. Almost all patients experienced relief of symptoms such as skin itching, bone pain, and joint pain shortly after surgery. High concentrations of parathyroid hormone (PTH) can cause calcium and magnesium ions to accumulate on the skin surface and stimulate histamine release, leading to skin itching. PTH is a macromolecule that exists in low concentrations in the plasma and is poorly cleared by diffusion alone during hemodialysis, which can lead to PTH accumulation in the body over time and worsening skin itching in long-term dialysis patients. Moreover, the results of the SF-36 showed significant improvement in all aspects compared to the preoperative assessments, but the improvement in MCS was smaller than that in PCS at 6 and 12 months after surgery. This may be because depression is the most common comorbid mental illness in end-stage renal disease patients, affecting an estimated 25% of all patients [27], which can have an impact on the assessment of MCS.
This study has several limitations that should be noted. First, it is a single-center retrospective analysis with a limited sample size, which may limit the generalizability of the findings. Second, it is important to recognize the potential limitations of evaluation tools such as the SF-36 and VAS scores, which are subject to individual subjectivity despite our effort to use uniform evaluation criteria. Additionally, it can be challenging to distinguish symptoms of SHPT from clinical symptoms of uremia, such as insomnia, depression, and bone pain, which may overlap and affect the interpretation of the study results. Future studies with larger, multicenter samples and more objective symptom evaluation methods are warranted.

Secondary hyperparathyroidism, a consequence of inadequate renal function, is a severe complication that significantly affects quality of life and increases the risk of death in patients with end-stage renal disease. Our study findings indicate that regardless of the surgical method used, there were significant improvements in both quality of life and laboratory measures among patients. Several prospective investigations and meta-analyses have also supported these findings [28-31].

Conclusion
In summary, we found that TPTX + AT and SPTX were both safe and effective treatments for SHPT in our patients, and that both significantly improved the patients' clinical symptoms, quality of life, and calcium and phosphorus metabolism. The decrease in iPTH during the brief postoperative period was also within our expectations and may have had a positive effect on the patients' recurrence rate. However, we must acknowledge that our sample size is small and that subjective patient perceptions play a role in the assessment of how well patients are doing.

Fig. 1 Flowchart of the grouping of the study
Fig. 4 Comparisons of serum phosphorus levels between TPTX + AT and SPTX
Table 1 Patient characteristics and baseline data. Categorical data are presented as n (%); continuous data are presented as the mean ± SD when normally distributed and as the median (interquartile range) when skewed. Normal ranges: albumin (40-55 g/L); creatinine (57-97 µmol/L)
Table 3 Comparison of iPTH, serum calcium, and phosphorus levels before and after PTX. iPTH: intact parathyroid hormone; /: not monitored. iPTH levels are expressed as the median (interquartile range); serum calcium and serum phosphorus are expressed as the mean ± standard deviation. *P < 0.05 was considered statistically significant compared with preoperative values
Table 4 Analysis of VAS scores on symptoms before and after PTX
Table 5 Analysis of SF-36 scores on quality of life in different periods
Table 6 Postoperative complications and recurrence
Multiple Sclerosis: Evaluation of Purine Nucleotide Metabolism in Central Nervous System in Association with Serum Levels of Selected Fat-Soluble Antioxidants
Oxidative stress plays an important role in the pathogenesis of demyelinating diseases, including multiple sclerosis (MS). Increased energy requirements during the remyelination of axons and mitochondrial failure are among the causes of axonal degeneration and disability in MS. In this context, we analyzed to what extent the increase in purine catabolism is associated with selected blood lipophilic antioxidants and whether there is any association with alterations in serum levels of coenzyme Q10. Blood serum and cerebrospinal fluid (CSF) samples from 42 patients with diagnosed MS and 34 noninflammatory neurologic patients (control group) were analyzed. Compared to the control group, MS patients had significantly elevated values of all purine nucleotide metabolites, except adenosine. The serum lipophilic antioxidants γ-tocopherol, β-carotene, and coenzyme Q10 were, for the vast majority of MS patients, deficient or at the lower border of physiological values. Serum levels of TBARS, a marker of lipid peroxidation, were increased by 81% in the MS patients. The results indicate that the deficit of lipophilic antioxidants in the blood of MS patients may have a negative impact on the bioenergetics of reparative remyelinating processes and promote neurodegeneration.

Introduction
Multiple sclerosis (MS) is an inflammatory, immune-mediated demyelinating disease of the central nervous system (CNS). An energy-deficient state and oxidative and nitrative stress have been implicated in the degeneration of axons in multiple sclerosis [1][2][3][4]. In chronic lesions, axonal degeneration correlates with the extent of inflammation and leads to axonal loss through a slow burning process [5]. Mickel [6] proposed that a lipid peroxidation disturbance caused by free radical production is involved in the breakdown of the myelin sheath. Since then several studies have demonstrated the role of increased free radical production and/or a decreased antioxidant defense in the CNS as causal factors of MS [1,4,7,8]. Following demyelination the axonal membrane undergoes a number of changes, including an increase in the number of sodium channels within the demyelinated part of the axon [9][10][11]. The maintenance of intra-axonal ionic balance and of the resting membrane potential following the influx of sodium through the increased sodium channels relies on the largest energy consumer in the central nervous system, the Na+/K+-ATPase [12]. In noninflammatory environments this increase in the energy demand of axons lacking a healthy myelin sheath is apparent from changes in the density and activity of mitochondria, the most efficient energy-producing organelles [2,13,14]. Though the principal site of MS pathology is the CNS, the lipid status and the membrane properties of the platelets and erythrocytes in the peripheral blood are also altered [7]. Increased lipid peroxide levels have been observed both in the cerebrospinal fluid and in the blood of MS patients [7,8,15]. Lack of sufficient vitamin A and E in the diet has been suggested to be a risk factor for the onset of the disease [7,16]. However, other studies have found that the plasma levels of these vitamins are similar in MS patients and in controls [7,17].
The present study examined serum levels of vitamin A (β-carotene), the vitamin E isomers (α- and γ-tocopherol), and coenzyme Q10 (an antioxidant that is also an indicator of bioenergetic state) in MS patients in relation to the cerebrospinal fluid levels of purine nucleotide degradation products (adenosine, inosine, hypoxanthine, xanthine, and uric acid), which are known to be produced during energy deficiency. Furthermore, we were interested to discover which of these lipophilic antioxidants is associated the most with lipid peroxidation and the degradation of purine nucleotides in MS patients.

Clinical Evaluation of the Patients and Preparation of Samples. Blood serum and cerebrospinal fluid (CSF) samples from 42 patients diagnosed with multiple sclerosis (MS) according to the McDonald criteria were analyzed. The test group consisted of 32 females and 10 males with an average age of 36.3 ± 11.97 years. Each patient had the relapsing-remitting form and was out of relapse at the time. According to our knowledge, the patients did not present with any other serious illnesses. The control group consisted of neurological patients with noninflammatory diseases of the central nervous system (n = 34; 8 males, 26 females) with an average age of 36.06 ± 11.92 years, who had routine CSF analysis and biochemical parameters within the physiological values. Each proband signed informed consent and agreed with the investigation of the mentioned parameters. An ethical committee statement was not necessary, since the examination of the parameters was indicated by a neurologist and was a component of the diagnostic process. None of the control patients had a demyelinating disease or any other disease associated with an increase of oxidative stress and degradation of purine nucleotides. Blood and CSF samples were taken at the same time. Aliquots of CSF and serum samples were centrifuged, coded, and immediately stored at −70 °C in polypropylene tubes until being assayed. Serum levels of thiobarbituric acid reactive substances (TBARS) were determined spectrophotometrically at 532 nm according to Janero and Burghardt [20]. The state of the blood-brain barrier (BBB) of the patients was evaluated via the QA index, the ratio between the albumin concentration in CSF and that in blood serum, multiplied by 1000. A QA index higher than 7.4 indicates BBB deterioration.

Statistical Analysis. All statistical analyses were carried out using StatsDirect statistical software, version 2.7.2 (StatsDirect Sales, Sale, Cheshire M33 3UY, United Kingdom). p < 0.05 was considered significant. For normally distributed data, the means and standard deviations are shown, and Student's t-test for comparing two independent samples was used. For non-Gaussian distributions, the median values and the 25-75% interquartile range (IQR) are shown, and independent variables were compared using the nonparametric Mann-Whitney test. The linear relationship between continuous variables was evaluated using Spearman's correlation coefficient.
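A minimal sketch of the QA index just defined (the 7.4 cutoff is from the text; names and example values are illustrative):

```python
# QA index: (CSF albumin / serum albumin) x 1000; > 7.4 suggests
# blood-brain barrier deterioration, per the definition above.

def qa_index(csf_albumin: float, serum_albumin: float) -> float:
    """Both concentrations in the same units, e.g. mg/L."""
    return csf_albumin / serum_albumin * 1000.0

def bbb_deteriorated(qa: float, cutoff: float = 7.4) -> bool:
    return qa > cutoff

# Illustrative values only: CSF 250 mg/L, serum 42000 mg/L (42 g/L)
qa = qa_index(250.0, 42000.0)
print(round(qa, 2), bbb_deteriorated(qa))  # 5.95 False
```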
Results
Selected basic biochemical parameters in the serum and cerebrospinal fluid (CSF) characterizing the control group and the group of patients with multiple sclerosis (MS), in comparison to the reference values, are shown in Table 1. The group of multiple sclerosis patients (n = 42, average age 36.49 ± 11.97 years) was characterized by pathologically high CSF levels of immunoglobulin IgG, with evidence of increased intrathecal synthesis of IgG expressed by elevated values of the IgG index and Reiber's index (RIG) (Table 1). The average QA values of the MS patients (which characterize the integrity of the blood-brain barrier) were within the physiological range but were significantly higher than in the controls (p = 0.0293). In Table 2 the CSF levels of the purine nucleotide degradation products adenosine (Ado), inosine (Ino), hypoxanthine (Hyp), xanthine (Xan), and uric acid (UA) are shown in comparison with the control set of neurological patients with noninflammatory CNS diseases. Compared to the control group, MS patients had significantly elevated values of all purine nucleotide metabolites except adenosine, which was significantly lower (0.20 ± 0.16 µmol/L versus 0.44 ± 0.20 µmol/L) (Table 2).

Serum Lipophilic Antioxidants and TBARS. The serum lipophilic antioxidants α-tocopherol (αT), γ-tocopherol (γT), the γ/α-tocopherol ratio (γ/αT), β-carotene (βC), and coenzyme Q10 (CoQ10) were in the vast majority of cases lower than normal or in close proximity to the lower border of the physiological values (Table 3). Values are shown as the mean ± standard deviation. The greatest incidence of antioxidant deficiency occurred for γT: up to 71.4% of MS patients had subnormal serum values of γT, and the γT values of the rest of the MS patients were at the lower border of the γT physiological values (Figure 1). The ratio between the vitamin E isomers γ- and α-tocopherol (γ/αT) was reduced, compared to the reference values, in all analyzed MS patients. Serum α-tocopherol levels were within the physiological range (Table 3). We observed a deficit in the serum levels of β-carotene in approximately 30% and in the CoQ10 levels in 45% of MS patients (Table 3).

Spearman Rank Correlations among Analyzed Parameters. Spearman rank correlations among the lipophilic antioxidants and the intrathecal synthesis of IgG, expressed by the IgG index and Reiber's index RIG (Figure 2), showed a significant relationship between CoQ10 and γT (r = 0.355, p = 0.02) and between γT and the IgG index (r = 0.314, p = 0.04). Serum levels of γT significantly correlated with the levels of α-tocopherol as well (r = 0.314, p = 0.04) (Figure 2). Out of the measured antioxidants, γ-tocopherol correlated the most with the CSF levels of purine nucleotide degradation products (Figure 3). The MS patients were divided into subgroups according to the occurrence of antioxidant deficits in serum; the fourth group consisted of patients with deficient or borderline-deficient γT, βC, and CoQ10 (−γTCoQ10βC; n = 10). In accordance with the correlation relationships (Figure 3), the groups of MS patients with β-carotene deficiency (−γTβC and −γTCoQ10βC) had higher values of TBARS and higher intrathecal synthesis of IgG (IgG index, RIG) compared to the groups −γT and −γTCoQ10 (Table 4(a)). All MS subgroups had significantly lower CSF levels of adenosine and significantly increased degradation of adenosine to inosine (Ino/Ado) and to hypoxanthine (Hyp/Ado) compared to the control group (Table 4(b)). The group of MS patients with β-carotene deficiency had increased degradation of hypoxanthine to xanthine (Xan/Hyp), as well as significantly higher CSF
Table 4: (a) Values of selected biochemical parameters (IgG index, QA, and RIG) and serum lipophilic antioxidants and TBARS in patients with multiple sclerosis (MS) divided into subgroups according to the occurrence of the measured antioxidants in serum: −γT (γ-tocopherol deficiency only), −γTCoQ10 (γ-tocopherol and coenzyme Q10 deficiency), −γTβC (γ-tocopherol and β-carotene deficiency), and −γTCoQ10βC (γ-tocopherol, coenzyme Q10, and β-carotene deficiency).
(b) CSF levels of the purine nucleotide degradation products uric acid (UA), hypoxanthine (Hyp), xanthine (Xan), inosine (Ino), and adenosine (Ado) in patients with multiple sclerosis (MS) divided into subgroups according to the occurrence of the measured antioxidants in serum: −γT (γ-tocopherol deficiency only), −γTCoQ10 (γ-tocopherol and coenzyme Q10 deficiency), −γTβC (γ-tocopherol and β-carotene deficiency), and −γTCoQ10βC (γ-tocopherol, coenzyme Q10, and β-carotene deficiency).
In comparison to the control group, significantly elevated CSF levels of xanthine were observed only in patients with γT deficiency and γT + CoQ10 deficiency (Table 4(b)).

Discussion
The results presented suggest that in patients with multiple sclerosis (MS) an alteration in the metabolism of purine nucleotides (Table 2) and a reduction of antioxidant capacity and neuroprotection (Table 3) occur, associated with increased intrathecal synthesis of IgG. Similar results were reported by multiple authors [8,[22][23][24][25]. MS patients may suffer from a cell energy metabolism deficit that can be documented in biological fluids (cerebrospinal fluid and serum). The profile of compounds directly (CSF adenosine, inosine, hypoxanthine, xanthine, and uric acid) or indirectly (serum coenzyme Q10) reflects the imbalance between adenosine triphosphate (ATP) production and consumption. The increased purine nucleotide degradation observed in our MS patients (Table 2) is activated in situations that are associated with a decrease in the amount of ATP and the related rise of adenosine monophosphate (AMP) levels. AMP can be metabolized in the cells in two ways: (a) by deamination to inosine monophosphate (IMP) followed by dephosphorylation of IMP to inosine, or (b) by dephosphorylation to adenosine and its deamination to inosine (Figure 3). At physiological concentrations of ATP, deamination of AMP to IMP is preferred [26]. In experimental autoimmune encephalomyelitis (EAE), the animal model of MS, it has been effectively shown that the efficiency of neuronal ATP biosynthesis is decreased due to mitochondrial malfunctioning, leading to a cell energy state imbalance [27,28]. If this occurs in MS patients too, a continuous outflow of ATP catabolites, including uric acid and its precursors, is expected from the cerebral tissue of the MS lesions into the extracellular space. The picture of an energetic alteration in MS patients is indirectly reinforced by the data referring to the decrease of coenzyme Q10 in the serum of MS patients (Table 3). Coenzyme Q10 is essential for the energy production of the cells, acting as an electron transporter in the mitochondrial respiratory chain. In the process of oxidative phosphorylation in mitochondria, coenzyme Q10 transfers electrons from complex I (NADH-CoQ reductase) or complex II (succinate dehydrogenase) to complex III (the cytochrome bc1 complex) [29]. CoQ10 deficiency at a time of increased energy requirements causes a decrease in the production of ATP and activates the processes that lead to the degradation of ATP to AMP, its dephosphorylation to adenosine, and its subsequent degradation to inosine and hypoxanthine (Table 4(b)). The energy deficit in CoQ10-deficient patients (group 2, Table 4(a)) reduces the CSF levels of neuroprotective adenosine due to increased degradation of adenosine to inosine and hypoxanthine.
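As a minimal illustration of the turnover ratios used above (Ino/Ado, Hyp/Ado, Xan/Hyp), the sketch below computes them from CSF concentrations; the adenosine value echoes the MS mean reported earlier, while the other values are purely illustrative:

```python
# Turnover ratios of purine degradation discussed above; illustrative only.

def turnover_ratios(ado: float, ino: float, hyp: float, xan: float) -> dict:
    """CSF concentrations in umol/L. Higher Ino/Ado and Hyp/Ado suggest
    increased degradation of (neuroprotective) adenosine."""
    return {
        "Ino/Ado": ino / ado,
        "Hyp/Ado": hyp / ado,
        "Xan/Hyp": xan / hyp,
    }

print(turnover_ratios(ado=0.20, ino=1.5, hyp=2.4, xan=1.2))
# {'Ino/Ado': 7.5, 'Hyp/Ado': 12.0, 'Xan/Hyp': 0.5}
```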
Moreover, coenzyme Q10 is one of the most important lipophilic antioxidants, preventing the generation of free radicals as well as oxidative modifications of proteins, lipids, and DNA, and it can also regenerate the other powerful lipophilic antioxidant, α-tocopherol. Decreased levels of CoQ10 in humans are observed in many pathological conditions (e.g., cardiac disorders, neurodegenerative diseases, AIDS, and cancer) associated with intensive generation of free radicals and their action on cells and tissues [29]. A crucial role in all these processes is played by NAD(P)H-dependent reductase(s) acting at the plasma membrane to regenerate the reduced ubiquinol form of CoQ10, contributing to the maintenance of its antioxidant properties [30]. Moreover, the correlations of the serum levels of coenzyme Q10 and the vitamin E isomers (particularly γ-tocopherol) in MS patients show that CoQ10 is involved in the mechanisms that lead to the alteration of purine nucleotide metabolism, as well as in the processes of regeneration of vitamin E (Figure 2). Serum α-tocopherol levels in MS patients were within the physiological values, but serum γ-tocopherol levels were reduced. Up to 71.4% of MS patients had subnormal serum values of γT, and the rest of the MS patients were at the lower border of the γT physiological values (Figure 1). Serum levels of γT significantly correlated with the levels of α-tocopherol (r = 0.314, p = 0.04) (Figure 2). Vitamin E collectively refers to 8 different structurally related tocopherols and tocotrienols that all possess antioxidant activity. The antioxidant activity of vitamin E is derived primarily from αT and γT, of which αT is the most biologically active and the predominant form found in blood. During lipid oxidation the isoforms of vitamin E scavenge reactive oxygen species (ROS). This reaction produces oxidized tocopheroxyl radicals that can be recycled back to the active reduced form through reduction by vitamin C. Without the reduction of vitamin E by vitamin C, vitamin E can act as a ROS donor [31]. In addition to scavenging ROS, γT, in contrast to αT, also reacts with nitrogen species such as peroxynitrite, forming 5-nitro-γ-tocopherol [32]. It has been accepted that peroxynitrite molecules are the final molecules responsible for the pathological processes in neurodegenerative diseases and MS [33]; 3-nitrotyrosine is an indicator of increased formation of peroxynitrite and the most important molecule considered to be in charge of demyelination [33]. Significantly increased plasma levels of 3-nitrotyrosine have been reported in MS patients [15,34]. The low levels of γ-tocopherol and of the γ/α-tocopherol ratio in our MS patients might indirectly point to its high consumption through the quenching of reactive nitrogen species. Serum levels of γT showed a significant positive relationship with CoQ10 (r = 0.355, p = 0.02), and γT also correlated with the IgG index (r = 0.314, p = 0.04), an indicator of intrathecal IgG synthesis. While both tocopherols exhibit anti-inflammatory activity in vitro and in vivo, supplementation with mixed (γ-enriched) tocopherols seems to be more potent than supplementation with α-tocopherol alone [35]. Cook-Mills [36] reported that supplementation with physiological levels of purified natural forms of the vitamin E isoforms αT and γT has opposing regulatory functions during inflammation, such that αT is anti-inflammatory and γT is proinflammatory. The positive correlation of γT with the IgG index could indicate a proinflammatory effect of γT in MS patients.
The imbalance of γT/αT levels in plasma may have significant health consequences. Out of the measured antioxidants, γT correlated the most with the CSF levels of purine nucleotide degradation products (Figure 3). Serum levels of γT correlated positively with the CSF levels of adenosine, hypoxanthine, and xanthine and with the metabolic turnover of adenosine to inosine, and negatively with the CSF levels of uric acid and the metabolic turnover of hypoxanthine to uric acid (Figure 3). The mechanism of the γT influence on the metabolism of purine nucleotides is not yet known. The participation of γT in the process of purine nucleotide degradation is probably connected with its ability to effectively scavenge NO and other free nitrogen radicals, and with the scavenging activity of uric acid, which effectively scavenges the peroxynitrite formed by the reaction of NO with superoxide [37]. Due to its significant correlation with CoQ10, β-carotene, and the IgG index, it can be assumed that its effect on the metabolism of purine nucleotides has a more complex (synergistic) nature. After the division of the MS patients according to their vitamin deficiency status into 4 groups (Table 4), the MS patients with β-carotene deficiency (group 3) had significantly higher values of the IgG index and RIG in comparison to the patients with γT deficiency (group 1) and CoQ10 deficiency (group 2), and in comparison to group 1 (−γT) they had higher levels of TBARS (Table 4(a)). These results show that β-carotene in MS patients participates significantly in the neutralization of the lipid peroxidation processes running in this disease. Among the analyzed serum antioxidants, β-carotene correlated with the TBARS levels only. Beta-carotene, in comparison to α-tocopherol, is more lipophilic and quenches radicals in lipophilic compartments more effectively than α-tocopherol [38]. Carotenoids are best known for their antioxidant activities, including quenching free radicals, reducing damage from reactive oxidant species, and inhibiting lipid peroxidation. Carotenoids also facilitate cell-to-cell communication, which regulates cell growth, differentiation, and apoptosis, and some carotenoids convert to vitamin A [39]. Carotenoids play a pivotal role in the prevention of many degenerative diseases mediated by oxidative stress, including neurodegenerative diseases [40]. The low levels of β-carotene observed in our group of MS patients may be due to the degradation of β-carotene during its scavenging activity. It was found that, during oxidation attacks, carotenoid breakdown products (CBPs) are formed, including highly reactive aldehydes and epoxides [41]. Stimulated neutrophils are able to break down β-carotene and form a number of CBPs, which inhibit mitochondrial respiration. This is accompanied by a reduction in the protein sulfhydryl content, a reduction of the glutathione (GSH) levels and redox status, and an increased accumulation of malondialdehyde (MDA). Changes in the mitochondrial membrane potential can lead to a deterioration in the function of the adenine nucleotide translocator [42]. Beta-carotene also has anti-inflammatory effects. An inflammatory stimulus, such as IFN-γ, activates macrophages to produce various proinflammatory cytokines (TNF-α, IL-1β) and inflammatory mediators, which are synthesized by cyclooxygenase (PGE2) and by inducible NO synthase. The expression of these cytokines and genes may be regulated by the activation of the transcription factor NF-κB. Beta-carotene acts as an inhibitor of the redox activation of this transcription factor [42].
The association between inflammation and a decrease in β-carotene was also shown by Van Herpen-Broekmans [43], who found a negative correlation between the serum levels of β-carotene and the inflammatory marker CRP.

Conclusions
The results of this work show that patients with multiple sclerosis in the early stage of the disease are characterized by a reduced antioxidant, immunoregulatory, and neuroprotective ability, which is reflected by the increased metabolism of purine nucleotides, reduced CSF adenosine levels, low serum levels of the lipophilic antioxidants γ-tocopherol, β-carotene, and CoQ10, and elevated levels of serum TBARS. Serum levels of CoQ10 (an indicator of bioenergetic state) and γT (an isomer of vitamin E) significantly interfere with the metabolism of purine nucleotides in CSF, while β-carotene is rather associated with the intrathecal synthesis of IgG and with the neutralization of the lipid peroxidation processes running in this disease. Because one of the possible causes of axonal degeneration and disability may be an energy deficiency, due to the increased energy requirements of axonal remyelination, together with demyelination and a lipid peroxidation disturbance caused by free radical production, decreased serum levels of CoQ10 and of the lipophilic antioxidants should be taken into account in clinical practice.
Understanding hemodynamics with seven variables
Intensive Care Unit, Hospital da Mulher, Salvador, Bahia, Brazil; Programa de Pós Graduação em Medicina e Saúde, Faculdade de Medicina da Bahia, Universidade Federal da Bahia, Salvador, Bahia, Brazil; Intensive Care Unit, Hospital da Cidade, Salvador, Bahia, Brazil
Correspondence to: Dimitri Gusmao-Flores. Prof. Sabino Silva, 273 Ap 801, 40255-150, Salvador, Bahia, Brazil. Email: dimitrigusmao@gmail.com.
We read with great enthusiasm two excellent reviews on monitoring in critically ill patients, which reinforce the need to integrate different variables to better understand the hemodynamic status of patients with circulatory shock (1,2). This life-threatening condition was classified by Weil and Shubin, many years ago, into four types (hypovolemic, cardiogenic, obstructive, and distributive), considering the different pathophysiological mechanisms compromising cardiac output (3). Over the years, all these concepts were revisited and expanded by adding more variables, like SvO2 and microcirculatory evaluation, explaining the reasons for the reduced peripheral perfusion and setting goals of hemodynamic support for each type of shock (4). Hemodynamic resuscitation and investigation of the cause of the circulatory failure are crucial to prevent organ dysfunction and death. In order to improve the outcomes of patients with circulatory failure, bedside physicians must be able to recognize this condition early so that individualized management can be started. Many educational strategies have been developed over time to facilitate teaching and improve understanding of the pathophysiology of shock states, treatment strategies, and goals [visual tools (5), mnemonics such as SOSD, VIP, and ROSE (4,6), etc.]. We agree with Kattan et al. (1) and Messina et al. (2) on the use of different parameters to facilitate the understanding of hemodynamics, and we created one single graph (Figure 1) integrating seven variables that are very helpful at the bedside in order to diagnose the type of shock, interpret hyperlactatemia, raise suspicion of tissue hypoperfusion, and plan interventions. When approaching the patient with circulatory shock, evaluating as many variables as possible can help us understand the hemodynamic state and decide on the best treatment. We chose seven hemodynamic variables that can give us information about cardiac function, the balance of oxygen delivery and consumption, and cellular metabolism, which are important even in settings where invasive cardiac output monitoring is available. Each of these variables can be briefly interpreted as follows:
ScvO2: although ScvO2 is not identical to SvO2, particularly in shock situations, it can be used as a substitute for O2 extraction (7). Lower values mean greater extraction, which is usually present when oxygen delivery is reduced.
PCO2 gap (7): this gradient is sensitive to blood flow; therefore, it is elevated in situations of reduced cardiac output, or as a result of reduced microvascular blood flow and an increase in CO2 production due to altered microcirculation (for example, sepsis/septic shock).
Lactate (4): the elevation of lactate may reflect a state of tissue hypoperfusion; however, there are several other causes and interpretations. Among the several causes of hyperlactatemia without hypoperfusion, the most frequent are comorbidities such as cancer and liver diseases, mitochondrial dysfunction (due to genetic diseases), vitamin deficiency (B1), or excessive catecholaminergic release.
In these situations, lactate is generally the only altered variable, with no other signs of tissue hypoxia.
CVP (7): central venous pressure does not adequately identify fluid-responsive patients; however, low values reflect an adequate contractile capacity to maintain low right atrial pressures.
Figure 1 Understanding hemodynamics with seven variables. ScvO2, lactate, CVP, and the PCO2 gap are shown on each side of the square; dP/dt is represented by red lines (the different slopes represent how fast the pressure is reached); the mottling score is represented by blue circles, each one reflecting a score; and Pv-aCO2/Ca-vO2 < 1.4 is identified by a yellow semicircle (points within this area mean a ratio less than 1.4). The position of the black dot represents the clinical presentation of each of the situations described, and the graph at the bottom right presents the change of the hemodynamic parameters after interventions aiming for the resolution of septic shock. ScvO2, oxygen saturation of the central venous blood; O2, oxygen; PCO2 gap, central venous-to-arterial carbon dioxide tension difference; CVP, central venous pressure; Pv-aCO2/Ca-vO2, ratio of the venous-to-arterial carbon dioxide tension difference over the arterial-to-venous oxygen content difference; dP/dt, rate of change in pressure with time.
Faced with a state of circulatory shock, a reduced right atrial pressure makes the diagnosis of cardiogenic or obstructive shock unlikely. In selected cases, considering a subjective assessment of the dP/dt (below), the use of inotropes could be considered.
dP/dt (rate of change in pressure with time): this is a measure that requires echocardiography; however, assessment of the arterial pulse waveform can be used as a surrogate. The greater the contractile capacity of the left ventricle, the greater the dP/dt. Analyzing the arterial waveform, the maximal first slope of the arterial pulse wave reflects the contractile capacity: a systolic slope closer to 90 degrees means good contractility (peak pressure is achieved in less time) (8).
Pv-aCO2/Ca-vO2: this is a surrogate of the respiratory quotient. Increased values suggest a CO2 production greater than the O2 consumption, which may indicate anaerobic metabolism (9). It may also identify patients who, being fluid-responsive, will increase O2 consumption with a fluid bolus. Thus, it could help to differentiate hyperlactatemia due to tissue hypoxia (which can benefit from an increase in O2 delivery) from hyperlactatemia due to other causes.
Mottling score (10): cutaneous perfusion depends on cutaneous perfusion pressure, so the presence of mottling suggests hypoperfusion. The higher the mottling score, the earlier death occurs. In situations where there is doubt whether hyperlactatemia is due to an oxygen transport deficit, the absence of mottling would reinforce the idea of elevated lactate due to another etiology, or at least suggest that it can be safe to watch and wait until we have more data at hand.
Combining these variables, it is possible to identify the different types of shock, to evaluate whether we are facing a situation with clear signs of tissue hypoxia (represented by different positions in the graph; see Figure 1), and to decide on the appropriate treatment.
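As a rough bedside illustration of how these seven variables can be combined, the sketch below flags abnormal values. Only the Pv-aCO2/Ca-vO2 cutoff of 1.4 comes from the text; the other thresholds (ScvO2 < 70%, PCO2 gap > 6 mmHg, lactate > 2 mmol/L, CVP < 8 mmHg) are common reference values assumed here for illustration, not recommendations from the letter:

```python
# Hedged sketch: screening the seven variables discussed above.
# Thresholds other than the 1.4 ratio are illustrative assumptions.

def screen(scvo2_pct: float, pco2_gap_mmhg: float, lactate_mmol_l: float,
           cvp_mmhg: float, pva_cav_ratio: float, mottling_score: int,
           steep_arterial_upstroke: bool) -> list:
    flags = []
    if scvo2_pct < 70:
        flags.append("low ScvO2: increased O2 extraction")
    if pco2_gap_mmhg > 6:
        flags.append("high PCO2 gap: low cardiac output or microvascular flow")
    if lactate_mmol_l > 2 and pva_cav_ratio > 1.4:
        flags.append("hyperlactatemia with ratio > 1.4: suggests anaerobic metabolism")
    if cvp_mmhg < 8:
        flags.append("low CVP: cardiogenic/obstructive shock unlikely")
    if mottling_score > 0:
        flags.append("mottling present: cutaneous hypoperfusion")
    if not steep_arterial_upstroke:
        flags.append("blunted arterial upstroke (dP/dt): reduced contractility")
    return flags

# Illustrative values consistent with a distributive (septic) profile:
print(screen(62, 9, 4.2, 5, 1.8, 2, steep_arterial_upstroke=True))
```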
Circulatory shock is a clinical syndrome associated with multi-organ failure and high mortality. Prompt identification of the main pathophysiological mechanism, as well as of the clinical signs of tissue hypoxia, is of extreme importance in order to initiate immediate and appropriate treatment. Our graph is meant to illustrate the hemodynamic parameters that could help with the diagnostic approach, as well as with planning interventions. However, there is often an overlap between the different types of shock, which may make interpretation of the etiology more difficult even when using several integrated variables. The use of graphs and figures to help in understanding the circulatory dynamics of shock syndromes is not new. In 1987, Shoemaker used a four-sided figure representing the four dimensions that characterize fluid systems (pressure, volume, flow, and function, the last best characterized by oxygen consumption) in order to explain the pathophysiology of the different types of circulatory failure (5). Similarly, the presented graph can be useful as an educational tool, especially in teaching units and residency programs.
Spherical splitting of 3-orbifolds
The famous Haken-Kneser-Milnor theorem states that every 3-manifold can be expressed in a unique way as a connected sum of prime 3-manifolds. The analogous statement for 3-orbifolds has been part of the folklore for several years, and it was commonly believed that slight variations on the argument used for manifolds would be sufficient to establish it. We demonstrate in this paper that this is not the case, proving that the apparently natural notion of "essential" system of spherical 2-orbifolds is not adequate in this context. We also show that the statement itself of the theorem must be given in a substantially different way. We then prove the theorem in full detail, using a certain notion of "efficient splitting system."

Introduction
Since the seminal work of Thurston [10], the concept of orbifold has become a central one in 3-dimensional geometric topology, because it embodies the notion of a 3-manifold and that of a finite group action on that 3-manifold. As a matter of fact, Thurston himself indicated how to prove that, under some natural topological constraints, a 3-orbifold with non-empty singular locus carries one of the 8 geometries, so it is a quotient of a geometric 3-manifold under a finite isometric action (see [1] and [4] for modern detailed accounts of the argument). In the case of manifolds (closed orientable ones, say) the two basic obstructions to the existence of a geometric structure are related to the existence of essential spheres and of essential tori. So one restricts first to irreducible manifolds and next to Seifert and atoroidal ones. For this restricted class of manifolds one then has Thurston's geometrization conjecture (or theorem, according to the recent work of Perelman [8]). This result can be regarded as a uniformization theorem for 3-manifolds in view of the Haken-Kneser-Milnor theorem (canonical splitting of a 3-manifold along spheres into irreducible 3-manifolds) and the Jaco-Shalen-Johansson theorem (canonical splitting of an irreducible 3-manifold along tori into Seifert and atoroidal 3-manifolds). The case of 3-orbifolds is similar, where the first basic obstruction to the existence of a geometric structure is removed by restricting to (suitably defined) irreducible 3-orbifolds. The natural expectation would then be to have an analogue of the Haken-Kneser-Milnor theorem, asserting that each 3-orbifold canonically splits along spherical 2-orbifolds into irreducible 3-orbifolds. This result has actually been part of the folklore for several years, but it turns out that to state and prove it in detail one has to somewhat refine the ideas underlying the proof in the manifold case. This paper is devoted to these refinements. We will state in the present introduction two versions of the splitting theorem that we will establish below, addressing the reader to Section 1 for the appropriate definitions.

Theorem 0.1. Let X be a closed locally orientable 3-orbifold. Suppose that X does not contain any bad 2-suborbifold, and that every spherical 2-suborbifold of X is separating. Then X contains a finite system S of spherical 2-suborbifolds such that:
• No component of X \ S is punctured discal.
• If Y is the orbifold obtained from X \ S by capping the boundary components with discal 3-orbifolds, then the components of Y are irreducible.
Any such system S and the corresponding Y, if viewed as abstract collections of 2- and 3-orbifolds respectively, depend on X only.
We inform the reader that this result has a dual interpretation in terms of connected sum, discussed in detail in Section 1. We also note that the existence part of Theorem 0.1 may seem to have a purely theoretical nature, which is not pleasant in the context of 3-dimensional topology, where great emphasis is typically given to algorithmic methods [7]. The next, longer statement (Theorem 0.2) emphasizes the constructive aspects of the splitting: starting from Y = X and S = ∅, as long as some component of Y contains an essential spherical 2-orbifold Σ, one splits along Σ, caps the result, and adds Σ to S. Then the process is finite and the final Y consists of irreducible 3-orbifolds. Moreover, the final S and Y, viewed as abstract collections of 2- and 3-orbifolds respectively, depend on X only, not on the specific process leading to them. In addition, at all steps of the process the essential spherical 2-orbifold Σ can be chosen to be normal with respect to a triangulation of the given orbifold Y.

For the reader familiar with the proof of the splitting theorem for manifolds (see for instance [3]) we will now point out what goes wrong in the orbifold case, and sketch how we have modified the argument. The main steps to show the existence of a splitting of a manifold M into prime ones are as follows:
(a) Show that if M contains a non-separating sphere then it splits as the connected sum of S^2 × S^1 or S^2 ×~ S^1 and some M′, and the first Betti number of M′ is smaller than that of M. Deduce that one can restrict to M's in which all spheres are separating.
(b) Define a family of spheres S ⊂ M to be essential if no component of M \ S is a punctured disc. Show that every essential S can be replaced by one having the same number of components and being normal with respect to any given triangulation of M. Deduce that all maximal essential systems of spheres are finite.
(c) Prove that if S is a maximal essential system then the manifolds obtained by capping off the components of M \ S are irreducible.
Uniqueness is more elaborate to prove, but its validity only depends on the following fact, which we express using the dual viewpoint of connected sum:
(d) The sphere S^3 is the only identity element of the operation of connected sum. No connected sum with S^3 should be employed to realize a manifold.
Turning to orbifolds, we can show that none of the points (a)-(d) has a straightforward extension:
(¬a) The existence of a non-separating spherical 2-suborbifold does not imply the existence of an essential separating one (see Fig. 3 below). In particular, there are infinitely many 3-orbifolds containing some non-separating but no essential separating spherical 2-orbifold (see Section 5).
The difficulty given by non-separating spherical 2-orbifolds is mentioned in [2], where the choice is made of cutting along these 2-orbifolds too. The reasons for our choice of excluding non-separating spherical 2-orbifolds tout court will be discussed in Section 3. We only want to mention here that the techniques of the present paper do not seem to adapt easily to include the case of splitting along non-separating spherical 2-orbifolds. Turning to point (b), we follow [2] and define a family S of spherical 2-suborbifolds of a 3-orbifold X to be essential if no component of X \ S is a punctured discal 3-orbifold. We now have:
(¬b) If S is essential and S′, S′′ are obtained from S by one of the compression moves necessary to normalize S, then neither of S′ and S′′ need be essential (see Fig. 4 below).
This fact could be viewed as a merely technical difficulty, showing that the theory of normal surfaces is inadequate to prove finiteness of maximal essential systems.
The following fact is however more striking:
(¬c) If S is a maximal essential system of spherical 2-suborbifolds of X, and Y is obtained by cutting X along S and capping the result, the components of Y need not be irreducible (see Fig. 5 below).
Given this drawback of the notion of essential system, we have refined it to a certain notion of efficient splitting system, where we impose artificially that the capped components of the complement should be irreducible. The proof of the existence of efficient splitting systems then requires some more effort than in the manifold case. In addition, finiteness of the splitting process in Theorem 0.2 does not follow directly from the existence of finite efficient splitting systems. Concerning uniqueness, we have the following:
(¬d) There are three different types of connected sums of orbifolds, and each has its own identity, different from the other ones. In a sequence of connected sums of irreducible 3-orbifolds, the property that a certain sum is "trivial" depends on the order in which the sums are performed.
The same notion of efficient system used to prove the existence of a splitting actually allows us to deal also with this problem, and hence to prove uniqueness in Theorem 0.1. The same argument employed for uniqueness also proves finiteness of the splitting process in Theorem 0.2.

Efficient connected sums and splitting systems
In this section we introduce the necessary terminology and we provide alternative statements of Theorems 0.1 and 0.2.

Local structure of orbifolds
We will not cover here the general theory of orbifolds, referring the reader to the milestone [10], to the excellent and very recent [2], and to the comprehensive bibliography of the latter. We just recall that an orbifold of dimension n is a topological space with a singular smooth structure, locally modelled on a quotient of R^n under the action of a finite group of diffeomorphisms. We will only need to refer to the cases n = 2 and n = 3, and we will confine ourselves to orientation-preserving diffeomorphisms. In addition, all our orbifolds will be compact. Under these assumptions one can see that a 2-orbifold Σ consists of a compact support surface |Σ| together with a finite collection S(Σ) of points in the interior of |Σ|, the cone points, each carrying a certain order in {p ∈ N : p ≥ 2}. Analogously, a 3-orbifold X is given by a compact support 3-manifold |X| together with a singular set S(X). Here S(X) is a finite collection of circles and unitrivalent graphs tamely embedded in |X|, with the univalent vertices given by the intersection with ∂|X|. Moreover each component of S(X) minus the vertices carries an order in {p ∈ N : p ≥ 2}, with the restriction that the three germs of edges incident to each vertex should have orders (2, 2, p), for arbitrary p, or (2, 3, p), for p ∈ {3, 4, 5}.

Bad, spherical, and discal orbifolds
An orbifold-covering is a map between orbifolds locally modelled on a map of the form R^n/∆ → R^n/Γ, naturally defined whenever ∆ < Γ < Diff^+(R^n). An orbifold is called good when it is orbifold-covered by a manifold, and bad when it is not good. In the sequel we will need the following easy result:
Lemma 1.1. The only bad closed 2-orbifolds are (S^2; p), the 2-sphere with one cone point of order p, and (S^2; p, q), the 2-sphere with two cone points of distinct orders p ≠ q.
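As standard background to Lemma 1.1 and to the vertex-order constraint above (material from Thurston's notes, not part of this paper's argument): the triples (2, 2, p), (2, 3, 3), (2, 3, 4), (2, 3, 5) are exactly the rotation-order triples of the non-cyclic finite subgroups of SO(3), and the relevant 2-orbifolds can be sorted by their orbifold Euler characteristic,

```latex
% Standard background, not from this paper: the orbifold Euler
% characteristic of a 2-sphere with cone points of orders p_1,...,p_k.
\chi^{\mathrm{orb}}\bigl(S^2;\,p_1,\dots,p_k\bigr)
  \;=\; 2-\sum_{i=1}^{k}\Bigl(1-\frac{1}{p_i}\Bigr),
\qquad\text{e.g.}\quad
\chi^{\mathrm{orb}}(S^2;p)=1+\frac1p,\qquad
\chi^{\mathrm{orb}}(S^2;p,p)=\frac2p .
```

All the 2-orbifolds of Lemma 1.1 have positive Euler characteristic, like the spherical ones; what makes them bad is that, unlike (S^2; p, p) = S^2/Z_p or the quotients of S^2 by the triangle groups above, they are not global quotients of S^2.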
We now introduce some notation and terminology repeatedly used below. We define D^3_o to be D^3, the ordinary discal 3-orbifold, D^3_c(p) to be D^3 with singular set a trivially embedded arc with arbitrary order p, and D^3_v(p, q, r) to be D^3 with singular set a trivially embedded "Y-graph" with edges of orders p, q, r. We will call D^3_c(p) and D^3_v(p, q, r) respectively cyclic discal and vertex discal 3-orbifolds, and we will employ the shortened notation D^3_c and D^3_v to denote cyclic and vertex discal 3-orbifolds with generic orders. We also define the ordinary, cyclic, and vertex spherical 2-orbifolds, denoted respectively by S^2_o, S^2_c(p), and S^2_v(p, q, r), as the 2-orbifolds bounding the corresponding discal 3-orbifolds D^3_o, D^3_c(p), and D^3_v(p, q, r). We also define the ordinary, cyclic, and vertex spherical 3-orbifolds, denoted respectively by S^3_o, S^3_c(p), and S^3_v(p, q, r), as the 3-orbifolds obtained by mirroring the corresponding discal 3-orbifolds D^3_o, D^3_c(p), and D^3_v(p, q, r) in their boundary. The spherical 2- and 3-orbifolds with generic orders will be denoted by S^2_* and S^3_*.

2-suborbifolds and irreducible 3-orbifolds
We say that a 2-orbifold Σ is a suborbifold of a 3-orbifold X if |Σ| is embedded in |X| so that |Σ| meets S(X) transversely (in particular, it does not meet the vertices), and S(Σ) is given precisely by |Σ| ∩ S(X), with matching orders. A spherical 2-suborbifold Σ of a 3-orbifold X is called essential if it does not bound in X a discal 3-orbifold. A 3-orbifold is called irreducible if it does not contain any bad 2-suborbifold and every spherical 2-suborbifold is inessential (in particular, it is separating). If a 3-orbifold X is bounded by spherical 2-orbifolds, we can canonically associate to X a closed orbifold by attaching the appropriate discal 3-orbifold to each component of ∂X. We say that this closed orbifold is obtained by capping X. Whenever a 3-orbifold is not irreducible, we can select an essential spherical 2-suborbifold Σ, split X along Σ, and cap the result. A naïve statement of the theorem we want to prove would be that the successive process of splitting and capping comes to an end in finite time, and the result is a collection of irreducible 3-orbifolds, which depends on X only. With the appropriate assumptions and choices of the essential spherical 2-orbifolds this is eventually true (Theorem 0.2) but, just as in the case of manifolds, the statement and proof are best understood by performing a simultaneous splitting along a system of spherical 2-orbifolds, rather than a successive splitting along single ones.

Connected sum
To motivate the statement of the main result given below, we reverse our point of view. The remark here is that, if X can be turned into a collection of irreducible 3-orbifolds by the "split and cap" strategy mentioned above, then X can be reconstructed from the same collection by "puncture and glue" operations. As in the case of manifolds, each such operation will be called a connected sum. But the world of orbifolds is somewhat more complicated than that of manifolds, so we need to give the definition with a little care. Let X_1 and X_2 be 3-orbifolds, and pick points x_i ∈ |X_i|, such that one of the following holds:
(i) Both x_1 and x_2 are non-singular points.
(ii) Both x_1 and x_2 are singular but not vertex points, and they belong to singular edges of the same order p.
(iii) Both x_1 and x_2 are vertex singular points, and the triple p, q, r of orders of the incident edges is the same for x_1 and x_2.
We can then remove from |X_i| a regular neighbourhood of x_i, and glue together the resulting orbifolds by a homeomorphism which matches the singular loci. The result is a 3-orbifold X that we call a connected sum of X_1 and X_2. Depending on which of the conditions (i), (ii), or (iii) is met, we call the operation (and its result) a connected sum of ordinary type, of cyclic type (of order p), or of vertex type (of order p, q, r).

Trivial connected sums. In the case of manifolds, there is only one type of operation of connected sum, and its only identity element is the ordinary 3-sphere. For 3-orbifolds we have the following easy fact: the unique identity element of the ordinary connected sum (respectively, of the order-p cyclic connected sum, and of the order-(p, q, r) vertex connected sum) is S^3_o (respectively, S^3_c(p), and S^3_v(p, q, r)). We can then define to be trivial an operation of connected sum with the corresponding identity element. However, when one considers a sequence of operations of connected sums, one must be careful, because a sum which appears to be non-trivial in the first place may actually turn out to be trivial later on, as the following example, suggested by Joan Porti, shows.

Example 1.3. Let K be any non-trivial knot in S^3, and let p, q ≥ 2 be distinct integers. Construct a 3-orbifold X by first taking the ordinary connected sum of S^3_c(p) and S^3_c(q), and then taking the order-p cyclic connected sum of the result with (S^3, K_p), the orbifold with support S^3 and singular set K of order p. Then both sums are non-trivial. However, if we reverse the order, we can realize the same X as the order-p cyclic sum of S^3_c(p) and (S^3, K_p), followed by the ordinary sum with S^3_c(q), and the first sum is now trivial.

This phenomenon is of course a serious obstacle to proving that each 3-orbifold has a unique expression as a connected sum of irreducible ones. To overcome this obstacle, we need to redefine the notion of trivial connected sum.

Graphs and efficient connected sums. We consider in this paragraph successive connected sums X_0 # ... # X_n, stipulating with this notation that the first sum is between X_1 and X_0, the second sum is between X_2 and X_0 # X_1, and so on. Note first that to perform the j-th sum we can always arrange the puncture made to X_0 # ... # X_{j−1} to be disjoint from the spherical 2-orbifolds along which the punctured X_0, ..., X_{j−1} have been glued. This implies that X_j is glued to precisely one of X_0, ..., X_{j−1}. We can then construct a graph associated to the connected sum, with nodes the X_j's, and edges labelled by the types of sums performed. Note that this graph is actually a tree. We will say that a connected sum X_0 # ... # X_n is efficient if the X_j's are irreducible and in its graph there is no node S^3_o with an incident edge of ordinary type, there is no node S^3_c with an incident edge of cyclic type, and there is no node S^3_v with an incident edge of vertex type, see Fig. 1.

Figure 2: Moves relating two graphs of the same connected sum.

One should notice that the graph associated to X_0 # ... # X_n is not quite unique, since to construct it one needs to isotope each puncture away from the previous ones, as described above. However, one easily sees that two graphs of X_0 # ... # X_n differ by moves as in Fig. 2, and one deduces that the notion of efficient connected sum is indeed well-defined. The following is an alternative statement of our main result:

Theorem 1.4.
Let X be a closed locally orientable 3-orbifold. Suppose that X does not contain any bad 2-suborbifold, and that every spherical 2-suborbifold of X is separating. Then X can be realized as an efficient connected sum of irreducible 3-orbifolds. Any two realizations involve the same irreducible summands and the same types of sums (including orders).

Efficient splitting systems. As in the case of manifolds, to establish the existence part of Theorem 1.4 one proceeds with the original strategy of "split and cap" first mentioned above, but one has to split simultaneously along a (finite) system of disjoint spherical 2-orbifolds. To this end we give the following definitions. We say that a 3-orbifold is punctured discal if it is obtained from some D^3_* by removing a regular neighbourhood of a finite set. A system S of spherical 2-suborbifolds of X is called essential if no component of X \ S is punctured discal, and it is called coirreducible if all the capped components of X \ S are irreducible. We call efficient splitting system a finite system of spherical 2-suborbifolds of X which is both essential and coirreducible.

Remark 1.5. To each system S of separating spherical 2-suborbifolds there corresponds a realization of X as a connected sum of the capped components of X \ S. Moreover S is efficient if and only if the connected sum is.

The result we will establish directly below, which is obviously equivalent to Theorems 0.1 and 1.4, and will be used to deduce Theorem 0.2, is the following:

Theorem 1.6. Let X be a closed locally orientable 3-orbifold. Suppose that X does not contain any bad 2-suborbifold, and that every spherical 2-suborbifold of X is separating. Then X admits efficient splitting systems. Any two efficient splitting systems coincide as abstract collections of spherical 2-orbifolds, and the capped components of their complements also coincide.

Comparison with the manifold case

We illustrate in greater detail in this section why it has been necessary to substantially modify the proof of the splitting theorem in passing from the manifold to the orbifold case. Concerning the issue of non-separating spherical 2-orbifolds (point (a) of the Introduction) we begin by recalling what is the point for an orientable manifold M. If Σ ⊂ M is a non-separating sphere, one chooses a simple closed curve α meeting Σ once and transversely and one takes the boundary Σ′ of a regular neighbourhood of Σ ∪ α. Then Σ′ is again a sphere, it is separating, and one of the capped components of M \ Σ′ is S^2 × S^1 or the twisted S^2-bundle over S^1, so M has one such connected summand. The very basis of this argument breaks down for orbifolds, because if Σ is a spherical 2-orbifold of cyclic or vertex type, then Σ′ is not spherical at all (see Fig. 3). We will devote Section 5 to showing that indeed there is a wealth of 3-orbifolds which contain non-separating spherical 2-orbifolds but cannot be expressed as a non-trivial connected sum. Turning to points (b) and (c) of the Introduction, recall that a family S of spherical 2-suborbifolds of a 3-orbifold X is called essential if no component of X \ S is a punctured discal 3-orbifold. To prove finiteness of a maximal essential family using the orbifold analogue of Haken's normal surface theory, one should be able to prove that the property of being essential is stable under the "normalization moves." Now one can see (as we will in Section 3) that the normalization moves boil down to isotopy and compression along ordinary discs (i.e. discs without singular points).
And we have the following:

Example 2.1. Let a 3-orbifold X contain a system S of three cyclic spherical 2-suborbifolds of orders p, p, and q, for some p, q ≥ 2. Suppose that one of the components of X \ S is as shown in Fig. 4 (an ordinary connected sum of S^3_c(p) and S^3_c(q), with two cyclic punctures of order p and one of order q). Suppose that the other components of X \ S are once-punctured irreducible non-spherical 3-orbifolds. Let S′ and S″ be the systems obtained from S by compression along the ordinary disc D also shown in Fig. 4. Then X does not contain any bad 2-suborbifold, every spherical 2-suborbifold of X is separating, S is essential, S′ and S″ are systems of spherical 2-orbifolds, and they are both inessential in X.

The failure of the direct orbifold analogue of normal surfaces to establish existence of a maximal essential system may appear to be a technical difficulty, only calling for a different proof of existence. But the next example, which refers to point (c) in the Introduction, shows that the notion itself of essential system does not do the job it was designed for:

Figure 5: The capped components of the complement of a maximal essential system need not be irreducible.

Example 2.2. Let a 3-orbifold X contain a system S of one cyclic and two vertex spherical 2-suborbifolds, of orders p and (p, q, r) (twice), for some admissible p, q, r ≥ 2. Suppose that one of the components of X \ S is as shown in Fig. 5, that is, S^3_v(p, q, r) with two vertex punctures and one order-p cyclic puncture. Let S′ be given by the two vertex components of S. Suppose that the other components of X \ S are once-punctured irreducible non-spherical 3-orbifolds. Then X does not contain any bad 2-suborbifold, every spherical 2-suborbifold of X is separating, S′ is a maximal essential system of spherical 2-orbifolds in X, but not all the capped components of X \ S′ are irreducible. To understand the last two assertions, note that S is not essential, because the 3-orbifold in Fig. 5 is indeed punctured discal, but the cyclic component of S is essential after cutting along S′ and capping, because it corresponds to a cyclic connected sum with S^3_v(p, q, r), which is non-trivial. It is perhaps worth pointing out the basic fact underlying the previous example, namely that a punctured spherical 3-orbifold is a discal one if and only if the puncture has the same type as the original 3-orbifold.

Concerning uniqueness (point (d) of the Introduction), all we had to say about its statement was said in Section 1. About its proof, we only note that Milnor's proof for manifolds was based on the same method proving existence, i.e. on the notion of essential spherical system, so of course our proof will be somewhat different.

Existence of efficient splitting systems and algorithmic aspects

In this section we prove the existence parts of Theorems 1.6 and 1.4. We do so by showing that the process described in Theorem 0.2 can be carried out with all the components of the splitting system simultaneously normal with respect to a fixed triangulation.

Triangulations and normal 2-orbifolds. We slightly relax the definition given in [2], and define a triangulation of a 3-orbifold X as a triangulation of |X| which contains S(X) as a subcomplex of its one-skeleton. We then define a 2-suborbifold of X to be normal with respect to a triangulation if it intersects each tetrahedron in a union of squares and triangles, just as in the case of a surface in a triangulated manifold.
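Since the arguments below count normal pieces, it may help to recall the standard coordinate bookkeeping of Haken's theory, which the above definition transfers to orbifolds. The sketch below is a generic illustration with our own hypothetical type and function names; it is not code from the paper, and it records only the usual combinatorial admissibility conditions (non-negative coordinates, and at most one square type per tetrahedron).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TetCoordinates:
    triangles: List[int]  # 4 entries: one per vertex-linking triangle type
    quads: List[int]      # 3 entries: one per square (quadrilateral) type

def is_admissible(surface: List[TetCoordinates]) -> bool:
    """Check the standard admissibility conditions for normal coordinates."""
    for tet in surface:
        if len(tet.triangles) != 4 or len(tet.quads) != 3:
            return False
        if any(c < 0 for c in tet.triangles + tet.quads):
            return False
        # Two distinct square types in one tetrahedron would intersect,
        # so at most one of the three may be non-zero.
        if sum(1 for q in tet.quads if q > 0) > 1:
            return False
    return True

# A vertex-linking piece (all four triangle types, no squares) is admissible:
print(is_admissible([TetCoordinates([1, 1, 1, 1], [0, 0, 0])]))  # True
# Two different square types in the same tetrahedron are not:
print(is_admissible([TetCoordinates([0, 0, 0, 0], [1, 2, 0])]))  # False
```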
The following fundamental result holds:

Proposition 3.1. Let X be a triangulated 3-orbifold. Then there exists a number m such that, if F is a normal surface with respect to the given triangulation and F has more than m connected components, then at least one of the components of X \ F is a product orbifold.

This result is established exactly as in the case of manifolds, so we omit the proof. One only needs to notice that if a region of X \ F is a product as a manifold then it is also a product as an orbifold.

Normalization moves. We now recall that, in a manifold with a fixed triangulation T, there is a general strategy to replace a surface by a normal one, as explained for instance in [7]. When the surface is a sphere Σ, this strategy consists (besides isotopy relative to T) of three "normalization" moves. The first (respectively, second) move consists in compressing Σ along a disc D contained in the interior of a tetrahedron (respectively, triangle) of T, thus getting two spheres Σ′ and Σ″, and choosing either Σ′ or Σ″. The third move is used to remove the double intersections of the normal discs with the edges, and it is illustrated in Fig. 6. As a matter of fact, this move can also be seen as compression along a disc followed by the choice of one of the resulting spheres. More precisely, one compresses along the disc in Fig. 6-right and one discards the sphere given by the union of the two discs in Fig. 6. Therefore, the third move consists of the compression along a disc disjoint from the 1-skeleton of T, followed by the elimination of a sphere which is the boundary of a regular neighbourhood of an interior point of an edge of T.

Stepwise normalization. As already mentioned, to prove the existence of an efficient splitting system we will construct a system of spherical 2-orbifolds along the steps of Theorem 0.2, making sure each new component of the system is normal with respect to a fixed triangulation of the original orbifold. To show that this is possible, we start with an easy lemma, that we will use repeatedly below. For its statement, we stipulate that a spherical 2-orbifold of cyclic type is more complicated than one of ordinary type, and that a spherical 2-orbifold of vertex type is more complicated than one of ordinary or cyclic type.

Lemma 3.2. Let ∆ be a punctured discal 3-orbifold, and let Σ be a component of ∂∆. Suppose that no component of ∂∆ is more complicated than Σ. Then capping all the components of ∂∆ except Σ we get a discal 3-orbifold.

The next two results are the core of our argument on stepwise normalization. The first one will be used again in the next section. From now on we fix a 3-orbifold X and we suppose that X does not contain any bad 2-suborbifold, and that every spherical 2-suborbifold of X is separating.

Proposition 3.3. Let S be an essential system of spherical 2-suborbifolds of X. Let Y be a component of X \ S and Σ be a spherical 2-suborbifold of Y. Suppose that:

• Σ is essential in Y.
• All the components of S are at most as complicated as Σ.

Then S ∪ Σ is essential.

Proof. Suppose that some component Z of X \ (Σ ∪ S) is punctured discal. The assumptions that S is essential and that Σ is separating imply that Z is contained in Y and Σ is a boundary component of Z. By Lemma 3.2 and the second assumption, capping all the components of ∂Z but Σ we get a discal 3-orbifold, which implies that Σ bounds a discal 3-orbifold in Y: a contradiction to the first assumption.

Proposition 3.4. In the setting of Proposition 3.3, suppose moreover that X is triangulated by T, that S is in normal position with respect to T, and that:

• Σ is essential in Y.
• All the spherical 2-suborbifolds of Y strictly less complicated than Σ are inessential in Y.

Then Σ can be replaced by a spherical 2-orbifold satisfying the same properties and such that S ∪ Σ is in normal position with respect to T.

Proof. We must show that a normalization move applied to Σ can be realized without touching S, without changing the type of Σ, and preserving the property that Σ is essential in Y. The first assertion is a straightforward consequence of the description of the normalization moves given above. For the second and third assertions, we first note that in all cases the compression of a normalization move takes place along a disc disjoint from the 1-skeleton of T, whence along an ordinary (non-singular) disc D which meets Σ ∪ S in ∂D ⊂ Σ only. To fix the notation, let D′ and D″ be the discs bounded by ∂D on Σ, and define Σ′ = D ∪ D′ and Σ″ = D ∪ D″. By the assumptions that X contains no bad 2-suborbifolds and that D is an ordinary disc, we can (and will) assume up to permutation that Σ″ is an ordinary sphere. We now claim that Σ′ or Σ″ is essential in Y. Before proving the claim, let us see how it implies the conclusion. Since Σ″ is ordinary, we have Σ′ ≅ Σ. If Σ′ is essential, we are done. Otherwise Σ″ is essential (and ordinary), so Σ is also ordinary by the second assumption, and we can switch Σ′ and Σ″. To prove the claim, suppose by contradiction that Σ′ and Σ″ are both inessential in Y. Since Σ″ is ordinary, we deduce that Σ bounds in Y either a discal 3-orbifold glued along a 2-disc to an ordinary 3-disc, or a 3-orbifold obtained from a discal one by removing an ordinary 3-disc which intersects the boundary in a 2-disc. The result is in both cases a discal 3-orbifold (isomorphic to the original one), whence the conclusion.

The result just proved easily implies the following:

Corollary 3.5. If X is triangulated then one can carry out the process of Theorem 0.2 with the splitting system S in normal position at each step.

This corollary, together with Propositions 3.1 and 3.3, immediately yields the result we were heading for:

Corollary 3.6. X admits finite efficient splitting systems.

As promised in the Introduction, we explain here why it is crucial for us to assume that all the spherical 2-suborbifolds of X are separating. First of all, the definition itself of essential system would not be appropriate without this assumption, because a non-separating spherical 2-orbifold could constitute an inessential system. In addition, the assumption was used in a fundamental way in the proof of Proposition 3.3. Proposition 3.4 also has the next consequence, which proves the last assertion of Theorem 0.2:

Corollary 3.7. Let Z be a triangulated orbifold and let Σ be an essential spherical 2-suborbifold of Z. Suppose that all the spherical 2-suborbifolds of Z strictly less complicated than Σ are inessential. Then there exists an essential spherical 2-suborbifold of Z of the same type as Σ and in normal position with respect to the fixed triangulation.

Uniqueness of the splitting

In this section we conclude the proofs of Theorems 1.6, 1.4, and 0.2. As above we fix a 3-orbifold X and we suppose that X does not contain any bad 2-suborbifold, and that every spherical 2-suborbifold of X is separating.

Gluing of discal 3-orbifolds. Our uniqueness result uses the following two easy technical lemmas. We include a proof of the second one, leaving to the reader the even easier proof of the first one.

Lemma 4.1.
Let W be a 3-orbifold and let D ⊂ ∂|W| be a disc. Let ∆ be a punctured discal 3-orbifold. Let Σ be a component of ∂∆ of the same type as ∆. Split |Σ| along a loop into two discs D′ and D″, and suppose that D, D′, D″ are isomorphic to each other, with at most one singular point. Let W′ be obtained from W by gluing ∆ along a homeomorphism D′ → D. Then:

(1) W′ is punctured discal if and only if W is;
(2) the orbifolds obtained by capping W′ and W are isomorphic.

Lemma 4.2. Let W be a 3-orbifold with a spherical boundary component Σ. Let ∆ be a punctured discal 3-orbifold with at least two boundary components Σ′, Σ″ isomorphic to Σ, and all other components not more complicated than Σ′ and Σ″. Let W′ be obtained from W by gluing ∆ along a homeomorphism Σ′ → Σ. Then:

(1) W′ is punctured discal if and only if W is;
(2) the orbifolds obtained by capping W′ and W are isomorphic.

Proof. We begin by (2). Since Σ′ and Σ″ are at least as complicated as all the other components of ∂∆, capping the components different from Σ′ and Σ″ we get a product orbifold, which implies the conclusion at once. Let us turn to (1). To be punctured discal of type t for a 3-orbifold Z means that the capped orbifold is S^3_t and some component of ∂Z, or more precisely the most complicated one, is S^2_t. Now we already know from (2) that the capped orbifolds coincide, and the assumptions imply that the most complicated component of ∂W′ has the same type as the most complicated component of ∂W, whence the conclusion.

Finiteness and uniqueness: scheme of the proof. To conclude the proofs of the results stated so far we must establish the finiteness of the splitting process of Theorem 0.2 and the uniqueness part of Theorems 0.2, 1.4, and 1.6. The former easily follows from the next result, to be proved below, and Proposition 3.3.

Proposition 4.3. Let S and S′ be finite systems of spherical 2-suborbifolds of X. Denote by n_t (respectively, n′_t) the number of components of S (respectively, S′) of type t. Suppose that S is coirreducible and S′ is essential. Then n′_t ≤ n_t for every type t.

Uniqueness easily follows from the next result (for Theorem 0.2 one only has to note that the final S is obviously coirreducible, and essential by Proposition 3.3).

Moves for spherical systems. We will prove Propositions 4.3 and 4.4 in a unified manner. The underlying idea in both cases is that by certain moves we can modify S into a superset of S′, leaving unaffected the properties of S and the topological types of S and of the capped components of X \ S as abstract orbifolds. There are two types of move we will need, that we now describe. These moves apply to any spherical system S ⊂ X, under the appropriate assumptions.

Move α. Suppose we have a component Σ_0 of S and a disc D_1 properly embedded in a component Y_0 of X \ S, with ∂D_1 ⊂ Σ_0. Let ∂D_1 split Σ_0 into discs D_0 and D, and suppose that D_0 and D_1 both have at most one singular point. Assume that the component of Y_0 \ D_1 containing D_0 is a punctured discal 3-orbifold ∆, and that ∂∆ does not have components more complicated than D_0 ∪ D_1. Then we replace Σ_0 = D ∪ D_0 by Σ_1 = D ∪ D_1. See Fig. 7.

Move β. Suppose we have a component Σ_0 of S and a spherical 2-orbifold Σ_1 contained in a component Y_0 of X \ S, with Σ_0 ⊂ ∂Y_0. Suppose that the component of Y_0 \ Σ_1 containing Σ_0 is a punctured discal 3-orbifold ∆, that Σ_0 and Σ_1 have the same type, and that no other component of ∂∆ is more complicated than Σ_0 and Σ_1. Then we replace Σ_0 by Σ_1. See Fig. 8.

Lemma 4.5. Under any move of type α or β the following happens:

• The type of S (including orders) does not change.
• The types of the capped components of X \ S do not change (in particular, coirreducibility is preserved).
• If S is essential, then it remains essential.

Proof. The first assertion is proved by direct examination of Figs. 7 and 8.
To prove the second and third assertions we note that the only components of X \ S modified by the move are Y_0 and the other component Z_0 incident to Σ_0, and that these components are replaced by the components Y_1 and Z_1 incident to Σ_1. The relations between them are easily described by inspecting Figs. 7 and 8, and it is then easy to check that in all cases we are under the assumptions of Lemmas 4.1 and 4.2, whence the conclusion at once.

Finiteness and uniqueness: conclusion. As already mentioned, to prove Propositions 4.3 and 4.4 we want to replace S by a superset of S′. The first step is as follows:

Proposition 4.6. Suppose that S is coirreducible and S′ is essential. Then, up to applying moves α and isotopy to S, we can suppose that S ∩ S′ = ∅.

Proof. Let us first isotope S so that it is transversal to S′. Of course it is sufficient to show that if S ∩ S′ is non-empty then the number of its components can be reduced by a move α and isotopy. Suppose some component Σ′ of S′ meets S, and note that the finite set of loops S ∩ Σ′ bounds at least two innermost discs in Σ′. Since Σ′ has at most three singular points, we can choose an innermost disc D_1 which contains at most one singular point. Let Y_0 be the component of X \ S which contains D_1, let Σ_0 be the component of S containing ∂D_1, and denote by D_0 and D the discs into which ∂D_1 cuts Σ_0, with D_0 at most as complicated as D. The assumption that X contains no bad 2-suborbifolds easily implies that D_0 ∪ D_1 is a spherical 2-orbifold of ordinary or cyclic type. Coirreducibility of S then implies that D_0 ∪ D_1 bounds an ordinary or cyclic discal 3-orbifold B in Y_0. Now we either have B ∩ D = ∅ or B ⊃ D. In the first case it is easy to see that we are precisely in the situation where we can apply a move α, after which a small isotopy allows us to reduce the number of components of S ∩ S′. In the second case D_1 ∪ D bounds in Y_0 one of the orbifolds obtained from B by cutting along D. Since B is an ordinary or cyclic discal 3-orbifold we deduce that D is isomorphic to D_1 and that D ∪ D_1 bounds a ball in Y_0 disjoint from D_0. We can then interchange the roles of D and D_0 and apply α also in this case, whence the conclusion.

Proposition 4.7. Suppose that S is coirreducible and S′ is essential. Assume that each component of S′ is either contained in S or disjoint from S. Then:

• Up to applying moves β and isotopy to S, we can suppose that each ordinary component of S′ is contained in S.
• If S and S′ have the same ordinary components, up to applying moves β and isotopy to S, we can suppose that each cyclic component of S′ is contained in S.
• If S and S′ have the same ordinary and cyclic components, up to applying moves β and isotopy to S, we can suppose that each vertex component of S′ is contained in S.

Proof. In all the cases, we suppose that a certain component Σ_1 of S′ of the appropriate type is not contained in S, and we show that a move β can be applied to S which replaces some component Σ_0 by Σ_1. The conclusion then easily follows by iteration. In all cases we denote by Y_0 the component of X \ S which contains Σ_1, and we choose Σ_0 among the components of ∂Y_0. We start with the first assertion, so we assume Σ_1 is ordinary. Coirreducibility of S implies that Σ_1 bounds an ordinary 3-disc B in Y_0. Let us choose a component of S′ contained in B, not contained in S, and innermost in B with respect to these properties, and let us rename it Σ_1. (Note that the new Σ_1 may not be parallel to the old one in X, even if it is in B.)
Of course Σ_1 is again an ordinary sphere, and it bounds in Y_0 a punctured ordinary discal 3-orbifold ∆ such that all the components of ∂∆ except Σ_1 belong to S. Essentiality of S′ now implies that there must be a component Σ_0 of ∂∆ which does not belong to S′. We have created a situation where a move β can be applied, so we are done. The proof is basically the same for the second and third assertions. We only need to note that B always has the same type as Σ_1 and that the component Σ_0 of ∂∆ not belonging to S′ cannot be of type strictly less complicated than Σ_1, by the assumption that S and S′ have the same components of type strictly less complicated than that of Σ_1.

Prime non-irreducible 3-orbifolds

In this section we treat point (a) of the Introduction, showing that there are infinitely many 3-orbifolds with non-separating but no essential separating spherical 2-suborbifolds. Our examples arose from discussions with Luisa Paoluzzi. As in the case of manifolds, one can define a prime orbifold as one that cannot be expressed as a non-trivial connected sum, i.e. an orbifold in which every separating spherical 2-suborbifold bounds a discal 3-suborbifold. Of course every irreducible orbifold is also prime, but we will show now that there are infinitely many primes which are not irreducible. This appears as a sharp difference between the orbifold and the manifold case (in which the only non-irreducible primes are the two S^2-bundles over S^1). This difference represents a serious obstacle to promoting Theorem 1.4 to a unique decomposition theorem for orbifolds into prime ones.

Proposition 5.1. Let X be a 3-orbifold bounded by a sphere with 4 cone points of the same order p. Assume that:

• X is irreducible.
• X is not the 3-ball with two parallel unknotted singular arcs.
• In X there is no sphere with three singular points which meets ∂X in a disc with two singular points.

Consider the orbifold with support S^2 × S^1 and singular set given by two circles of order p parallel to the factor S^1. Remove from it a regular neighbourhood of an arc which joins the singular components within a level sphere S^2 × {∗}, and call Y the result. Let Z be obtained by attaching X and Y along their boundary spheres, matching the cone points. Then Z is prime but not irreducible.

Proof. A level sphere S^2 × {∗} contained in Y gives a non-separating spherical 2-suborbifold of Z, so Z is not irreducible. Let us consider a separating spherical 2-orbifold Σ in Z, and the intersection of Σ with the sphere Σ_0 along which X and Y have been glued together. If Σ ∩ Σ_0 is empty then Σ bounds a discal 3-orbifold, because X and Y are irreducible. Let us then assume Σ ∩ Σ_0 to be transverse and minimal up to isotopy of Σ. Considering the pattern of circles Σ ∩ Σ_0 on Σ we see that Σ contains at least two innermost discs, which are therefore contained either in X or in Y. Since Σ has at most three singular points, we can find one such disc D with i ≤ 1 singular points. Now consider the loop ∂D on Σ_0, and recall that Σ_0 has 4 singular points. Then ∂D encircles e ≤ 2 singular points. We will now show that both if D ⊂ X and if D ⊂ Y the conditions i ≤ 1 and e ≤ 2 are impossible, whence the conclusion. Suppose first that D ⊂ X. Using the fact that X is irreducible, the cases e = i = 0 and e = i = 1 both contradict the minimality of Σ ∩ Σ_0. The cases e = 0, i = 1 and e = 1, i = 0 both lead to a bad 2-suborbifold of X, which does not exist.
In the case e = 2, i = 0, using irreducibility twice, we see that X must be the 3-ball with two parallel unknotted singular arcs, but this orbifold was specifically forbidden, so indeed we get a contradiction. The case e = 2, i = 1 was also specifically forbidden. Suppose now that D ⊂ Y. Recalling that D is contained in a sphere which is separating in Z, we see that D cobounds a ball with a disc contained in ∂Y. Using this fact we can prove again that all the possibilities with e ≤ 2 and i ≤ 1 are impossible. As above, e = i = 0 and e = i = 1 contradict minimality. The case e = 2, i = 0 would imply that one of the singular arcs of Y can be homotoped to ∂Y, which is not the case. The other cases are absurd for similar reasons.

Examples. Whenever k ≥ 1 and the cone angles are chosen in an admissible fashion, the "stairway" of Fig. 9 contained in the 3-ball provides an example of orbifold X satisfying the assumptions of Proposition 5.1.

Figure 9: An example of orbifold X such that Z = X ∪ Y is prime but not irreducible.

Note that for such an X the corresponding prime non-irreducible orbifold Z is always supported by S^2 × S^1, so |Z| is prime as a manifold. But we can also construct examples of X to which Proposition 5.1 applies, leading to an orbifold Z with non-prime |Z|. For instance, one can take an orbifold constructed exactly as Y was constructed from S^2 × S^1, starting instead with F × S^1, where F is a closed orientable surface of positive genus.

3-orbifolds with ordinary and cyclic splitting

Our interest in the spherical splitting of 3-orbifolds arose from our program of extending Matveev's complexity theory [5,6] from the manifold to the orbifold context. We actually succeeded [9] in generalizing the definition and some of the most significant results, investigating in particular the behaviour under connected sum. However it turns out that one does not have in general plain additivity as in the manifold case. We can only prove additivity when there are no vertex connected sums, and complexity is additive only up to a certain correction summand on cyclic connected sums involving at least one purely cyclic singular component. For this reason in this section we prove that the number of such sums is well-defined:

Proposition 6.1. Suppose that a 3-orbifold X is a connected sum of irreducible orbifolds without vertex connected sums, and fix an efficient realization ρ of X as X_0 # ... # X_n. For all p ≥ 2 let ν_ρ(p) be the number of p-cyclic connected sums in ρ involving at least one singular component without vertices. Then ν_ρ(p) is independent of ρ, so it is a function of X only.

Proof. Two different realizations of X are related by the following operations:

• Reordering of the X_i's, without modification of the efficient system of spherical 2-orbifolds along which X is split.
• Modification of the splitting system according to one of the moves α and β described in Section 4 (see Figs. 7 and 8).

We will show that ν_ρ(p) remains unchanged under both operations. To deal with the first operation, we define a modified version T_p of the tree associated to the realization X = X_0 # ... # X_n. To construct T_p we associate to each X_i one node for each cyclic component of S(X_i) and one for the union of the non-cyclic components of S(X_i). We denote by C the set of nodes of cyclic type and by N the set of nodes of non-cyclic type.
We now define T_p by taking an edge for each p-cyclic connected sum, with the ends of the edge joining the singular components involved in the sum. The ideas behind the construction of T_p are as follows. First, we cannot create a new p-cyclic singular component by performing an ordinary connected sum or a q-cyclic connected sum for q ≠ p. Second, a cyclic connected sum between two cyclic components gives one cyclic component, while a cyclic connected sum between a non-cyclic component and any other one gives one or two non-cyclic components. With these facts in mind, it is very easy to see that the invariance of ν_ρ(p) under the first move is implied by the following graph-theoretical statement: Let T be a tree with set of nodes C ⊔ N. Let σ = (e_1, ..., e_n) be an ordering of the edges of T. For k = 1, ..., n define α(k) to be 0 if both the ends of e_k can be joined to nodes in N by edges in e_1, ..., e_{k−1} only, and to be 1 otherwise. Then the sum α(1) + ... + α(n) is independent of σ.

To prove this fact, we first reduce to an easier statement. We begin by noting that the vertices in N having valence more than 1 can be blown up as in Fig. 10 without affecting the sum in question. This allows us to assume that all the nodes in N are "external," i.e. 1-valent. Moreover, using induction, we can suppose that N consists precisely of the external nodes, because if an edge e has an end which is external and belongs to C then α(e) = 1 whatever the position of e in the ordering, and the value of α on the remaining edges does not depend on the position of e in the ordering. Switching from α(e) to β(e) = 1 − α(e) it is then sufficient to establish the following result. The proof we present here is due to Gwénaël Massuyeau.

Lemma 6.2. Let T be a tree. Let E denote the set of external nodes of T. Let σ = (e_1, ..., e_n) be any ordering of the edges of T. Define β(k) to be 1 if the ends of e_k can be joined to E through edges in e_1, ..., e_{k−1} only, and to be 0 otherwise. Then β(1) + ... + β(n) is independent of σ, and precisely equal to #E − 1.

Proof. Attach a circle to T by gluing the nodes in E to #E arbitrarily chosen points on the circle, and denote by Y the result. Since T is a tree, we have χ(Y) = 1 − #E. Let σ = (e_1, ..., e_n) be an ordering of the edges of T, let Y_k be the union of the circle with e_1, ..., e_k, and let Ŷ_k be the connected component of Y_k which contains the circle. Of course Ŷ_0 is the circle and Ŷ_n = Y. We now have the identity χ(Ŷ_k) = χ(Ŷ_{k−1}) − β(k), which is easily established by considering separately the cases where e_k has 0, 1, or 2 ends on Ŷ_{k−1}, and using again the fact that T is a tree. The identity implies that β(1) + ... + β(n) = χ(Ŷ_0) − χ(Ŷ_n) = 0 − (1 − #E) = #E − 1.

To conclude the proof of Proposition 6.1 we must now show that ν_ρ(p) is not affected by the moves α and β. Knowing the independence of the ordering, this is actually very easy. For both the moves, we have two realizations ρ_0 and ρ_1 of X leading to splitting systems S_0 and S_1 which differ for one component only, so S_0 = {Σ_0} ∪ S and S_1 = {Σ_1} ∪ S. We then compute ν_{ρ_j}(p) by performing first the connected sum which corresponds to Σ_j. From Figs. 7 and 8 one sees that the connected sums along Σ_0 and Σ_1 give the same contributions to ν. After performing them, all other sums in ρ_0 and ρ_1 are the same, which eventually proves the proposition.

Remark 6.3.
Proposition 6.1 is false if one also allows vertex connected sums in the realization of X, because a vertex connected sum can create singular components without vertices.
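Lemma 6.2 is easy to check by machine on small examples. The sketch below, built on a hypothetical tree with our own helper names, enumerates all orderings of the edges and verifies that the sum of the β(k) is constant and equal to #E − 1; it is an empirical illustration only, not a substitute for the Euler characteristic argument above.

```python
import itertools

# A small tree on nodes 0..5; its external (1-valent) nodes play the role of E.
edges = [(0, 1), (1, 2), (1, 3), (3, 4), (3, 5)]
nodes = {v for e in edges for v in e}
degree = {v: sum(v in e for e in edges) for v in nodes}
external = {v for v in nodes if degree[v] == 1}  # here E = {0, 2, 4, 5}

def joined_to_external(v, allowed):
    """True if v reaches an external node using only edges in `allowed`."""
    seen, stack = {v}, [v]
    while stack:
        u = stack.pop()
        if u in external:
            return True
        for a, b in allowed:
            for u2 in ((b,) if a == u else (a,) if b == u else ()):
                if u2 not in seen:
                    seen.add(u2)
                    stack.append(u2)
    return False

def beta_sum(ordering):
    total = 0
    for k, (a, b) in enumerate(ordering):
        prior = list(ordering[:k])
        # beta(k) = 1 iff both ends of e_k reach E through earlier edges only
        total += joined_to_external(a, prior) and joined_to_external(b, prior)
    return total

sums = {beta_sum(order) for order in itertools.permutations(edges)}
print(sums, "expected:", len(external) - 1)  # {3} expected: 3
```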
Mutation of His465 Alters the pH-dependent Spectroscopic Properties of Escherichia coli Glutamate Decarboxylase and Broadens the Range of Its Activity toward More Alkaline pH

Glutamate decarboxylase (GadB) from Escherichia coli is a hexameric, pyridoxal 5′-phosphate-dependent enzyme catalyzing CO2 release from the α-carboxyl group of L-glutamate to yield γ-aminobutyrate. GadB exhibits an acidic pH optimum and undergoes a spectroscopically detectable and strongly cooperative pH-dependent conformational change involving at least six protons. Crystallographic studies showed that at mildly alkaline pH GadB is inactive because all active sites are locked by the C termini and that the 340 nm absorbance is an aldamine formed by the pyridoxal 5′-phosphate-Lys276 Schiff base with the distal nitrogen of His465, the penultimate residue in the GadB sequence. Herein we show that His465 has a massive influence on the equilibrium between active and inactive forms, the former being favored when this residue is absent. His465 contributes with n ≈ 2.5 to the overall cooperativity of the system. The residual cooperativity (n ≈ 3) is associated with the conformational changes still occurring at the N-terminal ends regardless of the mutation. His465, dispensable for the cooperativity that affects enzyme activity, is essential to include the conformational change of the N termini into the cooperativity of the whole system. In the absence of His465, a 330-nm absorbing species appears, with fluorescence emission spectra more complex than model compounds and consisting of two maxima at 390 and 510 nm. Because His465 mutants are active at pH well above 5.7, they appear to be suitable for biotechnological applications.

Glutamate decarboxylase (Gad; EC 4.1.1.15) is a pyridoxal 5′-phosphate (PLP)-dependent enzyme widely distributed among living organisms (1). It catalyzes CO2 release from the α-carboxyl group of L-glutamate to yield 4-aminobutyrate (γ-aminobutyrate (GABA)). In commensal and pathogenic strains of Escherichia coli and in other enteric bacteria, like Shigella flexneri, Listeria monocytogenes, and Lactococcus lactis, Gad is a key component of the most important acid resistance system, based on glutamate. This system protects enteric bacteria from the extreme acid stress that they encounter during transit through the stomach of the host on their way to the gut (2). The system exploits the proton-consuming activity of Gad by replacing the leaving CO2 with a proton, which is irreversibly incorporated into GABA. E. coli possesses two Gad isoforms, GadA and GadB, which exhibit an acidic pH optimum (pH 3.8-4.6) and become activated intracellularly because in extremely acidic environments protons leak through the bacterial cell membrane (2). pH-dependent activation of Gad is accompanied by a distinct change in the absorption spectrum of the cofactor (3-6). At wavelengths above 300 nm, the spectrum changes from one at pH > 5.3 with the major absorbance band at 340 nm (inactive enzyme) to another at pH < 5.3 with the major band at 420 nm (active enzyme). The midpoint of the spectroscopic change is shifted from pH 5.3 to 5.8 in the presence of chloride, which is the most abundant anion in gastric secretions (3). The spectral transition exhibits a high level of cooperativity of the protonation/deprotonation process, with at least six protons being involved (3-5).
The 420-nm absorbing species is generally accepted to be the ketoenamine form of the PLP-Lys276 internal aldimine, protonated on the Schiff base nitrogen (Fig. 1A). Two structures have been proposed for the 340-nm absorbing species (Fig. 1A). O'Leary and Brummund (7, 8) assumed that it is an aldamine in which the C4′ is substituted by a cysteine residue. Tramonti et al. (5), upon mutation of Lys276, proposed that the 340-nm chromophore is the enolimine tautomer of the ketoenamine. Model studies in aqueous and nonpolar solvents have shown that aldimines and aldamines of PLP give rise to different fluorescence emission spectra when excited at 330 nm (9). The enolimine tautomer of the Schiff base with hexylamine emits maximally at 512-518 nm. Aldamines formed with histidine and cysteine show a single emission peak with a λmax value of 367-385 nm. This occurs because the C4′ of the cofactor is sp3-hybridized, and the conjugation between the double bond of the Schiff base and the pyridinium ring is lost (10). These properties were used to assign enolimine or aldamine structures in several PLP-dependent enzymes (9-15). Several structures of GadB are available: high and low pH forms, halide-bound forms, and a mutant (GadBΔ1-14) lacking the 14 N-terminal residues. Together the structures provided a plausible basis for all of the above spectroscopic properties (3, 4). Wild type GadB undergoes two significant conformational changes. The first one involves the last 14 residues of each subunit, which are flexibly disordered at low pH and become ordered and plug the active site at high pH, thus explaining the loss of activity. His465, the penultimate residue in sequence, was crystallographically shown to form an aldamine with the cofactor, thus acting as a "lock residue" for the active site (Fig. 1B) (3, 4). In the second conformational change, the first 14 residues of each subunit are unstructured in the high pH form, whereas in the low pH form they group in two bundles. The bundles consist of three helices each and surmount chloride-binding sites that, when occupied, exert a stabilizing effect (3). This latter conformational change is attributable to a classical coil-to-helix transition resulting from the protonation of carboxylate side chains located at strategic positions in this part of the sequence. In GadBΔ1-14 cooperativity of the absorbance change was greatly reduced (n ≈ 2) compared with wild type, whereas the transition from active to inactive form was 20 times slower and no longer affected by chloride. The conformational change at the N terminus also controls the intracellular distribution of the enzyme (4). At acidic pH wild type GadB is mainly associated with the membrane fraction, whereas at neutral pH it remains largely in the cytosol. Notably, the GadBΔ1-14 mutant remains in the cytosol under both acidic and neutral conditions. The two His465 mutants described in this work were produced to determine the effects of eliminating the capacity for aldamine formation on the cooperativity of the system, on the pH dependence itself, and on the link between the conformational changes at the C and N termini.

EXPERIMENTAL PROCEDURES

Materials-FastStart High Fidelity DNA polymerase, restriction enzymes, alkaline phosphatase, and ampicillin were from Roche Applied Science, the TOPO TA cloning system was from Invitrogen, and the DNA ligation system and DEAE-Sepharose FF were from GE Healthcare.
Taq DNA polymerase and SDS-PAGE protein markers were from Fermentas, and the plasmid DNA and the DNA extraction kit from agarose gel were from Nucleospin. Ingredients for bacterial growth were from Difco, and streptomycin sulfate was from U. S. Biochemical Corp. PLP and analytical grade sodium acetate were from VWR International. Sodium chloride was from Riedel-de Haen. 4-(2-Hydroxyethyl)piperazine-1-(3-propanesulfonic acid) was from Acros Organics. Vitamin B6, potassium dihydrogen phosphate, dipotassium hydrogen phosphate, L-glutamic acid, and kanamycin were from Fluka. All other chemicals were from Sigma. Oligonucleotide synthesis and DNA sequencing services were by MWG Biotech.

Site-directed Mutagenesis-Construction of the E. coli GadB mutants was performed by PCR amplification on the entire gadB open reading frame as cloned in pQgadB (16). In each case two primers were used. Site-directed mutagenesis for GadBH465A was performed using the forward oligonucleotide 5′-GGCGTATCACGAGGCCCTTTC-3′, which anneals upstream of the pQE60 polycloning site, and the mutagenic oligonucleotide 5′-GGAAGCTTAAACGTTATCAGGTAGCTTTAAAGCTGTTC-3′. The deletion mutant GadBΔHT, lacking His465 and Thr466, was generated using the forward primer 5′-GGCCATGGATAAGAAGCTAGTAACG-3′, which anneals in the 5′ coding region of the gadB gene, and the mutagenic oligonucleotide 5′-GGAAGCTTAAACGTTATCATTTAAAGCTG-3′. The italicized sequences indicate the NcoI and HindIII restriction sites used for directional cloning of the PCR products into pQE60. The underlined nucleotides are the codon substitutions introducing the His → Ala mutation in GadBH465A, whereas the nucleotides in bold are those in between the two triplets coding for His465 and Thr466 that were deleted in GadBΔHT. The amplification products were cloned in the pCRII-TOPO vector (TOPO TA cloning system). E. coli Mach1-T1 transformants were selected by blue/white screening. Plasmids from white colonies were purified and fully sequenced on both strands. Plasmids pCRII-H465A and pCRII-ΔHT were digested with EcoRV and HindIII, which cut 624 nucleotides upstream and 12 nucleotides downstream from the GadB TGA stop codon, respectively. The 639-nucleotide DNA fragment was subcloned into pQgadB (16) digested with the same restriction enzymes and therefore lacking the C-terminal region of gadB. The ligation mixture was used to transform E. coli JM109/pREP4. Transformants were screened by colony PCR. Plasmids from positive clones were also digested with NcoI and HindIII restriction enzymes to confirm the presence of the entire mutated gene.

Protein Purification, SDS-PAGE, and Cell Fractionation-The conditions used for expression and purification of GadBH465A and GadBΔHT were essentially as described for wild type GadB (16) except that the DEAE-Sepharose chromatography was performed at 4°C instead of at room temperature to improve the stability of the mutant enzymes. Protein purity was judged by 12% SDS-PAGE (17). Enzyme concentration and activity were assayed as described (16). The PLP content of all preparations was determined by treating the proteins with 0.1 M NaOH and measuring absorbance at 388 nm (ε388 = 6550 liter·mol⁻¹·cm⁻¹) (18). The effect of pH on the cellular localization of GadBH465A and GadBΔHT compared with wild type GadB was established following cell fractionation. Cytoplasmic and membrane fractions from E.
coli strains JM109/pREP4 containing either pQgadBH465A or pQgadBΔHT were obtained and assayed for enzyme activity and by immunoblot as described (4).

Gad Activity Assay-Two assay methods were used. The Gabase assay quantifies GABA production (16), and the specific activity determined is referred to as μmol GABA min⁻¹ mg⁻¹. Alternatively, an assay using a pH-stat (Metrohm 718 Stat Titrino) was conducted at 40°C in the absence of added buffer (20). Titration was performed with an aqueous solution of HCl (0.1 M). In a typical experiment, 10 ml of a solution containing 80 mM L-glutamic acid and 0.5 mM PLP in water was brought to pH 4.6 with NaOH. Gad (20 μg) was added, and the titration curve was recorded. The specific activity was determined from the initial slope of the titration curve over 5 min and is defined as μmol H⁺ added min⁻¹ mg⁻¹.

Spectroscopic Measurements-Absorption spectra and absorbance changes in the presence of L-glutamate were recorded at the indicated temperatures on a Hewlett-Packard Agilent model 8452 diode array spectrophotometer. CD spectra were obtained as the averages of three scans on a Jasco J-10 spectropolarimeter with a DP520 processor for thermostatic control of the cell compartment at 10°C. The scans were obtained in the 300-500-nm range in a 0.2-cm path length cuvette at a scan speed of 20 nm/min with a 1-nm bandwidth. Fluorescence spectra were taken with a FluoroMax-3 (Horiba Jobin-Yvon) spectrofluorometer with a thermostatically controlled cell compartment at 20°C using a 5-nm bandwidth on both sides at a scan speed of 100 nm/min. Blank spectra were subtracted from the spectra of enzyme-containing samples in both CD and fluorescence analyses.

Data Analysis-Curve fitting, deconvolution of spectra, and statistical analyses were carried out with Scientist (Micromath, Salt Lake City, UT) and GraphPad Prism 4.00 (GraphPad Software, San Diego, CA). The pH-dependent variations in absorbance were analyzed using Equations 1 and 2, where Abs_EH and Abs_E are the absorbances of the 420-nm absorbing species at the beginning and the end of the spectral transition, respectively, K, K1, and K2 are the intrinsic dissociation constants of the species involved in the titration, and n is the number of protons involved in the transition. Values for k_cat and K_m were determined in 50 mM sodium acetate buffer, pH 4.6, by following the changes in absorbance at 420 and 340 nm observed during the reaction of wild type GadB and of GadBH465A and GadBΔHT in the presence of different concentrations of L-glutamate as described (21). The absorption spectra were resolved into their component absorption bands (deconvolution) by nonlinear least-squares fit of the experimental data to the sum of a variable number of log-normal curves, each having independent parameters (22).

Bioinformatic Analysis-A K_a/K_s ratio analysis was carried out as follows: 14 orthologues/paralogues of GadB were manually selected from the output of a BLAST search versus the UniProt database. The protein sequences were aligned with T-COFFEE (23), and the corresponding coding sequences were retrieved from the EMBLCDS database (24) using DBFETCH and aligned with REVTRANS (25) using the protein sequence alignment as template. K_a/K_s ratios were calculated with SELECTON (26) in high accuracy mode using the M8 codon substitution model.
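Since the displayed forms of Equations 1 and 2 are referenced but not reproduced above, the following sketch assumes standard parameterizations consistent with their description: Equation 1 as a Hill-type titration with n protons, and Equation 2 as a two-pK sequential titration. The exact forms used by the authors may differ, and the data points and starting values below are illustrative, not measured ones.

```python
import numpy as np
from scipy.optimize import curve_fit

def eq1_hill(pH, Abs_E, Abs_EH, pK, n):
    """Hill-type titration (assumed form of Equation 1)."""
    H, K = 10.0 ** (-pH), 10.0 ** (-pK)
    frac_EH = H**n / (H**n + K**n)   # fraction of the protonated, 420-nm form
    return Abs_E + (Abs_EH - Abs_E) * frac_EH

def eq2_two_pK(pH, Abs_E, Abs_EH, pK1, pK2):
    """Two sequential deprotonations (assumed form of Equation 2)."""
    H = 10.0 ** (-pH)
    K1, K2 = 10.0 ** (-pK1), 10.0 ** (-pK2)
    denom = 1.0 + K1 / H + K1 * K2 / H**2
    return Abs_E + (Abs_EH - Abs_E) / denom

# Synthetic data resembling the shallow transition of the mutants.
rng = np.random.default_rng(0)
pH = np.linspace(4.5, 9.5, 12)
A420 = eq2_two_pK(pH, 0.45, 1.0, 6.7, 8.4) + rng.normal(0, 0.01, pH.size)

popt, _ = curve_fit(eq2_two_pK, pH, A420, p0=(0.4, 1.0, 6.5, 8.5))
print("fitted Abs_E, Abs_EH, pK1, pK2:", np.round(popt, 2))

# A steep, wild-type-like curve for comparison (midpoint 5.3, n = 6):
A420_wt = eq1_hill(pH, 0.0, 1.0, 5.3, 6.0)
```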
RESULTS

Overproduction and Purification of GadBH465A and GadBΔHT-His465 of GadB was either replaced with alanine (GadBH465A) or deleted together with the last residue in the polypeptide chain, Thr466 (GadBΔHT). Both mutants were overexpressed in E. coli and purified essentially following the protocol used for wild type GadB (16), which was also purified for comparison purposes. During DEAE-Sepharose chromatography at pH 6.5 the mutants, unlike wild type GadB, eluted as yellow bands, indicating that the mutations had altered their absorption spectra. The purity of both mutants was >95%, as based on SDS-PAGE (data not shown). The mutants were stable at 4°C for many months. The yield from a standard purification (a 2-liter culture) was approximately 70 mg for GadBH465A and 43 mg for GadBΔHT, corresponding to 54 and 33% of the purification yield of the wild type enzyme, respectively. The content of holoenzyme, established by calculating the PLP concentration in 0.1 N NaOH, was on average 79 and 67% of GadBH465A and GadBΔHT total protein concentration, respectively. Under standard assay conditions (0.2 M pyridine/HCl buffer, pH 4.6, at 37°C in the presence of 0.1 mM PLP) the specific activity, referred to the total protein concentration, was 176 units/mg for GadBH465A and 118 units/mg for GadBΔHT, corresponding to 78 and 52% of the specific activity of wild type GadB, respectively. At pH 4.6 the absorption spectra of the mutants are similar to each other but differ from that of wild type GadB (Fig. 2) because a small but significant 340-nm absorbing species is present. The lower absorbance at 420 nm is partly due to this species and partly due to the lower overall content of PLP (holoenzyme). The k_cat and K_m values calculated for each mutant are provided in Table 1. The mutations cause a slight decrease in both k_cat and K_m compared with wild type GadB, but the catalytic efficiency (k_cat/K_m) is similar to wild type.

pH-dependent Spectral Properties-Preliminary experiments in acetate buffer showed that the UV-visible absorption spectra of GadBH465A and GadBΔHT changed very little in the range of pH 4.5-7.0 (data not shown). This behavior contrasts with that of wild type GadB (4, 5). The range of analysis was therefore extended to pH 9.5, and phosphate was used as buffer instead of acetate. The spectral changes shown by the mutants are different in several respects. In the wild type enzyme at pH 6.8, less than 10% of the 420-nm absorbance observed at pH 4.6 is present (Fig. 3A, inset), whereas in both mutants ~50% of the 420-nm absorbance observed at pH 4.6 is still detected above pH 9 (Fig. 3, A and B). Moreover, the species formed at alkaline pH does not display maximum absorbance at 340 nm but rather at 334 nm, whereas the isosbestic point is shifted from 361 to 348 nm. The spectral transition is no longer affected by chloride, and the extreme cooperativity is lost (Fig. 3C). The finding that the pH-dependent transition does not go to completion at alkaline pH suggests that the deprotonation responsible is not that of the protonated aldimine itself. Instead, it must result from protonation of a group affecting the equilibrium between the two chromophores involved. The pH dependence of absorbance at 420 nm fit well to Equation 1 with values of n lower than 1 (Table 2), indicating a negatively cooperative process.
This is consistent with a mechanism, described by Equation 2, in which protonation of a group in one GadB monomer decreases the proton affinity of the corresponding group in one or more of the other monomers. The data fit well to Equation 2, with pK values of 6.7 and 8.4. The absorption spectra at pH 4.6 and 6.8 (wild type) or 8.5 (GadBH465A and GadBΔHT) were resolved into their component absorption bands (Fig. 4 and data not shown). Good fits were obtained to two components in all cases except for wild type GadB at pH 6.8 (Fig. 4A), where three components were required, with λmax values of 330, 341.5, and 410 nm. Because the species absorbing at 341.5 nm predominates at pH 6.8 in wild type GadB and is absent from the spectra of the mutants, it is deduced to be the aldamine with His465.

CD and Fluorescence Properties-The effects of mutating His465 on the CD and fluorescence spectra of GadB were investigated. The CD spectrum of GadBH465A at pH 4.6 displays a peak at 420 nm, coinciding in intensity and shape with that of wild type GadB (Fig. 5). At alkaline pH the CD spectra are significantly different; in wild type GadB the CD signal at pH 7.5 is essentially flat in the range 300-500 nm, whereas in GadBH465A at pH 8.3 the 420-nm signal is still present and accounts for more than 45% of the signal detected at pH 4.6 (Fig. 5). In addition, GadBH465A at pH > 7.5 shows a positive CD signal centered at 332 nm. The CD spectra of GadBΔHT were very similar to those of GadBH465A. The fluorescence properties of wild type GadB, GadBH465A, and GadBΔHT were analyzed at acidic and alkaline pH. When excited at 430 nm at pH 4.6, both wild type and mutants show an emission spectrum with a maximum at 500 nm (Fig. 6, A and B). The fluorescence changes significantly at alkaline pH; wild type GadB exhibits very little fluorescence upon excitation at 430 nm (Fig. 6A), whereas the fluorescence of both mutants increases and shifts to 522 nm (Fig. 6B). Upon excitation at 345 nm the wild type enzyme and the mutants display different emission spectra both at acidic and alkaline pH. The emission spectrum of wild type GadB at pH 4.6 shows two maxima, at 390 and 500 nm, whereas at pH 7.5 it exhibits only one peak with a maximum at 390 nm (Fig. 6C). The emission spectrum at pH 4.6 of both His465 mutants displays a maximum at 390 nm, which decreases upon alkalinization with a concomitant increase at 510 nm (Fig. 6D). Upon excitation of Trp residues at 295 nm (Fig. 6E), the emission spectrum of wild type GadB exhibits maxima at 355 and 490 nm at pH 4.6 and at 355 nm at pH 7.5. The fluorescence emission spectra of both mutants obtained upon excitation at 295 nm display maxima at 355 and 490 nm at acidic pH and at 355 and 520 nm at alkaline pH.

pH-dependent Cellular Partition-To assess whether in the two mutants the six N termini still form two triple helical bundles at acidic pH, and whether this remains a cooperative process despite the absence of cooperativity in the change of the absorbance spectrum (Fig. 3C and Table 2), we relied on an earlier finding: the acid-induced formation of the triple bundles determines the partition of the enzyme between cytosolic and membrane fractions (4). The two fractions, obtained by ultracentrifugation of cell extracts from the E. coli strains overexpressing wild type GadB, GadBH465A, and GadBΔHT, were resuspended at four different pH values: 5.5, 5.75, 6.0, and 6.2.
The distribution of the different forms of the enzyme was determined by SDS gel electrophoresis and by activity measurements (Fig. 7). The proportion of all three proteins found in the membrane fractions increases with pH, and the process is cooperative. The transition midpoint is at pH 5.9 for all three proteins. The cooperativity levels are similar (n ≈ 3.0) within the experimental error. However, for the wild type enzyme the cooperativity is significantly lower than that governing the change in absorption spectrum (Fig. 3A, inset, and Table 2).

Catalytic Properties-Spectroscopic analysis provided evidence that in both His465 mutants the 420-nm absorbing species is still significantly present at very high pH values. This species is considered the catalytically competent one. The activity of both mutants was therefore assayed in the pH range 4.0-7.0 using a pH-stat device without added buffer (Fig. 8). Similar results were obtained by measuring the activity either in acetate or in phosphate buffer (Fig. 8, inset). The data of activity versus pH fit well to the Hill equation, suggesting that cooperativity still affects the enzyme activity (n = 2-3), although not at the same high levels as in the spectroscopic changes. The transition midpoint was at pH 5.6 in wild type GadB and at pH 6.0 in the mutants. At pH 6.7 GadBH465A and GadBΔHT still decarboxylate L-glutamate at a significant rate (Fig. 8). The reaction was followed in 50 mM phosphate buffer at pH 6.7 over a 2-h period; more than 47% of the 50 mM glutamate in the reaction mixture was converted into GABA by the mutants versus 8% by wild type GadB (Fig. 9). Because of the significant amount of protons used up during the reaction, the pH of the solution increased by 0.3 unit, which probably hampered the complete conversion of substrate in the reaction mixture.

FIGURE 3. pH-dependent absorption changes. A, absorption spectra of GadBH465A at pH 4.86, 5.9, 7.16, 7.56, and 8.9 and of wild type GadB (inset) at pH 4.86, 5.13, 5.33, 5.64, and 6.84. B, absorption spectra of GadBΔHT at pH 4.56, 6.9, 7.16, 8.5, and 8.9. Protein concentration was 7.7 μM, and the spectra were recorded in 50 mM potassium phosphate buffer at the indicated pH values. The arrows indicate the change in absorbance at 420 nm and at 340 nm upon increasing pH. C, the pH variation at 420 nm is represented for wild type GadB, in the absence (filled squares) and presence (empty squares) of 50 mM NaCl, for GadBH465A (filled triangles), and for GadBΔHT (empty circles). The solid and the dashed lines through the experimental points show the theoretical curves obtained using Equations 1 and 2, respectively.

DISCUSSION

The production in bacteria of inducible decarboxylases was elegantly shown more than 60 years ago by Gale (27), who stated that it "may be the method by which the organism extends its (pH) range of existence." Indeed the neutrophilic bacterium E. coli can survive for more than 2 h at pH ≤ 2.5 when glutamate is supplied in minimal medium, and Gad activity is essential to this ability (28, 29). The acidic pH optimum of Gad is well suited to the intracellular pH of 4.5 occurring upon exposure to an extremely acidic environment (pH < 2.5) like that of the stomach. Gad activity was suggested to be beneficial to the micro-organism by contributing to proton consumption and to the inversion of membrane potential, counteracting the entry of more protons (2, 4).
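A back-of-the-envelope calculation based on the Hill fits reported above illustrates why shifting the transition midpoint from pH 5.6 to 6.0 matters at mildly alkaline pH. The functional form and the value n = 2.5 are illustrative choices within the reported range of 2-3, so the printed fractions are indicative only.

```python
def fraction_active(pH, midpoint, n):
    """Hill-type fraction of maximal activity at a given pH."""
    H, K = 10.0 ** (-pH), 10.0 ** (-midpoint)
    return H**n / (H**n + K**n)

for label, midpoint in (("wild type", 5.6), ("GadBH465A/GadBΔHT", 6.0)):
    f = fraction_active(6.7, midpoint, 2.5)
    print(f"{label}: fraction of maximal activity at pH 6.7 = {f:.3f}")
# The roughly 10-fold difference goes in the same direction as the observed
# conversion of glutamate at pH 6.7 (more than 47% versus 8% over 2 h).
```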
The present work was undertaken with the aim of providing new insights into the molecular basis of the pH-dependent mechanism of Gad regulation, in particular with respect to the role played by His465, the residue locking the active site by forming an aldamine with the PLP-Lys276 Schiff base (3). One inevitable effect of mutating His465 is that the contribution of aldamine formation is eliminated from the pH-dependent equilibrium between active and inactive forms of the enzyme. A second inevitable effect is that the pH dependence of aldamine formation, which requires the imidazole of histidine to be deprotonated, is also eliminated. An unexpected finding was the major effect that the mutations exert on the cooperativity and midpoint of the pH-dependent spectroscopic transition. Cooperativity-Earlier experiments on GadBΔ1-14 showed that the inability to form N-terminal helical bundles lowers the cooperativity from n >> 6 to n ≈ 2 (3). Here we show that the formation of the triple helical bundles is still cooperative both in the His465 mutants and in wild type GadB, but in this case n ≈ 3. The point of half transition is at pH 5.9. Notably, O'Leary and Brummund (7) showed that E. coli Gad, both native and borohydride-reduced (in which aldamine formation/decomposition cannot occur), undergoes a rate-determining step, consisting of the loss of three protons, before the spectral changes are detected. Based on our results and on previous ones (3,7), we suggest that the contribution from N-terminal helix formation to the cooperativity of the whole system is equivalent to n ≈ 3 and that the bundles fold/unfold independently of the events occurring at the active site. Thus the spectroscopic changes of GadB not only track the conformational changes in the active site but also indirectly record the rate-determining conformational changes at the distant N termini (3). It is tempting to assume that during evolution a histidine residue was selected for aldamine formation because the pK of its distal side chain nitrogen (6.0) is very close to the pK at which the transition between the folded and unfolded states of the bundles occurs, thus allowing both events (aldamine formation and bundle formation) to become linked. This assumption is substantiated by a Ka/Ks ratio analysis of the coding sequences of GadB and of 14 among its orthologues and paralogues. This kind of analysis detects which residues in a protein evolve neutrally and which ones, because of their structural or functional importance, are subjected to purifying selection (30). In the case of GadB, the 14-residue-long C-terminal tail, although conformationally disordered at low pH, possesses one residue under strong purifying selection, and this is indeed the one corresponding to E. coli GadB His465 (Fig. 10). Because positive cooperativity in the spectroscopic transition is lost when His465 is removed (Fig. 3C), this residue is likely responsible for the residual cooperativity observed in GadBΔ1-14 (3,4). Its absence massively affects the equilibrium between the open and closed active sites. This partially explains the finding that both mutants are at least twice as active as wild type GadB in the pH range 5.7-7.0 (Fig. 8). In wild type GadB the level of cooperativity of the catalytic activity as a function of pH is significantly lower (n = 2-3) than that of the change in absorption spectrum.
The difference likely arises because the absorption spectrum changes were observed on the free enzyme, whereas the activity measurements were initial velocities taken at high glutamate concentration. Under the latter conditions the active sites of the enzyme are fully occupied by glutamate, so that His465 cannot enter. Notably, a similar level of cooperativity, although not detectable at the spectroscopic level, is observed in the His465 mutants. Because the conformational change occurring at the N termini is still a cooperative process in the mutants, the cooperativity in the activity is likely due to one or more other residues also undergoing a conformational change. One candidate is Asp86, a substrate-binding residue located on a loop that takes up different conformations in the high and low pH forms of GadB (4). The negative cooperativity of the spectroscopic changes in the mutants also needs to be explained. Based on the fitting of the experimental data, we assume that the pK of an ionizable group affecting the equilibrium between the 420- and the 334-nm absorbing species increases as a consequence of the conformational change. Such a perturbation in the pK of an ionizable group often occurs on catalytic groups in enzyme active sites and is known as the Born effect (or desolvation effect); it favors the neutral form of a titratable group when transferred from a polar to an apolar environment (31). Analysis of the His465 mutants revealed that an active site residue undergoes this change in pK when the active site environment of GadB becomes less polar.

FIGURE 9. Time course of GABA production. GABA production was analyzed over a period of 2 h in wild type GadB (black), GadBH465A (light gray), or GadBΔHT (dark gray). The decarboxylation reaction was carried out at 30°C in 4 ml of 50 mM potassium phosphate buffer, pH 6.7, in the presence of 40 μM PLP and 50 mM glutamate. The enzyme concentration was 2 μM. During the reaction, aliquots (200 μl) were withdrawn and analyzed to assess GABA content with the Gabase assay. The pH changes during the reaction were also recorded (inset). The symbols and lines used are as in Fig. 7B. The reported values represent the means of three independent experiments, with a variation not exceeding 10% of the stated value.

Spectroscopic Properties-E. coli GadB is the only PLP-dependent enzyme where an aldamine form of the cofactor has been crystallographically observed (3). Thus it can be used to analyze the aldamine contribution to the spectroscopic properties of a PLP-dependent enzyme and to compare these properties with those of chemical models (9,15). UV-visible spectroscopy confirms that in the absence of His465 the active site of GadB is less efficiently locked. This is mainly substantiated by the persistence of the 420-nm absorbing ketoenamine species, typical of a more hydrated active site (5,32), which is still appreciably detected at pH > 9.0. Several spectroscopic lines of evidence confirm that the 340-nm species is the aldamine. Deconvolution of the spectra shows that the 340-nm absorbing species, detected as the major species in wild type GadB at pH 6.8, is absent in the mutants, and a 327-nm absorbing species is present instead. In CD spectra both His465 mutants display, unlike wild type GadB, optical activity centered at 332 nm.
We propose that in wild type GadB the aldamine contribution via the sp³-hybridized C4′ of the cofactor is opposite in sign to the Cotton effect arising from the active site environment of the cofactor, thus resulting in an almost undetectable optical activity and confirming the hypothesis of O'Leary and Brummund (7). Fluorescence emission spectroscopy of wild type GadB and of the His465 mutants provided insight into the nature and fluorescence properties of the PLP derivatives embedded in the active site. Upon alkalinization the emission of the mutants shifts from 490 to 522 nm when excited at 430 nm; this can be attributed to the ketoenamine species switching from a polar to an apolar environment. A model PLP-hexylamine Schiff base behaves similarly when analyzed in chloroform instead of water (9). Upon excitation at 345 nm, wild type GadB at acidic pH and the His465 mutants at any pH exhibit two emission maxima, at 390 and 490 nm, with the former always prevailing. We propose that the enolimine tautomer of the PLP-Lys276 Schiff base is being detected under these conditions. In model studies the fluorescence emission of the enolimine is characterized by an unusually high Stokes shift (9,000-11,000 cm⁻¹) (15) because of an intramolecular proton transfer in the excited state, yielding the ketoenamine as the excited species, which then emits fluorescence at long wavelengths (15). The rates of proton transfer and of radiative decay from the ketoenamine (≈510 nm) or directly from the enolimine (≈430 nm) were suggested to account for the relative intensities of the two fluorescence emission maxima in phosphorylase b and other PLP-dependent enzymes (33,34). Thus we suggest that in the absence of the His465 side chain the enolimine tautomer is present in the active site of GadB; the enolimine is typically detected when the active site becomes less polar (9). This is in accord with previous findings with the GadB Lys276 mutant (5). The complexity of the fluorescence emission spectrum of the 330-340-nm absorbing species in GadB appears to arise from the combined effects of several factors: the C4′ hybridization, the radiation energy-induced intramolecular proton transfer between Schiff base tautomers, and their respective radiative decay rates. However, based on the present findings, it cannot be excluded that similar results could be obtained with a carbinolamine (34). Catalytic Properties-It has recently been shown that wild type GadB can be efficiently entrapped in calcium alginate beads (immobilized) and used in a reactor set-up with a pH-stat to perform the decarboxylation reaction at pH 4.6 in the absence of a buffer system (20). Immobilization does not affect the pH dependence of enzyme activity. Because glutamic acid is abundant in waste streams from biofuel production, it is regarded as an interesting starting material for the synthesis of nitrogen-containing bulk chemicals, which can be derived from GABA (20). Because in GadBH465A and GadBΔHT the active site "lock" cannot be formed and the range of activity is significantly extended toward alkaline pH, they are very attractive for the above application. Indeed, at pH 5.9 the specific activity of both mutants is four times higher than that of wild type GadB. At this pH glutamic acid is much more soluble than at pH 4.6 (35). This is very advantageous for use in a bioreactor, because it removes the bottleneck caused by the limited solubility of glutamate at the acidic pH optimum of wild type GadB, with substantial gains in efficiency.
In conclusion, aldamine formation contributes significantly to the inactivation of GadB and to its pH-dependent activity profile. The pK of the distal imidazole nitrogen of His465 affects the pH-dependent spectroscopic and catalytic properties of E. coli GadB, and His465 must be present to integrate the cooperativity of triple helix formation into the cooperativity of the whole system. Notably, in the eukaryotic relative of E. coli GadB, Arabidopsis thaliana Gad1, this residue is not found in the long C-terminal tail that is instrumental to activity regulation by pH and by Ca²⁺/calmodulin binding, and this probably explains why Gad1 does not undergo the same mechanism of inactivation (19).
Evidence of association between Nucleosome Occupancy and the Evolution of Transcription Factor Binding Sites in Yeast

Background Divergence of transcription factor binding sites is considered to be an important source of regulatory evolution. The associations between transcription factor binding sites and phenotypic diversity have been investigated in many model organisms. However, the understanding of other factors that contribute to it is still limited. Recent studies have elucidated the effect of chromatin structure on molecular evolution of genomic DNA. Though the profound impact of nucleosome positions on gene regulation has been reported, their influence on transcriptional evolution is still less explored. With the availability of genome-wide nucleosome map in yeast species, it is thus desirable to investigate their impact on transcription factor binding site evolution. Here, we present a comprehensive analysis of the role of nucleosome positioning in the evolution of transcription factor binding sites. Results We compared the transcription factor binding site frequency in nucleosome occupied regions and nucleosome depleted regions in promoters of old (orthologs among Saccharomycetaceae) and young (Saccharomyces specific) genes; and in duplicate gene pairs. We demonstrated that nucleosome occupied regions accommodate greater binding site variations than nucleosome depleted regions in young genes and in duplicate genes. This finding was confirmed by measuring the difference in evolutionary rates of binding sites in sensu stricto yeasts at nucleosome occupied regions and nucleosome depleted regions. The binding sites at nucleosome occupied regions exhibited a consistently higher evolution rate than those at nucleosome depleted regions, corroborating the difference in the selection constraints at the two regions. Finally, through site-directed mutagenesis experiment, we found that binding site gain or loss events at nucleosome depleted regions may cause more expression differences than those in nucleosome occupied regions. Conclusions Our study indicates the existence of different selection constraint on binding sites at nucleosome occupied regions than at the nucleosome depleted regions. We found that the binding sites have a different rate of evolution at nucleosome occupied and depleted regions. Finally, using transcription factor binding site-directed mutagenesis experiment, we confirmed the difference in the impact of binding site changes on expression at these regions. Thus, our work demonstrates the importance of composite analysis of chromatin and transcriptional evolution.
Background

The chromatin of eukaryotic genomes is compacted into several levels. Nucleosomes, which form the lowest level of compaction, are made up of ~147 bp of DNA wrapped around a histone protein complex and interspersed by ~50 bp of exposed linker DNA. In recent years, the occupancy of nucleosome positions in yeasts has been investigated by using different approaches (such as tiling arrays and parallel sequencing), which employ micrococcal nuclease (MNase) digestion [1][2][3]. The results show that about 70-80% of the yeast genome is occupied by nucleosomes [4][5][6]. The intrinsic mechanisms that determine nucleosome locations have long been of interest to researchers. Studies of budding yeast have discovered dinucleotide (AA/TT/AT) periodicity along nucleosome positioning sequences [7,8], and that nucleosome depleted regions (NDRs) are characterized by positioned stretches of poly(dA:dT) tracts [9,10]. In addition, a number of patterns of nucleosome occupancy have been observed. For example, a ~140 bp NDR is often found upstream of the transcription start site flanked by -1 and +1 nucleosomes, with the +1 nucleosome located ~13 bp downstream from the transcription start site [11,12]. It has also been found that, near the 5' end of genes, a uniform 165 bp spacing of nucleosomes (18 bp linker) extends to as many as nine nucleosomes [5][6][7][8][13][14][15]. Importantly, many of these features are evolutionarily conserved [7,16]. It is known that the transcription mechanism in eukaryotes functions at different levels: at the DNA sequence level, transcription factors interact with cis-regulatory sequences; and at the chromatin level, the chromatin allows the chromosomal segments to switch between activated and suppressed states of transcription [17,18]. The interplay of changes in nucleosome occupancy and transcriptional machinery at each level suggests a strong association between nucleosome positioning and the transcription mechanism [19,20]. For example, TATA-less promoters, which are characterized by NDRs, are frequently linked to basal transcription. Conversely, the promoters of TATA-containing genes tend to be occupied by nucleosomes and are stress responsive [13,21,22]. Moreover, it has been demonstrated that nucleosomes could facilitate the recognition of transcription factor binding sites (TFBSs) and guide transcription factors to their target sites in a DNA sequence [22,23]. As an example, Maffey et al. [24] characterized the constraints imposed by well positioned nucleosomes on the interaction of androgen receptors with their binding sites, which are located in the proximal promoters of murine probasin genes.
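Because the poly(dA:dT) signature of NDRs mentioned above is purely sequence-based, it can be scanned for directly. The sketch below is illustrative only; the promoter string, the 6-bp minimum tract length, and the function name are our assumptions, not values from the studies cited.

import re

def poly_da_dt_tracts(seq, min_len=6):
    # Spans of homopolymeric A or T runs of at least min_len bp,
    # a crude proxy for nucleosome-excluding poly(dA:dT) elements.
    pattern = re.compile("A{%d,}|T{%d,}" % (min_len, min_len))
    return [(m.start(), m.end()) for m in pattern.finditer(seq.upper())]

promoter = "CGTAAAAAAATGCGCTTTTTTTTACG"   # toy promoter sequence
print(poly_da_dt_tracts(promoter))        # [(3, 10), (15, 23)]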
The above evidence confirms the importance of the association between nucleosome positioning and transcriptional regulation. Such evidence in turn raises the interesting issue of the role of nucleosomes in constraining evolutionary changes in TFBSs. Recent studies have identified evolutionary features related to nucleosome organization in yeasts [9,25]. For example, it has been found that nucleosome free linker regions have a lower evolution rate than nucleosome occupied regions (NRs) [9,25]. In another study, a large-scale comparative genomic analysis of distantly related yeasts found that gene expression divergence is coupled with the evolution of DNA-encoded nucleosome organization [26]. Further, by analyzing the nucleosome positions of two closely related yeast species, Tirosh et al. [27] indicated that the major contribution towards divergence of nucleosome positioning is through mutations in the local sequences (cis-effects). Moreover, the sequences that quantitatively affect nucleosome occupancy were found to evolve under compensatory dynamics while maintaining heterogeneous levels of AT content [28]. Considering the fact that a significant fraction of regulatory variation can be attributed to changes in cis-regulatory elements [29][30][31][32], understanding the evolutionary process requires the investigation of all the factors that contribute to TFBS evolution [33]. With the availability of the whole genome nucleosome map in yeast species [34], it is thus desirable to extend existing studies on regulatory regions from an evolutionary perspective while considering the presence of chromatin structure. In this paper, we have attempted a more comprehensive analysis to demonstrate that nucleosome occupancy in yeast promoters plays an important role in the evolutionary changes in TFBSs. To determine the evolutionary features of TFBSs constrained by nucleosome occupancy, we first investigated the distribution of TFBSs in NRs and NDRs that regulate 1) orthologous genes of Saccharomyces cerevisiae, Candida glabrata, and Kluyveromyces lactis (Saccharomycetaceae); and 2) those that specifically regulate S. cerevisiae (Saccharomyces specific) genes, which represent young genes. We found that TFBS locations in orthologous genes are dominant in NDRs, but those in Saccharomyces specific genes appear more frequently in NRs. To further validate this evolutionary tendency, we investigated the distribution of TFBSs in NRs and NDRs in duplicate gene pairs of yeast that might have undergone relaxation of selection pressure. Since TFBS variations are due to differences in consensus sequences, and nucleotide substitutions can promote diversification of regulatory elements [35,36], these interesting findings motivated us to estimate the evolution of TFBSs by position-specific evolution rates [37]. The evolution rates of TFBSs were found to be higher at NRs than at their depleted counterparts (NDRs). Finally, the impact of TFBS changes on gene expression at NRs and NDRs was evaluated using site-directed mutagenesis of TFBSs and real-time PCR analysis. Our findings on the evolutionary events in TFBSs suggest that 1) NRs can accommodate more changes that contribute to the variation in TFBSs, and 2) the selection constraints of NRs and NDRs are different. Future analyses of data across different biological conditions can reflect on the role of variations in TFBSs.
Collecting yeast TFBSs

The genome sequence and the gene and chromosome annotations of the yeast species examined in this study were obtained from a recent compilation in the Saccharomyces Genome Database (SGD) [38]. The target genes of transcription factors and their TFBSs in five closely related yeasts from the Saccharomyces sensu stricto clade, namely, S. cerevisiae, S. paradoxus, S. mikatae, S. kudriavzevii and S. bayanus, were retrieved from the MYBS database http://cg1.iis.sinica.edu.tw/~mybs/ [39] (Figure 1a). MYBS contains integrated information derived from an array of experimentally verified and predicted consensus or position weight matrices (PWMs) that correspond to 183 known yeast transcription factors. To improve the accuracy of binding site searches, traditional methods impose filters such as phylogenetic footprinting information and transcription factor-DNA binding affinity by setting the p-value in a ChIP-chip experiment. However, during inter- or intra-species evolutionary analysis, using conservation of phylogenetic footprinting as the primary criterion is not feasible. In such cases, simply considering the constraints of bound promoters in ChIP-chip data might be insufficient. Thus, in the current work, to control for the specificity of TFBSs, we examined the reliable annotations of TFBSs for each transcription factor according to the following criterion: for a transcription factor α, the ratio had to be satisfied. We applied an additional criterion that the p-value of the corresponding transcription factor ChIP-chip experiment for the gene should be ≤ 0.001 [40]. Furthermore, to avoid ambiguity, overlapping TFBSs corresponding to the same transcription factor were excluded from our analysis. In total, our dataset contained 104 transcription factors with 29,193 TFBSs in 2,522 promoters of S. cerevisiae. For TFBSs corresponding to the 104 transcription factors that occurred at least once in all five sensu stricto species, including S. cerevisiae, there were 22,447 TFBSs present in 1,134 promoters (Table 1).

Figure 1 legend (partial, items (c)-(h)): (c) orthologous genes were collected from OrthoMCL-DB and S. cerevisiae specific genes were detected; (d) duplicate gene pairs were identified in S. cerevisiae; (e) the frequency distribution of TFBSs in orthologous genes, Saccharomyces specific genes and duplicate gene pairs was derived with respect to nucleosome occupancy in S. cerevisiae; (f) suitable statistical tests were used to determine if the distributions in (e) were significantly different; (g) the evolutionary rates of TFBSs present in sensu stricto yeasts were calculated at NRs and NDRs; and (h) the differences in (g) were tested for statistical significance.

Nucleosome occupancy information in S. cerevisiae

Genome-wide nucleosome occupancy data (Figure 1b) for S. cerevisiae were retrieved from http://atlas.bx.psu.edu/project/saccharomyces.html [11]. Mavrich et al. [11] used MNase-digested DNA from nucleosome core particles that were crosslinked with formaldehyde in vivo. These were further immunopurified with antibodies against tagged histones H3 and H4. After correction for MNase bias and making calls on nucleosome locations, a total of 1,206,057 individual nucleosomal DNAs were sequenced using Roche GS20 (454 Life Sciences), and then mapped to genomic coordinates obtained from http://www.yeastgenome.org [38]. Furthermore, Mavrich et al. established rules governing genomic nucleosome organization in S. cerevisiae.
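Given nucleosome calls of this kind, deciding whether a TFBS lies in an NR or an NDR reduces to an interval-overlap test (the rule itself is stated below). A minimal sketch with hypothetical coordinates:

def in_nucleosome(tfbs, nucleosomes):
    # A TFBS (start, end) is assigned to an NR if it overlaps any
    # nucleosome interval, and to an NDR otherwise.
    s, e = tfbs
    return any(s < n_end and n_start < e for n_start, n_end in nucleosomes)

nucleosomes = [(100, 247), (300, 447)]             # toy nucleosome calls
tfbs_sites = [(120, 128), (250, 258), (440, 448)]
labels = ["NR" if in_nucleosome(t, nucleosomes) else "NDR" for t in tfbs_sites]
print(labels)   # ['NR', 'NDR', 'NR']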
Mavrich et al. also developed a statistical model to predict nucleosome positions in terms of nucleosome occupancy, and identified well positioned and fuzzy nucleosomes. In this work, we consider both well positioned and fuzzy nucleosomes.

Orthologous and Saccharomyces specific genes

We first examined the differential relationship between the frequency distribution of TFBSs in orthologous genes (in Saccharomycetaceae) and in genes only present in the descendent species S. cerevisiae, with respect to nucleosome occupancy. For this task, we collected the genome sequences of three diverged yeast species, namely, S. cerevisiae, C. glabrata, and K. lactis, from SGD [38]. Then, for each of the 2,522 genes in S. cerevisiae, we downloaded the "orthologous" genes in C. glabrata and K. lactis from OrthoMCL-DB [41] (see Table 1 and Figure 1c). Genes that existed in S. cerevisiae, but not in C. glabrata or K. lactis, are called "Saccharomyces specific" genes. Additional file 1 Table S1 lists the 2,152 orthologous genes and 75 Saccharomyces specific genes considered in our analysis. Next, using the TFBSs from our set of transcription factors, we computed the numbers of TFBSs in the nucleosome occupied regions (NRs) and nucleosome depleted regions (NDRs) of each gene (Table 1) based on the genome-wide nucleosome occupancy map of S. cerevisiae [11] (Figure 1e). The measurement was performed separately on the orthologous genes and the Saccharomyces specific genes. A TFBS was deemed to be in an NR (or NDR) if its location overlapped (or did not overlap) with that of the nucleosome positions retrieved from Mavrich et al. [11]. For those TFBSs, we used a two-sided χ²-test to determine whether the differences in their frequencies in NRs and NDRs occurred more often than expected at random (Figure 1f). The null hypothesis H0 is that the frequency distribution in NRs is equal to the distribution in NDRs, and the alternative hypothesis is that they are different. We rejected the null hypothesis under the criterion that the p-value ≤ 0.05.

Identifying duplicate genes in S. cerevisiae

We compiled a list of 1,048 independent duplicate pairs in the S. cerevisiae genome by adopting a similar, but more stringent, protocol to that developed by Gu et al. [42]. First, we downloaded all available proteins in S. cerevisiae from the latest compilation of SGD [38]. To identify duplicate gene pairs (Figure 1d), we performed an all-against-all BLASTP search on the entire proteome. Two genes were regarded as a duplicate pair if they satisfied the following three criteria. First, the expected value (E) of reciprocal best hits during the BLASTP search should be < 10⁻²⁰. Second, the length of the alignable region (L) between the two sequences should be greater than half of the length of the longer protein. Third, their similarity should be ≥ I, where I = 30% if L ≥ 150 amino acids (a.a.), and I = 0.06 + 4.8L^(-0.32(1 + exp(-L/1000))) if L < 150 a.a. Furthermore, all overlapping pairs and transposon-containing genes were excluded to ensure that each gene pair only occurred once in our dataset. Moreover, only gene pairs with at least 150 informative codons were retained for further analysis. From the promoters of duplicate gene pairs, we computed the frequency of TFBSs in NRs and NDRs and normalized it by the total number of TFBSs at these regions (Figure 1e).
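The three duplicate-pair criteria above translate directly into a filter. In the sketch below the function name is ours, and the placement of the exponent in the length-dependent identity cutoff follows the standard form of that formula, since the printed version is ambiguous.

import math

def is_duplicate_pair(evalue, aln_len, longer_len, identity):
    # Criterion 1: reciprocal best-hit E-value below 1e-20.
    if evalue >= 1e-20:
        return False
    # Criterion 2: alignable region longer than half the longer protein.
    if aln_len <= 0.5 * longer_len:
        return False
    # Criterion 3: length-dependent identity cutoff.
    if aln_len >= 150:
        cutoff = 0.30
    else:
        cutoff = 0.06 + 4.8 * aln_len ** (-0.32 * (1 + math.exp(-aln_len / 1000.0)))
    return identity >= cutoff

print(is_duplicate_pair(1e-30, 200, 350, 0.42))   # True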
Furthermore, we determined whether the preferences of the TFBSs at NRs and NDRs were significantly different according to a one-sided two-sample proportion test (Figure 1f) under the criterion that the p-value < 0.01 (Table 2).

Calculating the evolution rates of TFBSs

We calculated the evolution rates of TFBSs in NRs and NDRs based on the method proposed by Moses et al. [37]. The rates were computed for all the TFBSs of S. cerevisiae that were conserved in the other sensu stricto yeasts (Figure 1g). Using aligned promoters from the same gene sets of the sensu stricto yeasts [43], the species tree of these species [44,45], and a parsimony algorithm [46], we derived evolutionary inference by computing the minimal number of changes (minimum parsimony) needed to explain each column of the alignment of the promoters of the other four species with the promoters of S. cerevisiae. Promoter regions with missing sequences in the alignment were treated as gaps and excluded. The average evolution rate of a TFBS was obtained by computing the sum of the minimal number of changes over all positions, divided by its length. Given that mutation rates at NRs are higher across the genome when compared to NDRs [9,25], it is natural to ask whether the evolutionary rates of TFBSs at NRs and NDRs are simply an extrapolation of this genome-wide trend. In this situation, using the complete set of promoter sequences containing the TFBSs as a general background can induce considerable bias. Hence, to control for such context-induced bias, we calculated the number of changes in NRs and NDRs separately, excluding the positions containing TFBSs on each promoter. These two calculations act as two types of backgrounds. The number of changes in a TFBS (at NRs and at NDRs) was further normalized by the number of changes in the respective background (Figure 1g). Furthermore, for species with short evolutionary distances, like those considered here, the number of substitutions per site of a DNA sequence determined by parsimony methods is expected to be similar to that obtained by applying a maximum likelihood approach. We also investigated whether the median evolution rate of TFBSs at NRs was statistically greater than that of TFBSs at NDRs by applying the Wilcoxon-Mann-Whitney U two-sample test with a stringent criterion that the p-value ≤ 0.01 (Figure 1h).
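The per-column change counts used in the rate calculation can be obtained with Fitch's parsimony algorithm. A minimal sketch (ours): the tree is the accepted sensu stricto topology with leaves ordered S. cerevisiae, S. paradoxus, S. mikatae, S. kudriavzevii, S. bayanus, and the column string is hypothetical.

def fitch_changes(column, tree):
    # Minimum number of substitutions explaining one alignment column.
    count = 0
    def post_order(node):
        nonlocal count
        if isinstance(node, int):          # leaf: index into the column
            return {column[node]}
        left, right = (post_order(child) for child in node)
        inter = left & right
        if inter:
            return inter
        count += 1          # each empty intersection costs one change
        return left | right
    post_order(tree)
    return count

tree = ((((0, 1), 2), 3), 4)
print(fitch_changes("AAGGA", tree))        # 2

The average rate of a TFBS is then the sum of per-column changes over the motif, divided by its length and normalized by the matching background, as described above.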
Mutagenesis for TFBSs

A yeast strain (BY4741 (BY), a descendant of S288C) was grown in yeast extract-peptone-adenine-dextrose (YPAD) medium [47] and harvested at mid-log phase. Overnight yeast cultures were used to prepare starting cultures at OD600 = 0.1, which were grown in YPAD medium at 30°C with 250 rpm shaking. The yeast cells were harvested at OD600 = 1.0, total RNA was extracted using the MasterPure™ yeast RNA purification kit (EPICENTRE), and contaminating DNA was removed by DNase I treatment with the same kit. To determine the effects on expression of TFBSs that have recently evolved in related yeast species, we randomly chose TFBSs that have undergone gain or loss events [48] from NRs and NDRs in S. cerevisiae promoters for site-directed mutagenesis, and identified the nucleotides that cause TFBS gain or loss in each gene. The constructions were performed by PCR-based mutagenesis, which involved two sequential steps [49]. First, the TFBS region of interest in the BY gene was replaced by a URA3 cassette with regions of about 45 bp homologous to the gene of interest flanking both ends. To perform the first transformation, we used the LiOAc/SS carrier DNA/PEG method [50], and the insertion of URA3 in the TFBS region was confirmed by diagnostic PCR and sequencing. The inserted URA3 was then replaced, by a second transformation, with the appropriate fragment of BY's PCR-based TFBS-modified sequence (where the specific transcription factor could not bind) in the URA3-inserted strain. The second transformation was performed by electroporation according to the user manual of the MicroPulser™ electroporator (BIORAD). The transformants were selected by 5-fluoroorotic acid (5-FOA) counter-selection. Only the strains (called swapped strains) that carried the desired sequence (where the specific transcription factor could not bind) survived and formed colonies on media with 5-FOA (4 μg/ml). The constructions in the TFBS region were confirmed by diagnostic PCR and sequencing.

Perusing expression shifts with real-time PCR

To compare the mRNA levels of the candidate genes (the genes in the mutagenesis and control groups), we used a SYBR green core reaction to perform quantitative PCR (Applied Biosystems model 7300 Real-Time PCR System). Before performing real-time PCR, total RNA was first reverse transcribed with a high-capacity cDNA reverse transcription kit (Applied Biosystems) using oligo dT primers as reverse transcription primers. Real-time PCR was performed in a final volume of 25 μl containing 50 ng of the cDNA sample, 50 nM of each gene-specific primer, and 12.5 μl of the SYBR green Taq premixture [51]. The PCR conditions included enzyme activation at 50°C for 2 min and 95°C for 10 min, followed by 40 cycles of denaturation at 95°C for 15 sec and annealing/extension at 60°C for 1 min. To verify that a single product had been amplified, a dissociation curve was generated at the end of each PCR cycle using the software provided by the Applied Biosystems 7300 Real-Time PCR System (version 1.4). The relative expression of each gene was normalized to that of the ACT1 gene (ΔCt, where the Ct (cycle threshold) is defined as the number of cycles required for the fluorescent signal to cross the defined threshold). In addition, the amplification efficiency of each primer pair was tested by using two-fold serial dilutions of the templates, as suggested by ABI. Finally, the mRNA levels of the candidate genes were compared using a paired t-test.
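The ΔCt normalization described above amounts to one line of arithmetic per well. In the sketch below the Ct values are hypothetical triplicates, the paired t-test mirrors the comparison used in this work, and ~100% amplification efficiency is assumed for both primer pairs.

from scipy.stats import ttest_rel

def relative_expression(ct_gene, ct_act1):
    # Expression relative to ACT1: 2 ** -(Ct_gene - Ct_ACT1)
    return 2.0 ** -(ct_gene - ct_act1)

wt = [relative_expression(ct, 18.0) for ct in (24.1, 24.3, 24.0)]
mut = [relative_expression(ct, 18.1) for ct in (22.9, 23.1, 23.0)]

t_stat, p_value = ttest_rel(wt, mut)
print(sum(mut) / sum(wt), p_value)   # fold change and paired t-test p-value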
The distribution of TFBSs is constrained by nucleosome occupancy

To understand the differences in the selection constraints due to nucleosome occupancy of the regulatory sequences in yeast promoters, we first compared the distribution of TFBSs in orthologous genes of Saccharomycetaceae and Saccharomyces specific genes at NRs and NDRs. For this task, we downloaded the 2,152 genes of S. cerevisiae that had orthologs in both C. glabrata and K. lactis from OrthoMCL-DB [41], with 23,605 TFBSs, and the 75 Saccharomyces specific genes, with 1,144 TFBSs (Table 1; see Materials and Methods for details). In Saccharomyces specific genes, the frequency of TFBSs was found to be higher in NRs than in NDRs; however, in orthologous genes, TFBSs were more frequent in NDRs (Table 2). The p-value of the two-sided χ²-test is ≤ 0.05, which indicates a significant association between TFBSs and nucleosome occupancy, rather than random expectation (see Materials and Methods). These results suggest that young genes found only in the descendent S. cerevisiae species exhibit more TFBS variation and that their TFBSs frequently occur in NRs, indicating a possible source of the variability in their regulatory sequences. To verify the evolutionary tendency of TFBSs with respect to nucleosome occupancy, we examined the distribution of TFBSs in the promoters of duplicate gene pairs at NRs and NDRs. Our results (one-sided two-sample proportion test; p-value < 10⁻⁴⁰) indicated that the duplicate pairs that have undergone relaxation of the selection constraint [42,[52][53][54] also exhibited more TFBS variation at NRs than at NDRs (Table 2).

Comparing the evolution rate of TFBSs at NRs and NDRs

Previous studies analyzed the dependence of nucleotide substitution rates in the yeast genome by comparing their positions on a map of nucleosome locations [9,25]. A relative difference (about 10%) in substitution rates between the NDR and the equidistant centre point of nucleosomal DNA (dyad) was reported in Washietl et al. [25]. In this study, by determining the minimum parsimony of nucleotides at each position (see Materials and Methods), we analyzed the impact of nucleosome occupancy on the evolution rate of TFBSs in sensu stricto yeasts (Figure 1g and 1h). We only considered alignments of the sequences available in all five sensu stricto species, i.e., we excluded regions containing gaps in the alignment. Our dataset contained 21,930 TFBSs. This analysis was performed separately on the TFBSs at NRs and at NDRs. Though our evolutionary rate data were broadly scattered, the median evolution rate of the TFBSs in our dataset (Figure 2) is significantly higher at NRs (0.45) than at NDRs (0.37) according to the Wilcoxon-Mann-Whitney U two-sample test (p-value = 1.61×10⁻³²). Nevertheless, experimental errors in determining nucleosome positions and in TFBS prediction may be the source of the broad scatter in the data and could bias our result.

Figure 2 legend: Evolution rate of TFBSs conserved in sensu stricto yeast species at NRs and NDRs using the minimum parsimony method. The evolution rate of TFBSs in the sensu stricto species was found to be higher at NRs than at NDRs (Wilcoxon-Mann-Whitney U two-sample test, p-value = 1.61×10⁻³²).

TFBS gain and loss events in NDRs show a higher possibility of altering gene expression

To evaluate the impact of TFBS changes at NRs and NDRs on gene expression, we randomly selected six TFBSs that were known to have experienced gain or loss events [48] from NDRs and NRs in S. cerevisiae promoters for site-directed mutagenesis. The TFBSs corresponding to gain or loss events were removed from the laboratory strain (Additional file 2 Table S2). After this, we measured the expression changes in mutant/wild type strains using quantitative PCR (real-time PCR). Of the six mutagenesis cases at each of NRs and NDRs (t-test p-value < 0.05), significant expression changes between the mutant and wild type strains were found for three TFBSs at NDRs (50%), whereas only one out of six TFBSs at NRs demonstrated an expression difference (Table 3). These results indicate that TFBS gain or loss events at NDRs may have a higher probability of causing expression differences than those at NRs.

Discussion

Evolutionary analysis of promoter regions has provided important insights into the regulatory process and the properties of TFBS motifs [30][31][32]48]. Yet, the current understanding of TFBS evolution is limited, especially in deciphering the extent of the contributions from other DNA-binding factors such as nucleosomes, chromatin remodelers and chromatin modifiers.
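For reference, the rank test used for the rate comparison above amounts to the following; the per-TFBS rates shown are hypothetical.

from scipy.stats import mannwhitneyu

rates_nr = [0.52, 0.40, 0.61, 0.45, 0.48, 0.39, 0.55]
rates_ndr = [0.31, 0.42, 0.35, 0.28, 0.40, 0.33, 0.37]

u_stat, p_value = mannwhitneyu(rates_nr, rates_ndr, alternative="greater")
print(u_stat, p_value)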
While there are several theories highlighting the influence of chromatin architecture, specifically the nucleosome landscape, on the molecular evolution of genomic DNA [9,25], not many studies focus on the role played by nucleosomes in TFBS evolution [34,55,56]. Further, studies have only partially resolved the effect of nucleosome arrangement patterns on transcription [18,57]. Our goal here is to comprehensively elucidate the evolution of TFBSs due to the constraints on sequence structure affected by nucleosome positioning in sensu stricto yeasts. We have conducted a detailed evolutionary analysis of TFBSs with respect to nucleosome occupancy by taking advantage of a recently published nucleosome map in S. cerevisiae [11]. Our analysis has uncovered TFBS evolutionary changes in the context of nucleosome occupancy from different perspectives. Our results suggest that the evolution of TFBSs in yeast species has a noteworthy relationship with the nucleosome organization encoded in promoter sequences on a genome-wide scale. We found that TFBSs in orthologous genes (shared in Saccharomycetaceae) were frequently located at NDRs, while TFBSs in younger Saccharomyces specific genes were dominant at NRs. Furthermore, genes that have undergone duplication are known to be under lower purifying (stabilizing) selection [54,58]. In addition, promoters near duplicate gene pairs are also known to have increased substitution rates, indicating relaxation of selection constraints [53]. According to our results, the TFBSs at NRs in duplicated genes exhibited more variation in terms of their occurrence frequency than those at NDRs. Consistently, the expression divergence of duplicate genes confirms rapid evolution, which could be attributed to cis-changes, specifically to the variation of TFBSs [42,59]. These results are also concordant with our findings for TFBSs in ancestral and young gene sets, reinforcing the possibility of a difference in selection across NRs and NDRs. A possible source of this difference could be ascribed to the impinging of DNA repair mechanisms by nucleosomes [60][61][62][63]. This is reflected by the higher mutation rates at NRs than at linker regions, which are depleted of nucleosomes [9,25], and could conceivably explain the frequent occurrence of novel TFBSs in these regions. In addition, a recent study has suggested that natural selection acts to maintain the genome-wide signature of nucleosome formation [64]. This study also provided evidence that selection to conserve chromatin structure contributes significantly to driving mutational bias at both coding and non-coding regions. Most importantly, the above results reveal the significance of conglomerate analysis of regulation and promoter nucleosome status in explaining regulatory evolution [55]. The availability of whole genome nucleosome maps has facilitated research on the regulatory process. As a result, some studies have hinted that competition and co-operation between nucleosomes and transcription factors may contribute to the regulatory effects on expression divergence [26,[65][66][67]. Since regulatory sequences are believed to play an important role in molecular evolution [48,68,69], we explored the evolutionary significance of the dominance of TFBSs in young genes located at NRs by comparing the evolution rates of TFBSs at NRs and NDRs. Our results demonstrated that, at NRs, TFBS evolutionary rates were significantly higher than at NDRs, although the data seem to be broadly scattered.
This indicates the possibility that NRs, which can accommodate more TFBS variation, may contain binding site sequences under lower purifying selection relative to NDRs. The finding is also congruent with the recent work of Babbitt [70], which indicated that nonfunctional TFBSs can escape purifying selection when they occur in regions of high nucleosome occupancy. It is likely that the weaker selection constraint on TFBSs at NRs plays an important role in the creation of novel binding sites via stochastic mutational processes [36,71]. Furthermore, the weaker selection constraint at NRs can probably be explained by the fact that DNA in nucleosomes is less accessible to DNA binding proteins [72]. Functional constraint could be one of the major explanations for the different evolution rates in NRs and NDRs. Therefore, it is crucial to investigate whether there is a difference in the impact of TFBS changes on expression at NRs and NDRs. We provided indirect evidence via TFBS modification and expression analysis (Table 3 and Additional file 2 Table S2) and revealed that a larger fraction of swapped mutants at NDRs led to expression shifts than swapped mutants at NRs. Although our data are limited, previous studies in several species, including yeast, have also indicated the role played by nucleosomes in regulating gene expression [18,26,57]. These results suggest the possibility of a difference in selection constraint on TFBSs at NRs and NDRs.

Conclusions

Recent studies have indicated that nucleosome organization broadly influences regulatory evolution in yeast [27,55]. For example, in the evolution of within-species cis-regulatory elements, it is known that polymorphisms in the regulatory sequences are interrelated with changes in nucleosome occupancy [73,74]. The data from our current analysis show that NRs can contain more TFBS variations, which in turn reflects the importance of TFBSs located in NDRs [75]. We confirmed the difference in selection constraint at NRs and NDRs by measuring the evolutionary rates of TFBSs at these regions. Moreover, observations reported in the literature support our findings by demonstrating the differences in the accessibility of DNA to its binding proteins inside and outside nucleosome occupied regions [60,62,72]. To ensure the quality of our data, we took several precautions in data selection and controlled for possible sources of bias in our estimates. Thus, the current analysis of the effect of nucleosome positions on the evolution of TFBSs can be considered reliable. Though our study reveals an important feature of TFBS regulatory evolution, a more direct analysis would be required to address the nature of the selection that drives the distinction in evolutionary rates.

Additional material

Additional file 1: Table S1. The list of orthologs and S. cerevisiae specific genes used in this study.

Additional file 2: Table S2. The details of TFBSs (that had undergone gain or loss events) used in the site-directed mutagenesis experiment, along with their promoter and target gene information.
The Dirichlet and Neumann and Dirichlet-to-Neumann problems in quadrature, double quadrature, and non-quadrature domains

We demonstrate that solving the classical problems mentioned in the title on quadrature domains when the given boundary data is rational is as simple as the method of partial fractions. A by-product of our considerations will be a simple proof that the Dirichlet-to-Neumann map on a double quadrature domain sends rational functions on the boundary to rational functions on the boundary. The results extend to more general domains if rational functions are replaced by the class of functions on the boundary that extend meromorphically to the double.

Introduction

It has recently come to light that double quadrature domains in the plane exist in great abundance and that they can be viewed as replacements for the unit disc when it comes to questions of computational complexity in conformal mapping and potential theory. They are especially useful in the multiply connected setting. The "improved Riemann mapping theorem" described in [11] and further expounded upon in [9] allows one to map domains in the plane, even multiply connected domains, to nearby double quadrature domains, thus providing the means to pull back objects on double quadrature domains to the original domain. Double quadrature domains share many of the beautiful and simple properties of the unit disc. The purpose of this paper is to explain methods for solving some of the classical problems of potential theory in quadrature domains that are every bit as simple as similar problems on the unit disc. In fact, the methods will be shown to be analogous to the method of partial fractions from freshman calculus. In this paper, we will consider problems in potential theory with rational boundary data in two special types of domains. The first type will be bounded quadrature domains with respect to area measure without cusps in the boundary. We shall call such domains area quadrature domains. The second type will be area quadrature domains which are also quadrature domains with respect to boundary arc length. We shall call such domains double quadrature domains. We refer the reader to [11] for the precise definitions of these domains and for a summary of their basic properties. The theory of area quadrature domains was pioneered by Aharonov and Shapiro in [1] in the simply connected setting and by Gustafsson [16] in the multiply connected setting. Quadrature domains with respect to boundary arc length were studied by Shapiro and Ullemar in the simply connected setting in [20] and by Gustafsson in [17] in the multiply connected setting. Good references for information about quadrature domains, their usefulness, and the history of the subject are the book [14], the papers [18] and [12] therein, and the book [19]. We now list some of the key properties of area and double quadrature domains that we will need in what follows. To begin, assume that Ω is an area quadrature domain. Then Ω has a boundary consisting of finitely many non-intersecting C^∞-smooth real analytic curves which are, in fact, real algebraic. Gustafsson [16] (after Aharonov and Shapiro [1] in the simply connected case) showed that the Schwarz function S(z) associated to Ω extends meromorphically to the double of Ω. (A meromorphic function h on Ω that extends continuously up to the boundary extends meromorphically to the double if and only if there is a meromorphic function H on Ω that also extends continuously to the boundary such that h = H on the boundary.)
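Before developing these properties, we recall the one completely explicit instance of the extension phenomenon (included here only as an illustration, using classical formulas): the Schwarz function of a circle.

% For the disc |z - c| < r, boundary points satisfy
% (z - c)(\bar z - \bar c) = r^2, so
\[
  \bar z \;=\; S(z) \;=\; \bar c + \frac{r^2}{z - c},
\]
% a function meromorphic on the entire plane with a single simple pole
% at c, consistent with the disc being the simplest area quadrature domain.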
Since

S(z) = z̄ on bΩ, (1.1)

it follows that z extends meromorphically to the double (by defining the extension to be S(z) on the backside) and S(z) extends meromorphically to the double (by defining the extension to be z̄ on the backside). Gustafsson showed that the meromorphic extensions of the two functions z and S(z) to the double form a primitive pair for the double, meaning that they generate the field of meromorphic functions on the double. (See Farkas and Kra [15] for the basic facts about primitive pairs and the field of meromorphic functions on the double.) Identity (1.1) allows us to see that a rational function R(z, z̄) of z and z̄ is equal to R(z, S(z)) on the boundary, and this yields an extension of the rational function to the double as a meromorphic function. Conversely, if G is a meromorphic function on the double, then G is a rational combination of the extensions of z and S(z). Since S(z) = z̄ on the boundary, we see that the restriction of G to the boundary is rational. We have seen, therefore, that the field of rational functions R(z, z̄) on the boundary is precisely the set of meromorphic functions on the double restricted to the boundary. Consequently, if the rational function does not blow up on the boundary, then it is C^∞-smooth on the boundary. Let R(bΩ) denote the set of rational functions on the boundary without singularities on the boundary, and let R(Ω) denote the set of meromorphic functions on Ω obtained by extending functions in R(bΩ) to Ω via the formula R(z, z̄) = R(z, S(z)). The Szegő kernel S(z, w) associated to the area quadrature domain Ω extends holomorphically in z and anti-holomorphically in w to an open set containing Ω̄ × Ω̄ minus the boundary diagonal. The Garabedian kernel L(z, w) extends holomorphically in z and holomorphically in w to an open set containing Ω̄ × Ω̄ minus the diagonal. It has a simple pole in the z variable at z = w when w ∈ Ω is held fixed; the residue in z at w is 1/(2π). The Szegő kernel and Garabedian kernel are non-vanishing in simply connected domains, but on an n-connected domain, the Szegő kernel S(z, w) has n − 1 zeroes in z on Ω for each fixed w in Ω. The Garabedian kernel L(z, w), however, is non-zero for z ∈ Ω and w ∈ Ω with z ≠ w, even in the multiply connected case. If a ∈ Ω, then neither S(z, a) nor L(z, a) vanishes for z in the boundary. Let S^0(z, w) denote S(z, w) and let S^m(z, w) denote (∂/∂w̄)^m S(z, w). Similarly, let L^0(z, w) denote L(z, w) and let L^m(z, w) denote (∂/∂w)^m L(z, w). The Szegő span is the complex linear span S of all functions h(z) of the form h(z) = S^m(z, a) as a ranges over Ω and m ranges over all non-negative integers. The Garabedian span is the complex linear span L of all functions H(z) of the form H(z) = L^m(z, a) as a ranges over Ω and m ranges over all non-negative integers. The Szegő plus Garabedian span is the set S + L of all sums h + H where h is in the Szegő span and H is in the Garabedian span. We will often shorten our notation by writing S^m_a(z) = S^m(z, a) and L^m_a(z) = L^m(z, a), and we emphasize here that the unadorned S(z) will always stand for the Schwarz function. Note that, because L(z, a) has a singular part that is a non-zero constant times (z − a)^(−1), the singular part of L^m_a is a non-zero constant times (z − a)^(−(m+1)).
To have a proper feeling for the Szegő and Garabedian spans, we mention here that the Szegő span is a dense subspace of the L²-Hardy space and the Garabedian span is a dense subspace of the orthogonal complement to the L²-Hardy space in L²(bΩ). Hence S + L is dense in L² of the boundary. Let A^∞(Ω) denote the space of holomorphic functions on Ω in C^∞(Ω̄). The Szegő span is also a dense subspace of A^∞(Ω), and the Garabedian span is a dense subspace of the orthogonal complement to the L²-Hardy space in the topology of C^∞(bΩ). Hence S + L is dense in C^∞ on the boundary. (See [3] and [4] for proofs of these facts in the more general smooth domain case. The density of the space of rational functions on the boundary of an area quadrature domain in C^∞ of the boundary is also proved in the last section of [9].) Let T(z) denote the complex unit tangent vector function defined on the boundary of Ω and pointing in the direction of the standard orientation of the boundary. A very important identity at the heart of much of this paper is the relationship

\overline{S(z, a)} = (1/i) L(z, a) T(z) (1.2)

between the Szegő kernel and the Garabedian kernel, which holds for z ∈ bΩ and a ∈ Ω. We may differentiate this identity with respect to a to obtain

\overline{S^m(z, a)} = (1/i) L^m(z, a) T(z). (1.3)

If Ω is a double quadrature domain, then all the properties above hold, plus the property (proved by Gustafsson in [17]) that T(z) extends to the double as a meromorphic function. Consequently, identities (1.2) and (1.3) show that the functions S^m_a and L^m_a also extend to the double for each a in Ω and m ≥ 0. With these preliminaries behind us, we can state our main results. We call the first result the Basic Decomposition.

Theorem 1.1. On an area quadrature domain Ω,

S_a · R(bΩ) = S + L,

where the function spaces on the right are understood to be restricted to the boundary. On a double quadrature domain,

R(bΩ) = S + L.

We will prove this theorem in the next section, where it will be seen that the coefficients that appear in the decompositions are determined by the principal parts of two meromorphic functions. We remark here that, because the functions on the left hand side of the equalities in Theorem 1.1 extend meromorphically to Ω via the R(z, z̄) = R(z, S(z)) substitution, and the functions on the right also extend, we may also state the following result.

Theorem 1.2. On an area quadrature domain Ω, S_a · R(Ω) = S + L. On a double quadrature domain, R(Ω) = S + L.

We will show how the decomposition in Theorem 1.1 can be used to solve the Dirichlet problem in §3. We will consider the Dirichlet-to-Neumann map in §4, and finally the Neumann problem in §6. Note that functions in S are holomorphic on Ω and functions in L have poles. Hence, it follows as a corollary to Theorem 1.2 that the class of holomorphic functions on a double quadrature domain which extend meromorphically to the double and which have no singularities on the boundary is exactly equal to the Szegő span. Since S + L is an orthogonal sum, the following theorem is an easy consequence of Theorem 1.1.

Theorem 1.3. On an area quadrature domain, the Szegő projection of a function in S_a · R(bΩ) is its component in the Szegő span S; on a double quadrature domain, the Szegő projection consequently maps R(bΩ) into S.

The results of the next section will therefore show that the Szegő projection of a rational function can be computed via rather straightforward algebra on quadrature domains. Before we start proving the decompositions, we mention that related results hold for the Bergman kernel and span. Let B(z, w) denote the Bergman kernel associated to a bounded area quadrature domain and let Λ(z, w) denote the complementary kernel (or conjugate kernel) to the Bergman kernel (see [4, p. 134] for the definition and basic properties of Λ(z, w)). We may define the Bergman span B and the complementary kernel span Λ exactly as we defined the Szegő span and Garabedian span.
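Before turning to the Bergman analogues, it is worth recording, as an illustration of ours using only classical formulas, how the objects above look on the unit disc, the simplest double quadrature domain:

\[
  S(z,a) = \frac{1}{2\pi(1 - z\bar a)}, \qquad
  L(z,a) = \frac{1}{2\pi(z - a)}, \qquad
  T(z) = iz \quad (|z| = 1).
\]

Identity (1.2) can then be checked by hand: for |z| = 1 we have \bar z = 1/z, so

\[
  \overline{S(z,a)} = \frac{1}{2\pi(1 - \bar z a)}
                    = \frac{z}{2\pi(z - a)}
                    = \frac{1}{i}\,L(z,a)\,T(z).
\]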
Let R′(Ω) denote the set of meromorphic functions on Ω that are derivatives of functions in R(Ω).

Theorem 1.4. On a simply connected area quadrature domain Ω, R′(Ω) = B + Λ. On a multiply connected area quadrature domain, R′(Ω) is comprised of the functions in B + Λ with vanishing periods.

We will explain in §5 why a major part of the proof of Theorem 1.4 should be attributed to Björn Gustafsson. We remark that, since all non-zero functions in the space Λ have poles in Ω, a corollary of Theorem 1.4 is that, on an area quadrature domain, a function in R′(Ω) without singularities in Ω must be in the Bergman span. This result will allow us to characterize the image of the rational functions under the Dirichlet-to-Neumann map of an area quadrature domain. Call the map that takes Dirichlet problem boundary data to the normal derivative of the solution to the Dirichlet problem the D-to-N map.

Theorem 1.5. On an area quadrature domain, the D-to-N map takes R(bΩ) into BT + \overline{BT}. On a double quadrature domain, BT + \overline{BT} is contained in R(bΩ), and so the D-to-N map takes R(bΩ) into itself.

More can be said in case the quadrature domains are simply connected.

Theorem 1.6. On a simply connected area quadrature domain, the D-to-N map takes R(bΩ) onto BT + \overline{BT}.

We will show that, on an area quadrature domain, the decomposition BT + \overline{BT} in Theorem 1.5 uniquely determines the functions in the Bergman span appearing in the sum, as made precise in the following theorem.

Theorem 1.7. On an area quadrature domain, functions in BT + \overline{BT} are represented as κ₁T + \overline{κ₂T} by uniquely determined elements κ₁ and κ₂ in the Bergman span.

Results like this will allow us to consider a one-sided inverse to the D-to-N map in the setting of Theorem 1.6 that is defined in rather explicit terms (see Theorem 5.1 in §5).

The basic decomposition in area quadrature domains

Suppose that Ω is a bounded area quadrature domain and suppose φ(z) = R(z, z̄) is a rational function of z and z̄ without singularities on bΩ. We will now explain how to produce a finite decomposition of φ on the boundary in terms of the Szegő kernel and the Garabedian kernel that can be thought of as an analogue of a "partial fractions decomposition" on the boundary. The decomposition will allow us to solve the Dirichlet problem with rational boundary data in finite terms. Pick a point a in Ω. Notice that on the boundary,

S_a(z) φ(z) = S_a(z) R(z, S(z)),

and this defines a meromorphic extension G of S_a φ to Ω. We now subtract off the unique linear combination λ of functions of the form L^m_{b_k} that removes the poles of the meromorphic function G on Ω. We will show that h := G − λ is a function in the Szegő span S. Indeed, if we pair h with a function g in the dense subset of the Hardy space consisting of functions in A^∞(Ω), and note that functions of the form L^m_{b_k} are orthogonal to the Hardy space, we may use the identity S(z) = z̄ and identity (1.2) to see that

⟨g, h⟩ = ∫_{bΩ} g \overline{S_a φ} ds = (1/i) ∫_{bΩ} g(z) L_a(z) R̄(S(z), z) dz

(here R̄ denotes R with conjugated coefficients, so that \overline{R(z, z̄)} = R̄(z̄, z) = R̄(S(z), z) on bΩ, and we have used T(z) ds = dz), and the residue theorem shows that this last integral yields a finite linear combination of values of g and its derivatives at finitely many points. Hence, h is equal to the linear combination of the functions S^m_{a_k} which would have the same effect when paired with g. We have shown that there are finitely many points a_n and b_n in Ω and positive integers N_S, M_S, N_L, and M_L such that

S_a(z) R(z, z̄) = Σ_{n=1}^{N_S} Σ_{m=0}^{M_S} A_{nm} S^m(z, a_n) + Σ_{n=1}^{N_L} Σ_{m=0}^{M_L} B_{nm} L^m(z, b_n) (2.1)

on the boundary of Ω. We remark here that, just as in the method of undetermined coefficients, the coefficients and points in the decomposition (2.1) are uniquely determined by the principal parts of two meromorphic functions.
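As a toy instance of the decomposition (2.1) (our illustration, not taken from the paper), take Ω to be the unit disc and a = 0, so that S_0(z) = S(z, 0) = 1/(2π) is constant, and take boundary data R(z, z̄) = \bar z^2.

% On |z| = 1 we have \bar z^2 = 1/z^2, while
\[
  L^1(z,0) = \frac{\partial}{\partial w}\bigg|_{w=0} \frac{1}{2\pi(z-w)}
           = \frac{1}{2\pi z^2},
\]
% so S_0(z)\,\bar z^2 = (1/2\pi)(1/z^2) = L^1(z,0) on the boundary:
% a decomposition consisting of a single Garabedian-span term and
% no Szegő-span contribution.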
Indeed, the coefficients B_nm and the points b_n were chosen so that the principal parts of the sum λ match the principal parts of S_a(z)R(z, S(z)). If we multiply the decomposition by T(z) and use identities (1.2) and (1.3), and note that R(z, z̄) = R(\overline{S(z)}, z̄) on the boundary, we obtain, after conjugation, an analogous identity, and so we see that the coefficients A_nm and the points a_n are determined by the principal parts of L_a(z)R(S(z), z) in Ω.

We have shown that S_a times a rational function is in the Szegő plus Garabedian span when restricted to the boundary. To finish the proof of Theorem 1.1 for area quadrature domains, we need to show that a function in S + L divided by S_a, when restricted to the boundary, is rational and without singularities on the boundary. If we divide a function in S + L by S_a, we obtain a sum of functions of the form S^m_{a_n}/S_a and L^m_{a_n}/S_a. Such functions extend to the double of Ω as meromorphic functions because identities (1.2) and (1.3) show that they are equal to the conjugates of L^m_{a_n}/L_a and S^m_{a_n}/L_a, respectively, on the boundary. Since these functions extend to the double and do not have singularities on the boundary, they are therefore rational combinations of z and S(z), which, when restricted to the boundary, are rational functions of z and z̄ without singularities on the boundary. This completes the proof of the part of Theorem 1.1 about area quadrature domains. We remark that the space of functions on the boundary given by S_a times a rational function is easily seen to be independent of the point a since quotients of the form S_a/S_b extend meromorphically to the double, and are therefore rational on the boundary. On a double quadrature domain, S_a extends to the double as a meromorphic function and has no singularities on the boundary (see [11]). Therefore S_a R = R and the proof of Theorem 1.1 is complete.

3. Using the decomposition to solve the Dirichlet problem

We first assume that Ω is a simply connected area quadrature domain. We will now show how the decomposition (2.1) produces a simple and explicit solution to the Dirichlet problem with rational boundary data R(z, z̄). Recall that S_a and L_a are non-vanishing on Ω and extend holomorphically past the boundary in case Ω is simply connected. Notice that if we divide the decomposition (2.1) by S_a, then we decompose our boundary data R(z, z̄) into a finite sum of functions of the form S^m_b/S_a and L^m_b/S_a. The functions S^m_b/S_a extend holomorphically to Ω and are smooth up to the boundary. Identities (1.2) and (1.3) reveal that L^m_b/S_a is equal to the conjugate of S^m_b/L_a on the boundary, and therefore these functions extend antiholomorphically to Ω and are smooth up to the boundary. (Note that the simple pole of L_a at a gives rise to a zero of the quotient at a.) Consequently, we can read off the conclusion of the following theorem.

Theorem 3.1. The solution to the Dirichlet problem on a simply connected area quadrature domain with rational boundary data R(z, z̄) can be read off from the basic decomposition of S_a(z)R(z, z̄) and is of the form h + \overline{H}, where both h and H are sums of quotients that extend holomorphically to Ω and meromorphically to the double. In fact, h is a quotient of the form σ_1/S_a and H is a quotient of the form σ_2/L_a, where σ_j, j = 1, 2, are functions in the Szegő span.

Note that meromorphic functions on the double are generated by the meromorphic extensions of z and S(z) to the double, and that S(z) is algebraic.
Thus, we have given an alternate way of looking at Ebenfelt's theorem [13] about the algebraicity of the solution to the Dirichlet problem with rational boundary data on simply connected area quadrature domains (see also [10] for more about this fascinating subject).

We may repeat much of the same reasoning in case Ω is an n-connected area quadrature domain, taking into account that S_a has n−1 zeroes. We may choose the point a so that the n−1 zeroes of S_a are distinct and simple (see [4, p. 105]). Let a_1, . . . , a_{n−1} denote these zeroes. Let G(z, w) denote the classical Green's function associated to Ω and write G^0_w(z) := G(z, w), G^m_w(z) := (∂^m/∂w^m) G(z, w), and Ḡ^m_w(z) := (∂^m/∂w̄^m) G(z, w). Note that, as a function of z, G^m_b(z) has a singular part at b ∈ Ω that is a nonzero constant times 1/(z−b)^m, is harmonic in z on Ω−{b}, extends continuously to the boundary, and vanishes on the boundary. Similarly, Ḡ^m_b(z) has a singular part that is a non-zero constant times the conjugate of 1/(z−b)^m, is harmonic in z on Ω−{b}, extends continuously to the boundary, and vanishes on the boundary.

To solve the Dirichlet problem on Ω, given rational boundary data R(z, z̄), as in the simply connected case, the decomposition (2.1) yields h = σ_1/S_a and H = σ_2/L_a, where σ_1 and σ_2 are in the Szegő span. Note that h and H extend smoothly to the boundary, that H is holomorphic on Ω because L_a is non-vanishing on Ω−{a} and the pole of L_a creates a zero of H at a, but h is perhaps only meromorphic since it may have simple poles at some or all of the zeroes of S_a. However, by subtracting off appropriate constants times G^1_{a_k} for each of the zeroes a_k to remove the simple poles, noting that these functions vanish on the boundary, we obtain the harmonic extension of our boundary data to Ω in the form

\[ h + \overline{H} + \sum_{k=1}^{n-1} c_k\, G^1_{a_k}, \tag{3.1} \]

where h and H extend meromorphically to the double, and hence are rational combinations of z and the Schwarz function S(z). Consequently, we have an alternate way to that given in [9] to see that the solution to the Dirichlet problem with rational boundary data is algebraic, modulo an n−1 dimensional subspace.

4. The Dirichlet-to-Neumann map

We now continue the line of thought of the last section and consider the implications for the D-to-N map for rational boundary data on an area quadrature domain Ω. It is shown in [4, p. 134-135] that, if w ∈ Ω is held fixed, the normal derivative (∂/∂n_z) of G^0_w(z) with respect to z is given by

We remark here that it is also shown there that this expression can be further manipulated to yield two more expressions for the same normal derivative,

Although these expressions are shorter and simpler, we will have reason to prefer the longer form. Hence, for m ≥ 1, the normal derivative of G^m_w(z) is given by

and the alternative expressions above yield

It is shown in [4, p. 77, 134-135] that the normal derivative of the solution to the Dirichlet problem given by equation (3.1) is

But, for w ∈ Ω, the Bergman kernel B(z, w) is related to the Green's function via

\[ B(z, w) \;=\; -\frac{2}{\pi}\,\frac{\partial^2 G(z, w)}{\partial z\,\partial \bar{w}}, \]

and the complementary kernel Λ(z, w) to the Bergman kernel is given by definition as

\[ \Lambda(z, w) \;=\; -\frac{2}{\pi}\,\frac{\partial^2 G(z, w)}{\partial z\,\partial w}. \]

(See [4, p. 134] for these identities and the basic properties of Λ(z, w).) Consequently, the last formula for the normal derivative can be rewritten as

It is shown in [7] that, on an area quadrature domain, if a meromorphic function g extends meromorphically to the double, then g′ also extends meromorphically to the double. It is shown in [7] that the Bergman kernel extends meromorphically to the double on an area quadrature domain.
We will prove momentarily that Λ(z, a_k) also extends meromorphically to the double as a function of z. Hence, we have expressed the normal derivative of the solution to the Dirichlet problem as gT + \overline{GT}, where g and G are meromorphic functions on Ω that extend meromorphically to the double, and are consequently rational combinations of z and S(z). When we restrict to the boundary, we conclude that g and G are rational functions of z and z̄ on the boundary. It is shown in [7] that, on an area quadrature domain, the function T² extends to the double as a meromorphic function. Hence, the Neumann boundary data of the solution to the Dirichlet problem with rational boundary data is a sum of a rational function times the square root of a rational function plus the conjugate of such an expression. On a double quadrature domain, the function T itself extends meromorphically to the double, and in this case, we may state that the D-to-N map sends rational functions of z and z̄ to rational functions of z and z̄. In the next section, we consider which rational functions appear in the range of the D-to-N map in this manner.

We now give the promised proof that Λ(z, a) extends meromorphically in z to the double for fixed a ∈ Ω when Ω is an area quadrature domain. The Bergman kernel B(z, a) is related to Λ(z, a) via the identity

\[ \Lambda(z, a) \;=\; -\,\overline{B(z, a)\, T(z)^2} \tag{4.3} \]

for z in the boundary. Since Λ(z, a) is equal to minus the conjugate of the quantity B(z, a) times T(z)² on the boundary, and since both of these functions extend meromorphically to the double, and are therefore rational functions of z and S(z) = z̄ on the boundary, it follows that Λ(z, a) extends meromorphically to the double and is equal to a rational combination of z and z̄ on the boundary.

The identity (4.3) or the simpler expressions for the normal derivatives of the Green's functions could be used to simplify (4.2) to read

or even

but we prefer (4.2) because the poles of the Λ(z, a_k) terms exactly cancel the poles of h′ the way we have chosen the coefficients, and therefore the normal derivative is in fact expressed as gT + \overline{GT}, where g and G are holomorphic functions on Ω that extend smoothly to the boundary, and that extend meromorphically to the double. We will have more to say on this subject later when we prove Theorem 1.5.

An interesting consequence of equations (4.4) and (4.5) is that they imply that certain period matrices are non-singular.

Theorem 4.1. Suppose that the zeroes a_1, . . . , a_{n−1} of the Szegő kernel associated to a point a in an n-connected area quadrature domain Ω are simple. Then the matrix of periods associated to the functions K(z, a_k), k = 1, . . . , n−1, is non-singular. So is the matrix of periods associated to the functions Λ(z, a_k), k = 1, . . . , n−1.

We remark that it was proved in [8] that, if Ω is n-connected, then there exist n−1 points b_1, b_2, . . . , b_{n−1} in Ω such that the period matrix of K(z, b_k) is non-singular. It is also interesting to note that, because of the way the zeroes of the Szegő kernel and the periods of the functions in Theorem 4.1 transform under conformal changes of variables, and because smoothly bounded n-connected domains are conformally equivalent to an area quadrature domain via Gustafsson's theorem [16], Theorem 4.1 can be seen to hold for general bounded smooth n-connected domains as well.
To prove Theorem 4.1, note that, because the rational functions are dense in C∞(bΩ), we may approximate in C∞(bΩ) a harmonic measure function ω_k, which is harmonic on Ω, equal to one on the k-th boundary curve of the n−1 inner boundary curves, and equal to zero on the other boundary curves. The normal derivative of the solution to the Dirichlet problem with this rational boundary data is given by equation (4.4), and it can be made as close in C∞(bΩ) to the normal derivative −iF′_k T of ω_k as desired (see [4, p. 87] for the calculation of this normal derivative). Since the periods of F′_k, k = 1, . . . , n−1, are well known to be linearly independent, and since the periods of the functions given by equation (4.4) are linear combinations of the periods of Λ(z, a_k), it follows that the periods of Λ(z, a_k) are independent. Identity (4.3) shows that the periods of B(z, a_k) are just minus the conjugates of the periods of Λ(z, a_k), and so it follows that the periods of B(z, a_k) are also independent. This completes the proof of Theorem 4.1.

5. Proof of Theorem 1.4

The proof of Theorem 1.4 follows a similar pattern to the arguments in the last section and is motivated by a result of Gustafsson stated as Lemma 4 in [6]. (In fact, there are two proofs of a closely related theorem given in [6] and a statement without proof of a converse that is relevant here. The proof we set out below is a third way of looking at this problem, and is shorter and simpler than the arguments given in [6], but it must be said that the meat of the argument is Gustafsson's idea.)

Suppose that Ω is an area quadrature domain and that h is a function in R(Ω). It follows that h′ has only finitely many poles in Ω, which are residue-free poles of order two or more. Note that the singular part of Λ(z, a) is equal to a non-zero constant times (z−a)^{−2}, and the singular part of Λ^m(z, a) is equal to a non-zero constant times (z−a)^{−(m+2)}. Hence, there is an element λ in the span Λ that has the same principal parts at each of the poles of h′. But such a function λ is given as the derivative (∂/∂z) of a finite sum φ of functions of the form G^m_{b_k}. Note that φ vanishes on the boundary and that h−φ also has removable singularities at the poles of h. Also note that, because h extends meromorphically to the double, there is a meromorphic function H on Ω that extends smoothly to the boundary (and without singularities on the boundary) such that h(z) = \overline{H(z)} for z ∈ bΩ. Now, given g in A∞(Ω), we may compute the inner product ⟨h′ − λ, g⟩_Ω as a boundary integral, and the residue theorem implies that this last integral is equal to the complex conjugate of a finite linear combination of values of g and its derivatives at finitely many points in Ω. There is an element κ in the Bergman span that has the same effect when paired with g in the L²(Ω) inner product. Hence, since A∞(Ω) is dense in the Bergman space, h′ − λ = κ, and we have proved that R′(Ω) ⊂ B + Λ.

To prove the reverse inclusion, we will need to define some terminology. Note that we may differentiate identity (4.3) with respect to ā to obtain the identity

\[ \Lambda^m(z, a) \;=\; -\,\overline{B^m(z, a)\, T(z)^2} \tag{5.1} \]

for z in the boundary (where the superscript m indicates derivatives of order m with respect to ā in the Bergman kernel and with respect to a in the Λ-kernel). Given a function g = κ + λ in B + Λ, we define the complementary function G to g to be the function gotten by conjugating all the constants in the linear combination, by changing terms of the form K^m_b in g to Λ^m_b in the complementary function, and terms of the form Λ^m_b to K^m_b.
In this way, we obtain a function G in B + Λ that satisfies g(z)T(z) = −\overline{G(z)T(z)} on the boundary. If γ is a curve in the boundary of Ω, then identity (5.1) shows that

\[ \int_\gamma K^m(z, a)\,dz \;=\; -\,\overline{\int_\gamma \Lambda^m(z, a)\,dz}, \]

and similarly for integrals of Λ^m(z, a) by taking conjugates. Hence, g and G satisfy

\[ \int_\gamma g(z)\,dz \;=\; -\,\overline{\int_\gamma G(z)\,dz}. \tag{5.2} \]

Hence, if a period of g vanishes, then so does the same period of G.

First, we will prove the reverse inclusion in case Ω is a simply connected area quadrature domain. Note that, in this case, elements of B + Λ have single-valued antiderivatives on Ω which are meromorphic on Ω because all the poles of elements of Λ are residue free and are of order two or more. Let h be such an antiderivative, and write h′ = κ + λ as we did above. Let G be the complementary function to h′ and let H be an antiderivative of G. Let γ_z denote a curve that starts at a boundary point b and moves along the boundary to another point z in the boundary. The formula (5.2) holds for the curve γ_z for our complementary functions g = h′ and G = H′. Hence, h(z) − h(b) is equal to the conjugate of −(H(z) − H(b)) on the boundary. This shows that h extends to the double, and the proof of the reverse inclusion is complete in the simply connected case.

We now turn to the multiply connected case. If κ + λ is an element of B + Λ with vanishing periods, then the periods of the complementary function are also zero, and we obtain two meromorphic functions h and H as we did above such that h′ = κ + λ and h′T = −\overline{H′T} on the boundary. Our task now is to show that we may adjust h and H by constants in order to make h = −\overline{H} on the boundary so that we may conclude that h extends to the double. Choose a point b in the outer boundary of Ω and adjust h and H by subtracting off the constants h(b) and H(b) from h and H so that h(b) = 0 and H(b) = 0. The calculation of the last paragraph shows h = −\overline{H} on the outer boundary. The key to seeing that this identity persists on the inner boundaries is that integrals from b to a point b′ on an inner boundary also agree because of the relationships between the kernels and the Green's function. Indeed, if γ is a curve in Ω that starts at b and goes to b′ and w ∈ Ω is not in γ, then it is well known that

\[ \int_\gamma \frac{\partial G}{\partial z}(z, w)\,dz \;=\; -\int_\gamma \frac{\partial G}{\partial \bar z}(z, w)\,d\bar z. \tag{5.3} \]

To see this, note that, if φ is real valued, then dφ = (∂φ/∂z) dz + (∂φ/∂z̄) dz̄, and the real part of the integral of 2(∂φ/∂z) dz along γ is φ(b′) − φ(b). Hence, since the Green's function is real and vanishes on the boundary, the real parts of the two integrals in (5.3) vanish, and therefore, since they are conjugates of each other, the identity follows. Now differentiating (5.3) with respect to w̄ and letting w = a yields that

\[ \int_\gamma K(z, a)\,dz \;=\; -\,\overline{\int_\gamma \Lambda(z, a)\,dz}. \]

(This is just a reformulation of a well known fact going back to Bergman and Schiffer about the vanishing of the β-periods of the meromorphic differentials obtained by extending K_a dz to the backside of the double as the conjugate of −Λ_a dz.) To continue, if we now choose such a curve γ from b to b′ that avoids points where h′ and H′ have singularities, this identity shows that

\[ \int_\gamma h′(z)\,dz \;=\; -\,\overline{\int_\gamma H′(z)\,dz}. \]

Consequently, the identity h = −\overline{H} extends to the inner boundary containing b′. We may conclude that h extends to the double as a meromorphic function.

Theorem 4.1 yields that if Ω is n-connected, then there exist n−1 points a_1, a_2, . . . , a_{n−1} in Ω such that the period matrix of K(z, a_k) is non-singular. Hence, given an element κ + λ of B + Λ, it is possible to subtract off a linear combination of the functions K(z, a_k) so as to make the periods vanish. Hence, Theorem 1.4 yields that κ + λ is equal to a function in R′(Ω) modulo a linear combination of K(z, a_k).
If we let B_{n−1} denote the complex linear span of the K(z, a_k), then we can state that

\[ B + Λ \;=\; R′(Ω) + B_{n-1}. \]

Next, we may use Theorem 1.4 to determine the image of R(bΩ) under the Dirichlet-to-Neumann map in an area quadrature domain Ω. Formula (4.2) combined with Theorem 1.4 shows that functions in the image are of the form κ_1 T + λ_1 T + \overline{κ_2 T} + \overline{λ_2 T}, where κ_1 and κ_2 are in B and λ_1 and λ_2 are in Λ. Identity (5.1) shows how to convert the term λ_1 T into a term of the form \overline{κ_3 T} where κ_3 ∈ B, and the term \overline{λ_2 T} into a term of the form κ_4 T where κ_4 ∈ B. Hence, when everything is combined, we obtain an expression in BT + \overline{BT}.

Next, we show that every element in BT + \overline{BT} is equal to the normal derivative of a harmonic function with rational boundary values in case Ω is a simply connected area quadrature domain. Indeed, given a function ψ = κ_1 T + \overline{κ_2 T} in this space, we may find holomorphic functions h_1 and h_2 on Ω such that −ih′_1 = κ_1 and −ih′_2 = κ_2, where, by Theorem 1.4, h_1 and h_2 are in R(Ω). We may now write

\[ ψ \;=\; -i\,h_1′\,T + \overline{-i\,h_2′\,T}. \]

Such a function is the normal derivative of the harmonic function with rational boundary data h_1 + \overline{h_2}.

Finally, we need to show that a representation of the form ψ = κ_1 T + \overline{κ_2 T} is unique. If such an expression were equal to zero on the boundary, then κ_1 T = −\overline{κ_2 T}, and the left hand side of this expression is orthogonal to holomorphic functions in L²(bΩ) and the right hand side is orthogonal to the conjugates of holomorphic functions in L²(bΩ). Such functions must be given by sums of F′_k T, where the F′_k are the well known holomorphic functions that arise via F′_k = 2(∂/∂z)ω_k, where the ω_k are the harmonic measure functions associated to the n−1 inner boundary curves (see [4, p. 80] for a proof of this result). Hence κ_1 = Σ_{k=1}^{n−1} c_k F′_k. We have just shown that κ_1 T is equal to the normal derivative of a harmonic function φ with rational boundary values. However, κ_1 T is now also seen to be the normal derivative of a linear combination ω of the ω_k, k = 1, . . . , n−1. Hence, φ and ω differ by a constant, and it follows that the boundary values of ω are in R(bΩ). But the functions in R(bΩ) are the boundary values of meromorphic functions of the form R(z, S(z)) and, since ω vanishes on the outer boundary, it follows that ω ≡ 0, and hence, that φ is constant, and hence, that κ_1 ≡ 0. Finally, it is an easy exercise to see that if a function in the Bergman span is identically zero, then all the coefficients c_nm in it must be zero. (Indeed, such a function would be orthogonal to the Bergman space, and hence orthogonal to all polynomials. However, pairing a polynomial with the function in the Bergman span would yield a non-trivial sum of values and derivatives of the polynomial at finitely many points in Ω, and it is easy to construct a polynomial that would make this sum non-zero, yielding a contradiction.) We have shown that the representation of a function in BT + \overline{BT} is uniquely determined.

The techniques of this section allow us to construct a one-sided inverse to the D-to-N map on rational functions in a simply connected area quadrature domain. Indeed, given a basic term like K^m_a in the Bergman span, we may express a meromorphic antiderivative h of iK^m_a on Ω via a path integral formula, so that −ih′ = K^m_a. The proof of Theorem 1.4 reveals that h is in R(Ω). Now, the normal derivative of the solution to the Dirichlet problem with boundary data h ∈ R(bΩ) is equal to K^m_a T. By this means, we may define a linear transformation L that maps K^m_a T to the boundary values of h.
The same procedure works for conjugates of terms of the form K^m_a T. Since the representation of functions in BT + \overline{BT} is unique, we obtain the operator L of the following theorem.

Theorem 5.1. Suppose Ω is a simply connected area quadrature domain. There is a natural linear transformation L which maps BT + \overline{BT} onto R(bΩ) such that the D-to-N map composed with L is the identity.

6. The Neumann problem

The Szegő projection can be used to solve the classical Neumann problem for the Laplacian in planar domains in much the same way that it was used above to solve the Dirichlet problem. This process is described in [4, p. 87]. On a double quadrature domain, both the Szegő kernel S_a and the Garabedian kernel L_a extend to the double and are therefore rational on the boundary. Also, the functions F′_j extend to the double (on area quadrature domains). If we combine these results with Theorem 20.1 in [4] and use the fact that the Szegő projection maps rational functions on the boundary to rational functions on the boundary, we obtain the following result,

where h and H are holomorphic functions on Ω such that h′ and H′ extend meromorphically to the double (and are therefore rational on the boundary) and the c_k are constants.

7. Non-quadrature domains

All of the results of this paper carry over to non-quadrature domains if we define our basic objects differently. Suppose Ω is a bounded n-connected domain with C∞ smooth boundary. In this context, let R(bΩ) denote the space of C∞ functions on the boundary that extend meromorphically to the double of Ω, let R(Ω) denote the space of meromorphic functions on Ω that have boundary values in R(bΩ), and let R′(Ω) denote the space of functions that are derivatives of functions in R(Ω). It is proved in [5] that there are two Ahlfors maps f_1 and f_2 associated to two (rather generic) points in Ω such that the meromorphic extensions to the double of Ω form a primitive pair for the double. Hence, the function spaces just described can all be expressed in terms of rational functions of f_1 and f_2. (Since f̄_j = 1/f_j on the boundary, j = 1, 2, these functions conveniently replace the Schwarz function in many situations.) The main theorems of the paper in this context can be stated as follows.

Theorem 7.1. Given a point a in a bounded smooth finitely connected domain Ω, S_a R(bΩ) = S + L, where the function spaces on the right are understood to be restricted to the boundary. Furthermore, S_a R(Ω) = S + L. The Szegő projection maps S_a R(bΩ) onto the Szegő span.

The Dirichlet problem can be solved for boundary data in R(bΩ) by exactly the same methods we used in §3. The theorem about the Bergman span also generalizes in a straightforward manner.

Theorem 7.2. Suppose that Ω is a bounded smooth finitely connected domain. If Ω is simply connected, then R′(Ω) = B + Λ; if Ω is multiply connected, R′(Ω) is comprised of the functions in B + Λ with vanishing periods. In both cases, the D-to-N map takes R(bΩ) into BT + \overline{BT}. In case Ω is simply connected, this mapping is onto. Representations of functions ψ in BT + \overline{BT} uniquely determine elements κ_1 and κ_2 in the Bergman span so that ψ = κ_1 T + \overline{κ_2 T}.

Finally, we remind the reader that we explained in §4 why Theorem 4.1 holds in general bounded smooth domains. We remark here that the general result could also be proved from scratch using the definitions in this section and by repeating the proof given in §4 using these definitions.
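As a closing illustration of the basic decomposition and the Dirichlet recipe of §3, the computation can be carried out by hand in the unit disc (a double quadrature domain) with a = 0 and boundary data R(z, z̄) = z̄. The worked example below is ours, not part of the paper, and assumes the standard disc normalizations S_0(z) = 1/(2π) and L_0(z) = 1/(2πz).

```latex
% Unit disc, a = 0; standard normalizations (our assumption):
%   S_0(z) = 1/(2\pi), \quad L_0(z) = 1/(2\pi z), \quad T(z) = iz, \quad S(z) = \bar z = 1/z.
% For the boundary data R(z,\bar z) = \bar z, on |z| = 1,
\[
  S_0(z)\,\bar z \;=\; \frac{1}{2\pi}\cdot\frac{1}{z} \;=\; L_0(z),
\]
% so the basic decomposition consists of the single Garabedian term L_0.
% Dividing back by S_0 as in Theorem 3.1 gives \sigma_1 = 0 and \sigma_2 = S_0, so that
\[
  h \;=\; \frac{\sigma_1}{S_0} \;=\; 0,
  \qquad
  \overline{H} \;=\; \overline{\left(\frac{\sigma_2}{L_0}\right)} \;=\; \bar z ,
\]
% and the harmonic extension h + \overline{H} = \bar z is indeed the
% (anti-holomorphic, hence harmonic) solution with boundary values \bar z.
```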
Limited Weight Loss or Simply No Weight Gain following Lifestyle-Only Intervention Tends to Redistribute Body Fat, to Decrease Lipid Concentrations, and to Improve Parameters of Insulin Sensitivity in Obese Children

Objectives. To investigate whether lifestyle-only intervention in obese children who maintain or lose a modest amount of weight redistributes parameters of body composition and reverses metabolic abnormalities. Study Design. Clinical, anthropometric, and metabolic parameters were assessed in 111 overweight or obese children (CA of 11.3 ± 2.8 years; 63 females and 48 males) during 8 months of lifestyle intervention. Patients maintained or lost weight (1–5%) (group A; n: 72) or gained weight (group B). Results. Group A patients presented with a decrease in systolic blood pressure (SBP) and diastolic blood pressure (DBP) (P < .005 and P < .05, resp.), BMI (P < .0001), z-score BMI (P < .0001), waist circumference (P < .0001), fat mass (P < .005), LDL-C (P < .05), Tg/HDL-C ratio (P < .05), fasting and postprandial insulin (P < .005), and HOMA (P < .005), while HDL-C (P < .05) and QUICKI (P < .005) increased. Conversely, group B patients had an increase in BMI (P < .0001), waist circumference (P < .005), SBP (P < .005), and in QUICKI (P < .005), while fat mass (P < .05), fasting insulin (P < .05), and HOMA (P < .05) decreased. Lean mass, DBP, lipid concentrations, fasting and postprandial glucose, postprandial insulin, and ultrasensitive C-reactive protein (CRP) remained stable. Conclusions. Obese children who maintain or lose a modest amount of weight following lifestyle-only intervention tend to redistribute their body fat, to decrease blood pressure and lipid levels, and to improve parameters of insulin sensitivity.

Introduction

The United States population has experienced a twofold to fourfold increase in the prevalence of obesity in the last two decades [1,2], with a higher incidence among young African-Americans and Latinos [3]. Data obtained in a major city in the Venezuelan Andes seem to indicate a similar trend [4,5]. Childhood obesity may lead to diabetes mellitus, cardiovascular disease, and premature death in adulthood [6]. Similarly, diseases such as dyslipidemias and type 2 diabetes are now being more frequently detected in both children and adolescents. Obese children have exhibited insulin resistance, elevated blood pressure, and increased concentrations of serum lipids. Higher body weight has been accompanied by an accumulation of visceral fat as well as elevated levels of alanine aminotransferase (ALT) and aspartate aminotransferase (AST) with hepatic steatosis [7][8][9][10][11][12][13]. Obese children have also been found to be in a proinflammatory state, characterized by increased ultrasensitive C-reactive protein (CRP) concentrations, changes in endothelial function, and alterations in arterial structure [14,15].

The effect of modest nonpharmacological lifestyle-only intervention on auxological and metabolic parameters in obese children has not been conclusively determined, but several recent studies have demonstrated an improvement in body composition, a decrease in hemostatic and inflammatory markers, and an improvement in markers of insulin sensitivity following intervention [16][17][18][19]. It is not yet clear whether weight loss is required for these parameters to improve and to be sustained in time. In this study, we hypothesized that lifestyle-only intervention in obese children could lead to an improvement of both anthropometric and metabolic markers, even if weight loss was modest or if weight remained stable.
Subjects.

One hundred and eleven overweight or obese children (63 females and 48 males), with a mean chronological age of 11.3 ± 2.8 years, participated in the study. Fifty-three of the patients were prepubertal or in early puberty (Tanner stage I-II), whereas the rest were in puberty (Tanner stage III-IV). Patients originated from the Endocrinology, Nutrition, Growth, and Development Clinics of the Hospital de Clinicas Caracas in Caracas, Venezuela and the Instituto Autónomo Hospital Universitario de Los Andes (IAHULA) in Mérida, Venezuela. Only overweight or obese children, as determined by a BMI (kg/m²) above the 90th percentile or a z-score BMI above 1.5, were included. Children with cardiac, endocrine, renal, and other chronic abnormalities and those receiving therapy (statins, glucocorticoids, omega 3, metformin, etc.) that could affect cardiovascular or metabolic function were excluded from the study. Patients with acute or chronic inflammatory disease that could affect inflammatory markers or those with physical impediments that could affect their exercise capacity were excluded from the study. All of our patients completed both baseline and 8-month auxological and laboratory evaluations, regardless of whether they followed lifestyle change recommendations.

Procedures.

Patients were measured using a Harpenden stadiometer. Body mass index (BMI) was calculated, and subjects were classified utilizing the Fundacredesa-INN-USB's growth curves for Venezuelan children [20]. According to these curves, the subjects were considered overweight if they had a BMI equal to or greater than the 90th percentile or a BMI z-score of >1.5. They were considered obese if their BMI was above the 97th percentile or if their BMI z-score was >2. The skin fold of the triceps was measured using a special caliper. Subsequently, measurements were taken of the circumferences of the left arm, the waist, and the hip using a measurement tape, as indicated by the National Health and Nutrition Examination Survey, 2000 [21]. The fat and lean mass were then calculated using the formulas FM = TrF × LAC/2 − π(TrF)²/4 and LM = (LAC − π × TrF)²/(4π), where FM is fat mass, LM is lean mass, TrF is the triceps fold measurement, LAC is the left arm circumference, and π = 3.141 [20]. Blood pressure (BP) was measured in all children. A fasting blood sample was obtained to determine levels of glucose, lipids, insulin, and CRP. Then, a glucose load of 1.75 grams per kilogram of body weight (maximum of 75 g) was administered orally to each patient. Two hours later, another blood sample was taken to determine glucose and insulin levels. Serum cortisol and thyroid function tests were measured at baseline in all patients and had to be normal for inclusion in the study. After the evaluation of baseline auxological and biochemical parameters, patients' weight, height, blood pressure, waist circumference, hip circumference, and arm circumference were measured monthly. Eight months later, the blood tests were repeated. Glucose, triglycerides (Tg), total cholesterol (TC), and high-density lipoprotein cholesterol (HDL-C) were measured by enzymatic methods with a Technicon autoanalyzer utilizing chemicals from Boehringer Mannheim Diagnostics. Low-density lipoprotein cholesterol (LDL-C) was calculated using the Friedewald formula (LDL-C = TC − HDL-C − Tg/5) [22].
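As a small illustration of the two derived calculations just described, the sketch below (ours, not the authors'; function and variable names are hypothetical) evaluates the FM/LM formulas and the Friedewald equation, with the skinfold and circumference in cm and lipids in mg/dL. Note that, dimensionally, the FM/LM formulas yield arm areas.

```python
import math

def fat_and_lean_mass(trf_cm, lac_cm):
    """FM and LM from the triceps fold (TrF) and left arm circumference (LAC),
    using the formulas given above (pi enters both)."""
    fm = trf_cm * lac_cm / 2 - math.pi * trf_cm ** 2 / 4
    lm = (lac_cm - math.pi * trf_cm) ** 2 / (4 * math.pi)
    return fm, lm

def friedewald_ldl(tc_mg_dl, hdl_mg_dl, tg_mg_dl):
    """Friedewald formula: LDL-C = TC - HDL-C - Tg/5 (all in mg/dL)."""
    return tc_mg_dl - hdl_mg_dl - tg_mg_dl / 5

fm, lm = fat_and_lean_mass(trf_cm=2.0, lac_cm=24.0)   # hypothetical readings
print(f"FM = {fm:.1f}, LM = {lm:.1f}")
print(f"LDL-C = {friedewald_ldl(170, 45, 100):.0f} mg/dL")
```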
Insulin, cortisol, and thyroid function tests (TSH, FT4, and total T4) were measured by chemiluminescence utilizing an Immulite 1000 analyzer from Siemens Healthcare Diagnostics, with interassay and intraassay coefficients of variation of 6.5% and 5.4%, 7.8% and 7.7%, 8.9% and 3.9%, 4.1% and 4.4%, and 6.7% and 6.7%, respectively. Ultrasensitive C-reactive protein (CRP) levels were determined by an immunometric assay (high-sensitivity enzyme immunoassay for the quantitative determination of C-reactive protein concentrations, DRG International, Inc., USA) with intraassay and interassay coefficients of variation of 7.5% and 4.1%. The insulin resistance index, HOMA-IR, was calculated with the following formula: insulinemia (µU/mL) × glycemia (mmol/L)/22.5. The insulin sensitivity index, QUICKI (quantitative insulin-sensitivity index), was calculated using the formula 1/[(log insulin 0 min) + (log glycemia 0 min)] [23,24]. This study was performed with the approval of the hospital's ethics committee, and authorization to participate in the study was obtained from parents and/or representatives.

Lifestyle Intervention Program.

Patients and parents received a written form with general recommendations regarding nutrition and physical activity. Additionally, a subgroup was given a form in which to register weekly hours of physical activity, number of steps taken per day, and hours per week spent in sedentary activities, such as watching television or playing with a computer. In an initial meeting, children were informed of the changes in physical activity, behavior, and nutrition that were expected of them. Parents were encouraged to actively and responsibly participate in the program. Lifestyle changes included limiting the duration of television and computer activities. Patients were also encouraged to restrict their caloric intake by reducing the frequency of snack consumption, by exchanging high-calorie snacks for low-calorie, low-fat, low-carbohydrate snacks, and by limiting the consumption of sugar-based carbonated drinks and juices. A balanced diet (proteins, carbohydrates, and fat) with a high content of fiber and increased consumption of vegetables, grains, and fruits was recommended. Patients were encouraged to participate in the supervised sport activity of their choosing or, alternatively, to exercise at home on a treadmill or walk with their parents or older siblings. Initially, these activities would be performed three times weekly, but if possible their frequency would be increased to once per day. At the same time, the duration of the physical activity would be gradually increased in accordance with the child's exercise capacity. A subgroup of 36 patients was asked to determine the number of steps taken per day with the use of a pedometer (Omron model HJ-112INT), a device in the form of a watch that calculates the number of steps taken and the distance covered by the subject who carries it [25,26]. We considered physical activity to be any regular sport, in addition to dance, aerobics, ballet, brisk walks, jogging, and school physical education. We quantified this activity in hours per week. In order to calculate the number of steps taken per day, the pedometer was used for 5 days. Then, the total number of steps taken during that period was divided by 5 to determine the average number of steps per day. This method was used to calculate the number of steps taken per day before initiating lifestyle-only intervention and once the intervention period was complete.
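The two insulin indices defined above reduce to one-line functions. The sketch below is ours (the authors' formulas are reproduced as written, and the example inputs are hypothetical); it assumes glucose in mmol/L for HOMA-IR and the conventional units for QUICKI (insulin in µU/mL, glucose in mg/dL) with base-10 logarithms.

```python
import math

def homa_ir(fasting_insulin_uU_mL, fasting_glucose_mmol_L):
    """HOMA-IR = insulin (uU/mL) x glucose (mmol/L) / 22.5."""
    return fasting_insulin_uU_mL * fasting_glucose_mmol_L / 22.5

def quicki(fasting_insulin_uU_mL, fasting_glucose_mg_dL):
    """QUICKI = 1 / [log10(fasting insulin) + log10(fasting glucose)]."""
    return 1.0 / (math.log10(fasting_insulin_uU_mL)
                  + math.log10(fasting_glucose_mg_dL))

print(f"HOMA-IR = {homa_ir(15, 5.0):.2f}")   # ~3.33 for these example values
print(f"QUICKI  = {quicki(15, 90):.3f}")     # ~0.32 for these example values
```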
Statistical Analysis.

The continuous variables are presented as mean ± standard deviation, and the categorical variables are shown in numbers and percentages. Initially, the analysis was completed in all participants before and after intervention. Data were subsequently evaluated according to gender and pubertal development. Subjects were finally subdivided into 2 groups, one where they either maintained their weight or experienced a modest weight loss of 1-5% (group A; n: 72, mean chronological age of 11.8 ± 2.7 years, 45 females and 27 males) and another where they experienced a mean weight gain of 6.6% (group B; n: 39, mean chronological age of 10.5 ± 2.7 years, 19 females and 20 males). The statistical differences of normally distributed variables were analyzed according to gender and pubertal status using the nonpaired Student's t-test. The statistical differences of nonnormally distributed variables were analyzed according to gender and pubertal status using the Mann-Whitney test (blood pressure, insulin, HOMA-IR, and CRP). Furthermore, whereas the paired Student's t-test was used to analyze the statistical differences of normally distributed variables before and after intervention, the Wilcoxon test was employed to analyze those of nonnormally distributed variables. A logistic regression analysis to determine the influence of gender and pubertal development on the loss or gain of weight of patients was also performed. A P < .05 was considered significant. The statistical package SPSS, version 15, was utilized.

Results

Total Group.

Children in our group as a whole had a significant decrease in BMI, z-score BMI, and fat mass (P < .0001) during the 8-month-long intervention period. While QUICKI increased (P < .0001) following intervention, LDL-C (P < .005), the Tg/HDL-C ratio (P < .05), fasting insulin concentrations (P < .005), postprandial insulin concentrations (P < .005), and HOMA (P < .0001) decreased significantly. HDL-C levels were low at baseline and increased following intervention (P < .05). Waist circumference, waist/hip ratio, lean mass, SBP, DBP, fasting glucose, postprandial glucose, and CRP were normal at baseline and remained stable during the 8-month period. Results of the total group can be seen in Table 1.

We also analyzed the data of the total population according to gender (Table 2) and pubertal status (Table 3). Changes noted during the intervention period were mostly similar in males and females (both exhibited an increase in QUICKI levels and a decrease in BMI, z-score BMI, fat mass, fasting insulin, and HOMA; neither showed changes in waist circumference, waist/hip ratio, lean mass, SBP, DBP, fasting glucose, postprandial glucose, TC, Tg, and CRP concentrations during this period). However, only females exhibited decreased postprandial insulin levels, a lower Tg/HDL-C ratio, and significant increases in HDL-C. Conversely, LDL-C concentrations decreased only in males (Table 2). Postprandial insulin levels, both before and after lifestyle-only intervention, and postprandial glucose and fasting insulin concentrations before intervention were higher in females than in males. In contrast, the waist/hip ratio, both before and after intervention, was increased in males (Table 2). Analysis of our data by pubertal staging revealed a similar decrease in BMI, z-score BMI, and fat mass and an increase in QUICKI in both prepubertal and pubertal children.
No change in waist circumference, waist/hip ratio, lean mass, SBP, DBP, fasting glucose, Tg, TC, HDL-C, Tg/HDL ratio, and CRP levels was noted in either group following intervention. Fasting insulin, postprandial insulin, HOMA, and LDL-C levels decreased significantly following intervention only in prepubertal children. Conversely, postprandial glucose levels decreased significantly only during puberty (Table 3). BMI, waist circumference, fat mass, lean mass, SBP, fasting and postprandial insulin, and HOMA were higher, and QUICKI was lower, both before and after intervention in pubertal subjects (Table 3).

Group A.

When we compared the variables before and after intervention, patients who either maintained their weight or had a modest weight loss presented with a significant decrease in SBP and DBP (P < .005 and P < .05), in BMI, z-score BMI, waist circumference (P < .0001), and fat mass (P < .005) following intervention. LDL-C (P < .05), the Tg/HDL ratio (P < .05), fasting and postprandial insulin (P < .005), and HOMA (P < .005) decreased significantly during the 8-month intervention period in these patients, while HDL (P < .05) and QUICKI increased (P < .005). Fasting and postprandial glucose concentrations and CRP were normal at baseline and remained unchanged during the intervention period (see Table 4).

Group B.

Patients who gained weight had a further increase in BMI (P < .0001), waist circumference (P < .005), SBP (P < .005), and in QUICKI (P < .005) during the 8-month intervention period. Fat mass (P < .05), fasting insulin (P < .05), and HOMA (P < .05) decreased. Lean mass, DBP, lipid concentrations, fasting and postprandial glucose, postprandial insulin, and CRP remained stable during the study period. HDL-C levels were low at baseline and did not change during this period. Results of group B can be seen in Table 4.

When we compared group A with group B, we noticed that, before intervention, BMI (P < .05), waist circumference (P < .05), and systolic BP (P < .002) were significantly lower in group B (Table 4). After intervention, BMI z-score (P < .05), diastolic BP (P < .05), and the Tg/HDL-C ratio (P < .05) were significantly higher in group B. In order to determine whether gender and pubertal development had a significant influence on the weight change during intervention, we performed a logistic regression analysis with weight gain or loss as a dependent variable. It demonstrated that the influence of gender (P = .099) and pubertal development (P = .157) was not significant.

Subgroup.

In a subgroup of 36 patients, 16 who either maintained their weight or had a modest weight loss and 20 who gained weight, we analyzed the number of steps taken per day counted with a pedometer and the number of exercise hours per week. Patients in the former group lost weight and decreased their BMI, while children in the latter group increased their weight, and their BMI remained stable. The number of steps taken per day and the number of exercise hours increased (P < .0001 and P < .005, resp.) in both subgroups (Table 5). The number of hours spent watching TV or playing video games showed a tendency to decrease in patients who lost weight and to increase in patients who gained weight. When we compared subgroups, we noticed that, after intervention, BMI z-score was significantly higher in patients who lost weight (P < .01).
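As an illustration of the before/after comparisons reported above, the following sketch mirrors the scheme of the Statistical Analysis section (paired t-tests for approximately normal variables, Wilcoxon signed-rank tests otherwise) using SciPy; the arrays are simulated placeholders, not the study's data, and the group size of 72 is borrowed from group A only for realism.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical paired BMI values for 72 children before and after intervention
bmi_before = rng.normal(28.0, 3.0, size=72)
bmi_after = bmi_before - rng.normal(0.8, 0.5, size=72)  # modest decrease

# Paired t-test for a normally distributed variable (e.g., BMI)
t_stat, p_t = stats.ttest_rel(bmi_before, bmi_after)

# Wilcoxon signed-rank test for a non-normal variable (e.g., fasting insulin)
insulin_before = rng.lognormal(2.5, 0.4, size=72)
insulin_after = insulin_before * rng.uniform(0.7, 1.0, size=72)
w_stat, p_w = stats.wilcoxon(insulin_before, insulin_after)

print(f"paired t-test p = {p_t:.3g}; Wilcoxon p = {p_w:.3g}")
```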
Discussion

Reports evaluating the effect of exercise on body fat have been contradictory, with some studies showing an increase in body fat but not in visceral fat, others demonstrating stable body composition, and yet others presenting a decrease in fat mass and an increase in lean body mass [17,27,28]. In our study, group A patients demonstrated an improvement in anthropometric parameters, with a decrease in BMI, z-score BMI, and waist circumference following moderate physical activity and diet-based lifestyle-only intervention. Additionally, in a subgroup of these patients, fat mass decreased significantly, and muscle mass remained stable following intervention. It is interesting to note how fat mass also decreased significantly in a number of group B patients, despite their weight increase. This suggests an important role for exercise in improving body composition, even in the absence of weight loss, since in this group of children an increase in physical activity, as measured by the number of hours of exercise per week and by the number of steps taken, was also noticed.

A proinflammatory state with an increase in CRP levels has been detected in obese children and is often accompanied by alterations in endothelial function and arterial structure [9,14,18,29]. Studies of the effect of physical activity or lifestyle changes on inflammatory markers in children have shown mixed results. Whereas some report no effect on hemostatic or inflammatory markers following exercise or lifestyle-only intervention, others document a decrease in circulating fibrinogen concentrations [30][31][32] or a significant decrease in previously elevated levels of CRP, fibrinogen, and IL-6 [18]. In our study, a tendency towards a decrease in CRP levels following lifestyle-only intervention was found in all patient groups, but these levels remained in the normal range both before and after intervention.

Recent studies have demonstrated the existence of a link between weight reduction in overweight children and improved insulin sensitivity following exercise or diet-based lifestyle-only intervention [16,33]. In our study, fasting insulin levels and postprandial insulin levels decreased following intervention, while indexes of insulin resistance improved. These changes occurred even in the absence of weight loss or following only modest weight reduction. These findings are in agreement with recent reports in which short-term exercise-training programs were able to increase insulin sensitivity and cardiorespiratory fitness in obese children, even in the absence of weight loss and without measurable changes in body composition [17,18]. However, whereas prepubertal children showed improvements in insulin resistance, children undergoing puberty did not. In the latter group, levels of fasting insulin, postprandial insulin, and HOMA were higher, and QUICKI values were lower. Compared to male patients, female patients were found to have lower QUICKI values and higher levels of fasting insulin, postprandial insulin, and HOMA. After intervention, both females and males had increased QUICKI values and lower levels of fasting insulin and HOMA. These findings demonstrate how insulin sensitivity can differ during the various stages of adolescence and between sexes.

In our intervention study, concentrations of LDL cholesterol and the Tg/HDL-C ratio decreased, and HDL concentrations increased, both in the whole group and in group A children.
Lipids remained stable in children who gained weight. Barter et al. [34] demonstrated how, in an obese group of adult patients, an increase in Tg and a decrease in HDL-C allowed for the identification of patients with generalized metabolic disorders. Similarly, in a recent study [4], we found that 69% of our obese children had an elevation of the Tg/HDL-C ratio that correlated with all markers of obesity and, in a linear regression analysis, proved to be, together with BMI, the main variable explaining the metabolic syndrome.

We evaluated children of both sexes and included among them some that were in an age range in which puberty influences insulin resistance. Ideally, this study should have been performed separately in males and females using tighter age ranges. However, we did analyze our data according to gender (males versus females) and pubertal status (children in Tanner stages 1-2 versus children in Tanner stages 3-5). Another potential limitation of this study is the use of HOMA and QUICKI as measures of insulin resistance, instead of using the hyperinsulinemic-euglycemic clamp or the frequently sampled IV glucose tolerance test, which are considered to be more accurate. However, both HOMA and QUICKI have been validated as surrogate markers of insulin resistance in nondiabetic children and adolescents and compare quite well to the more sophisticated tests of insulin resistance [35,36]. This study would have been strengthened by the use of a control group of nondieting and nonexercising obese children; however, it would have been very difficult and probably unethical for us to follow a subgroup of obese children without any form of intervention, considering that they sought our advice to lose excess weight and reduce accompanying comorbidities.

In conclusion, a simple lifestyle-only intervention program, leading to either limited weight loss or no weight gain, helped redistribute body fat, decrease lipid levels, and improve parameters of insulin sensitivity in overweight and obese children. The reversibility of these auxological and metabolic abnormalities following moderate lifestyle-only intervention, even in the presence of very limited weight loss or simply no weight gain, has important implications for the future cardiovascular health of children and adolescents.
Concurrent Measurement of Mitochondrial DNA Copy Number and ATP Concentration in Single Bovine Oocytes

To sustain energy-demanding developmental processes, oocytes must accumulate adequate stores of metabolic substrates and mitochondrial numbers prior to the initiation of maturation. In the past, researchers have utilized pooled samples to study oocyte metabolism, and studies that related multiple metabolic outcomes in single oocytes, such as ATP concentration and mitochondrial DNA copy number, were not possible. Such scenarios decreased sensitivity to intraoocyte metabolic relationships and made it difficult to obtain adequate sample numbers during studies with limited oocyte availability. Therefore, we developed and validated procedures to measure both mitochondrial DNA (mtDNA) copy number and ATP quantity in single oocytes. Validation of our procedures revealed that we could successfully divide oocyte lysates into quarters and measure consistent results from each of the aliquots for both ATP and mtDNA copy number. Coefficients of variation between the values retrieved for mtDNA copy number and ATP quantity quadruplicates were 4.72 ± 0.98 and 1.61 ± 1.19, respectively. We then utilized our methodology to concurrently measure mtDNA copy number and ATP quantity in germinal vesicle (GV) and metaphase two (MII) stage oocytes. Our methods revealed a significant increase in ATP levels (GV = 628.02 ± 199.53 pg, MII = 1326.24 ± 199.86 pg, p < 0.001) and mtDNA copy number (GV = 490,799.4 ± 544,745.9 copies, MII = 1,087,126.9 ± 902,202.8 copies, p = 0.035) in MII compared to GV stage oocytes. This finding is consistent with published literature and provides further validation of the accuracy of our methods. The ability to produce consistent readings and expected results from aliquots of the lysate from a single oocyte reveals the sensitivity and feasibility of using this method.

Introduction

The accumulation of adequate stores of metabolic substrates as well as an increase in mitochondrial number within the oocyte are important components of the acquisition of developmental competence. Increases in mitochondrial number and the creation of metabolic substrate stockpiles occur throughout folliculogenesis [1][2][3][4][5][6][7][8][9][10]. The oocyte and early embryo have low levels of glycolytic activity and rely on mitochondrial oxidative phosphorylation to produce the ATP necessary to sustain development to the blastocyst stage of embryo development [11][12][13][14]. Substrates and co-factors such as pyruvate, acetyl CoA, NADH, CO2, and FADH are produced from glycolytic activity of the cumulus cells and transferred to the oocyte via gap junctions to support oxidative phosphorylation [12,15,16]. This transfer of metabolic substrates occurs throughout oogenesis until the onset of oocyte maturation leads to gap junction breakdown. The oocyte must support energy-demanding developmental processes like fertilization and embryonic cell division with the mitochondria and metabolic substrates accumulated prior to oocyte maturation.

Previous studies have linked intraoocyte ATP levels, mtDNA copy number, and mitochondrial function to oocyte developmental competence [17][18][19][20][21]. Within bovine oocytes, reported ATP concentrations range from 0.25 to 35 pmol and reported mtDNA levels range from 13,000 to 3,600,000 copies per oocyte.
Due to low quantities of starting materials in single oocytes, many studies utilized pooled oocytes for analyses, and no papers have measured multiple metabolic parameters within a single oocyte. Single cell analysis is a valuable tool to better understand and evaluate intraoocyte relationships among metabolic components and mitochondrial functionality. While pooling oocytes allows researchers to account for high variability in abattoir-derived oocytes, single oocyte analyses are necessary for in vivo studies where sample numbers are often limiting. Therefore, we developed a protocol with the objective to divide the lysate of a single oocyte into quarters and measure ATP and mtDNA copy number in duplicate, quarter-oocyte equivalents. After successful validation of the protocol for both ATP and mtDNA measurements, we performed a second study with the objective to compare ATP quantity and mtDNA copy number between germinal vesicle (GV) and metaphase two (MII) stage oocytes as an additional validation to demonstrate that the protocol would yield similar results to previously published studies of pooled oocytes.

Collection of Cumulus-Oocyte-Complexes

No live animals were handled for this study. Abattoir-sourced, mixed breed, bovine (Bos taurus) ovaries were obtained for the manual aspiration of follicles and collection of cumulus-oocyte-complexes (COCs). Briefly, COCs were manually aspirated from 3 to 8 mm follicles using a 12 mL syringe and 18-gauge needle. The aspirate was then transferred to a petri dish and searched for COCs. Once located, COCs were placed into oocyte collection media (OCM; M199 with Hanks' salts, 2% (v/v) FBS, 2 mmol/L L-glutamine, 50 U/mL penicillin, and 50 µg/mL streptomycin) until searching was completed. Only those COCs with a homogenous cytoplasm and at least 5 layers of cumulus cells were selected for further use. For the validation of procedures, selected COCs were stripped of their cumulus cells by vortexing for 3-5 min in 1× trypsin, washed through PBS, and immediately snap frozen in 2 µL of PBS for storage at −80 °C until used for assays. For the comparison of GV and MII stage oocytes, COCs were randomly divided and half (n = 15) were collected at the GV stage as described above. The remaining COCs were washed in oocyte maturation media (OMM; M199 with Earle's salts, 10% (v/v) FBS, 2 mmol/L L-glutamine, 0.2 mmol/L sodium pyruvate, and 50 mg/mL gentamycin) before undergoing maturation for 22-24 h at 38.5 °C in 5% CO2 and humidified air. The matured COCs, which represent MII oocytes (n = 15), were then collected, denuded, and stored in the same manner as the GV stage oocytes.

TaqMan Primer and Probe Design

Primers were designed using NCBI BLAST according to the bovine mitochondrial genome (Accession Number: NC_006853.1; Table 1). The selected primer pair created a 183 base pair product from base pairs 11,569 to 11,751 of the bovine mitochondrial genome. The probe sequence was located between the forward and reverse primers and ranged from base pairs 11,621 to 11,645. A gradient polymerase chain reaction (PCR) ranging from 50 to 60 °C was performed to determine optimal annealing temperature ranges of the primers.

Oocyte Lysis and Division

Germinal vesicle stage oocytes were used to optimize and develop the protocol. Following optimization, we validated the protocol's ability to provide consistent values for ATP and mtDNA copy number assays in quarter-oocyte equivalents. Tubes containing single oocytes were centrifuged at 12,000× g for 30 s at 4 °C.
For ATP quantification (n = 5), 8 µL of 5 mM Tris-HCl was added to each sample and all samples were heated at 95 °C for 10 min. Thirty microliters of nuclease-free water were added to each sample to achieve a final volume of 40 µL. This volume was then divided into four 10 µL aliquots for the ATP assay. For mtDNA copy number quantification (n = 6), 8 µL Tris-HCl were added to each sample and samples were heated at 95 °C for 10 min. Twenty-six microliters of proteinase K (Zymo Research; Irvine, CA, USA) were added to each sample to reach a final concentration of 200 µg/mL. Samples were heated at 55 °C for 30 min followed by heating at 95 °C for 10 min for proteinase K deactivation. The lysate was then divided into four 9 µL aliquots for use in qPCR analysis of mitochondrial DNA copy number.

ATP Quantification Validation

Samples were processed using the ATP Determination Kit (Life Technologies; Carlsbad, CA, USA) according to the manufacturer's directions. Each lysate was thoroughly mixed by pipetting, and 10 µL was added to individual wells of a white 96-well plate (Costar® 96-well, flat-bottomed, white polystyrene assay plate, Corning Incorporated, Corning, NY, USA) and combined with 90 µL of assay mix. Luminescence was recorded using the Synergy H1 Microplate Reader (Biotek; Winooski, VT, USA).

mtDNA Copy Number Quantification Validation

Samples were processed using our custom TaqMan primers and probe (Table 1). Each lysate was mixed thoroughly by pipetting and 9 µL was combined with 1 µL of the custom TaqMan primer/probe mix and 10 µL Fast Advanced Master Mix (ThermoFisher Scientific, Waltham, MA, USA) in a MicroAmp® Fast 96-well Reaction Plate (0.1 mL, ThermoFisher Scientific, Waltham, MA, USA; Table S1A). The PCR settings were as follows: 2 min at 94 °C followed by 40 cycles of 10 s at 94 °C, 15 s at 57 °C, and 12 s at 72 °C (QuantStudio3, ThermoFisher Scientific, Waltham, MA, USA; Table S1B).

Oocyte Lysis and Division

Tubes containing single oocytes were centrifuged at 12,000× g for 30 s at 4 °C. Then, 8 µL of 5 mM Tris-HCl was added to each sample and all samples were heated at 95 °C for 10 min. Oocyte lysate was mixed thoroughly by pipetting and 5 µL of oocyte lysate was removed for ATP analysis (Figure 1). Thirteen microliters of proteinase K were added to the tubes with the remaining 5 µL of oocyte lysate to create a final concentration of 200 µg/mL proteinase K. Tubes containing oocyte lysate + proteinase K were heated at 55 °C for 30 min. Proteinase K was deactivated by re-heating at 95 °C for 10 min. After deactivation, the lysate was used for qPCR analysis of mitochondrial DNA copy number (Figure 1).
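Aliquot consistency in the validation experiments above is summarized by the coefficient of variation of each oocyte's quadruplicate readings (the CVs quoted in the abstract). The authors performed their statistics in R; the snippet below is an illustrative Python sketch of the same calculation, with made-up CT readings.

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100

# Four hypothetical CT readings from the quarters of one oocyte lysate
ct_quadruplicate = [27.8, 28.1, 27.9, 28.3]
print(f"CV = {coefficient_of_variation(ct_quadruplicate):.2f}%")
```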
ATP Quantification

The initial 5 µL of oocyte lysate (collected before proteinase K addition) was diluted with nuclease-free water to a total volume of 20 µL, which was then divided into two 10 µL aliquots to perform ATP quantification in duplicate (Figure 1). Standards were generated via dilution of the kit-included 5 mM ATP substrate (0.5 µM, 0.05 µM, 0.025 µM, 0.005 µM, 0.0025 µM). Samples and standards were processed using the ATP Determination Kit (Life Technologies; Carlsbad, CA, USA) as described above. Luminescence values of standards were used to generate a standard curve. The standard curve was used to quantify the ATP concentration in each one-quarter oocyte equivalent based on the average value of the duplicates assayed for each sample. Oocyte ATP values were then converted to weight using the molarity calculator by GraphPad with the following settings: concentration = micromolar, ATP formula weight = 507.18, volume = 10 µL [22,23]. The result was then multiplied by four because the 10 µL samples used for the ATP concentration assay represented one fourth of an oocyte.
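The molarity-to-weight conversion performed with the GraphPad calculator reduces to a one-line formula (moles = concentration × volume; mass = moles × formula weight); a sketch with a hypothetical measured concentration as input:

```python
def atp_pg_per_oocyte(conc_uM: float,
                      volume_uL: float = 10.0,
                      mw_g_per_mol: float = 507.18) -> float:
    """Convert an ATP concentration (µM) measured in one assay well
    into picograms of ATP per whole oocyte.

    moles = (µM * 1e-6 mol/L) * (µL * 1e-6 L); mass = moles * MW.
    The factor of 4 accounts for each well holding a quarter oocyte.
    """
    moles = conc_uM * 1e-6 * volume_uL * 1e-6
    mass_pg = moles * mw_g_per_mol * 1e12
    return mass_pg * 4

# Hypothetical example: a 0.031 µM reading corresponds to ~630 pg/oocyte,
# i.e., the scale of the GV-stage means reported below.
print(f"{atp_pg_per_oocyte(0.031):.0f} pg")
```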
mtDNA Standard Preparation

A group of five oocytes with a homogenous cytoplasm and 3-5 layers of cumulus cells were denuded and snap frozen in 2 µL of PBS. They were then lysed as previously described. The oocyte lysate was then combined with 12.5 µL of AccuStart II PCR SuperMix (2×; Quantabio, Beverly, MA, USA), 0.5 µL of 5 µM forward primer, and 0.5 µL of 5 µM reverse primer (Table 1) for a final volume of 25 µL (Table S1C). Cycling conditions of 2 min at 94 °C followed by 40 cycles of 10 s at 94 °C, 15 s at 57 °C, and 12 s at 72 °C were used (Table S1B). The PCR product was further processed through electrophoresis on a 1.5% agarose gel. The resulting band was located at ~183 base pairs, indicating that it was the correct PCR product. The band was then excised from the gel, and the PCR product was purified from the gel using the Zymoclean™ Gel DNA Recovery Kit (Zymo Research, Irvine, CA, USA). DNA concentration of the final eluate was determined using the Qubit™ dsDNA High Sensitivity Assay Kit (ThermoFisher Scientific, Waltham, MA, USA), and purity was determined using 260/280 ratios measured on a NanoDrop spectrophotometer. The number of mitochondrial DNA copies per microliter was calculated from the DNA concentration of the eluate and the molecular weight of the PCR product. The eluate was first diluted 1:1000 before being further diluted to create a standard curve ranging from 10 to 1,000,000 copies of the mitochondrial DNA segment per 9 µL. The amplification efficiency of the standard curve was calculated using the qPCR efficiency calculator from ThermoFisher [24].

mtDNA Copy Number Quantification

The 18 µL of oocyte lysate with inactivated proteinase K was divided into two 9 µL aliquots and quantitative PCR (qPCR) was performed in duplicate using our custom TaqMan primers and probe (Table 1). The standards of known mtDNA copy number (described above) were included during each assay. Nine microliters of sample or standard was combined with 1 µL of the custom TaqMan primer/probe mix and 10 µL Fast Advanced Master Mix (ThermoFisher Scientific, Waltham, MA, USA) in a MicroAmp® Fast 96-well Reaction Plate (0.1 mL, ThermoFisher Scientific, Waltham, MA, USA; Table S1A). The PCR settings were as follows: 2 min at 94 °C followed by 40 cycles of 10 s at 94 °C, 15 s at 57 °C, and 12 s at 72 °C (QuantStudio 3, ThermoFisher Scientific, Waltham, MA, USA; Table S1B). Average cycle threshold (CT) values were calculated and compared to those obtained from the standard curve to determine mtDNA copy numbers.

Statistics and Analyses

All statistical analyses were performed in R software version 3.6.3 [25]. The corresponding code is available online (https://github.com/CaseyRead/Read_etal_2021_MethdPrtc; accessed on 31 October 2021). For validation of our procedures, the coefficient of variation (CV) was calculated for the quadruplicate ATP (luminescence) or mtDNA copy number (CT) values obtained for each individual oocyte analyzed. The ATP and mtDNA data were tested for normality by performing a Shapiro-Wilk test and by plotting the residuals for visual evaluation of normality. The ATP values were determined to have an approximately normal distribution, while the mtDNA values were natural log transformed. Analysis of variance was performed to determine the differences in ATP and ln(mtDNA) between GV and MII oocytes. Linear regression was performed to determine the relationship between ATP and ln(mtDNA). Results were considered significant if p < 0.05. All values are presented as mean ± SD.

Standards Quality

The 260/280 ratio for the PCR product used to make the standards was 1.78. A 260/280 ratio of approximately 1.8 is generally accepted as "pure" for DNA, so we confidently used this PCR product to generate the standards for mtDNA copy number [26,27]. The amplification efficiency of the qPCR assay was calculated to be 100.25%. The desired range for amplification efficiency is 90-110% [28-30]. The R² values for the standard curves for each assay were ≥0.99. Desired R² values fall in the range of 0.95-1.0, with the goal being as close to one as possible [31].
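The copy-number and efficiency calculations referenced above follow standard formulas; a sketch, assuming the 183 bp product length and an average of ~660 g/mol per double-stranded base pair (the input concentration and slope are hypothetical):

```python
AVOGADRO = 6.022e23          # molecules per mole
BP_MW = 660.0                # approx. g/mol per double-stranded base pair
AMPLICON_BP = 183

def copies_per_uL(conc_ng_per_uL: float) -> float:
    """Copies/µL of a dsDNA amplicon from its Qubit concentration."""
    grams_per_uL = conc_ng_per_uL * 1e-9
    return grams_per_uL * AVOGADRO / (AMPLICON_BP * BP_MW)

def efficiency_pct(slope: float) -> float:
    """qPCR amplification efficiency (%) from the standard-curve slope
    of CT vs. log10(copies); 100% corresponds to a slope of ~-3.32."""
    return (10 ** (-1.0 / slope) - 1.0) * 100.0

# Hypothetical eluate at 2.0 ng/µL -> ~1e10 copies/µL before dilution.
print(f"{copies_per_uL(2.0):.2e} copies/uL")
print(f"{efficiency_pct(-3.317):.1f} %")   # ~100% efficiency
```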
Based on the values for our standard curves falling within the optimal ranges, we are confident that our standard curves were accurate measures of the mtDNA copy number and ATP concentration within the oocyte.

Validation of Procedures for Quantification of ATP and mtDNA Copy Number in Quarter Oocyte Equivalents

We validated that the lysate from a single oocyte could be divided into quarters with minimal variation among the ATP or mtDNA CT values for each quarter oocyte equivalent (Table S2A,B). Intra-assay CV values for ATP and mtDNA were 4.72 ± 0.98% and 1.61 ± 1.19%, respectively. Because the CV values from our methodology were below 5%, we conclude that there is minimal variation between oocyte lysate portions. This gives us confidence that measures obtained via use of this protocol in future studies are valid for comparisons of both ATP and mtDNA copy number in single oocytes.

ATP and mtDNA Copy Number in GV versus MII Oocytes

Of the 30 total oocytes utilized for this study, data from 4 oocytes were removed from the dataset prior to analysis due to values outside of the standard curve. Inter-assay coefficients of variation were 6.61 ± 5.89% and 2.04 ± 1.48% for the ATP and mtDNA copy number assays, respectively. Metaphase II stage oocytes had significantly higher quantities of ATP than GV stage oocytes (GV = 628.02 ± 199.53 pg, MII = 1326.24 ± 199.86 pg, p < 0.001, Figure 2A, Table S2C). The increased levels of ATP in MII oocytes are consistent with previously published studies of bovine oocytes [1-3]. This further validates the accuracy and sensitivity of our protocol to distinguish ATP quantities between different developmental stages of oocytes. However, our values for the concentration of ATP within the oocyte were higher than those previously reported in the literature [1-3,20,32-35]. This difference could be due to differences in assay sensitivity, oocyte dilution factors, and/or additional sources of variation in sample processing. Due to differences in assay methods and dilutions, and to allow for interstudy comparison, we suggest presenting the amount of ATP present within the oocyte as a weight measure.
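The intra- and inter-assay CVs reported here are simply the standard deviation of replicate readings divided by their mean; a minimal sketch with hypothetical quadruplicate CT values:

```python
import statistics

def intra_assay_cv(replicates: list[float]) -> float:
    """Percent coefficient of variation across replicate wells."""
    return statistics.stdev(replicates) / statistics.mean(replicates) * 100.0

# Hypothetical quadruplicate CT values for one oocyte's quarter aliquots.
cts = [24.10, 24.35, 24.22, 24.51]
print(f"CV = {intra_assay_cv(cts):.2f}%")
```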
Mitochondrial DNA copy number was significantly different between GV and MII stage oocytes (GV = 490,799.4 ± 544,745.9, MII = 1,087,126.9 ± 902,202.8, p = 0.035, Figure 2B, Table S2C). Multiple studies have shown that, in the bovine, MII stage oocytes have significantly more copies of mtDNA than GV stage oocytes [3,32]. Additionally, the values retrieved for our assessment of mtDNA copy number were in the same range as those previously reported for bovine oocytes [3,4,20,32,34]. This experiment also validated the capability of our methods to measure a large range of mtDNA copy numbers (12,478 to 3,141,658 copies). Mitochondrial DNA copy number and ATP content were not significantly correlated in our samples (p = 0.18, Figure 2C, Table S2C). Iwata et al. 2011 evaluated ATP levels and mtDNA copy number from separate pools of oocytes collected from cows of increasing age and reported a positive correlation between ATP concentration and age, but a negative correlation between mtDNA copy number and age [3]. Such results suggest a negative correlation between ATP concentration and mtDNA copy number. This was the only paper we identified that compared mtDNA copy number to ATP level within the bovine oocyte. One potential reason for our different outcome is that Iwata et al. 2011 utilized different pools of oocytes for each analysis, whereas our measures were from the same oocyte, allowing us to more accurately relate ATP levels to the corresponding mtDNA copy number. Studies involving other cell types have reported no relationship between ATP content and mtDNA copy number, as well as both positive and negative correlations between the two values [21,36,37]. The highly variable relationship between ATP level and mtDNA copy number suggests that there are multiple variables within the cell affecting ATP levels, and highlights the importance of collecting multiple metabolic measurements when possible to fully elucidate the cause of altered oocyte metabolism.
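The authors performed the ANOVA and regression in R (code linked above); an equivalent sketch of the ATP versus ln(mtDNA) regression in Python, using hypothetical paired measurements, would be:

```python
import math
from scipy import stats

# Hypothetical paired single-oocyte measurements.
atp_pg = [512.0, 701.5, 655.2, 1320.8, 980.4, 1450.1]
mtdna_copies = [3.2e5, 8.1e5, 4.4e5, 1.6e6, 9.0e5, 2.1e6]

ln_mtdna = [math.log(c) for c in mtdna_copies]
result = stats.linregress(ln_mtdna, atp_pg)
print(f"slope={result.slope:.1f}, r={result.rvalue:.2f}, p={result.pvalue:.3f}")
```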
Hydrogen-triggered X-ray Bursts from SAX J1808.4-3658? The Onset of Nuclear Burning

We present a study of weak, thermonuclear X-ray bursts from the accreting millisecond X-ray pulsar SAX J1808.4-3658. We focus on a burst observed with the Neutron Star Interior Composition Explorer on 2019 August 9, and describe a similar burst observed with the Rossi X-ray Timing Explorer in 2005 June. These bursts occurred soon after outburst onset, $2.9$ and $1.1$ days, after the first indications of fresh accretion. We measure peak burst bolometric fluxes of $6.98 \pm 0.50 \times 10^{-9}$ and $1.54 \pm 0.10 \times 10^{-8}$ erg cm$^{-2}$ s$^{-1}$, respectively, which are factors of $\approx 30$ and $15$ less than the peak flux of the brightest, helium-powered bursts observed from this source. From spectral modeling we estimate accretion rates and accreted columns at the time of each burst. For the 2019 burst we estimate an accretion rate of $\dot M \approx 1.4-1.6 \times 10^{-10}$ $M_{\odot}$ yr$^{-1}$, and a column in the range $3.9-5.1 \times 10^7$ g cm$^{-2}$. For the 2005 event the accretion rate was similar, but the accreted column was half of that estimated for the 2019 burst. The low accretion rates, modest columns, and evidence for a cool neutron star in quiescence, suggest these bursts are triggered by thermally unstable CNO cycle hydrogen-burning. The post-burst flux level in the 2019 event appears offset from the pre-burst level by an amount consistent with quasi-stable hydrogen-burning due to the temperature-insensitive, hot-CNO cycle, further suggesting hydrogen-burning as the primary fuel source. This provides strong observational evidence for hydrogen-triggered bursts. We discuss our results in the context of previous theoretical modeling.

1. INTRODUCTION

Thermonuclear (Type I) X-ray bursts occur when an accreted layer of matter on a neutron star undergoes a thermonuclear runaway (Hansen & van Horn 1975; Strohmayer & Bildsten 2006; Galloway & Keek 2021). The thin shell thermal instability that triggers these bursts occurs when the energy generation rate due to nuclear burning exceeds the local cooling rate in the accreted shell. Depending on the accreted fuel composition there can be a large number of nuclear reactions that contribute to the energy production that powers bursts; however, two reactions are thought to be the primary triggers for the thermal instability. The first is the highly temperature-sensitive triple-α reaction, which burns three helium nuclei to carbon. The other relevant process is the CNO cycle that burns hydrogen to helium via a series of (p, γ) reactions on carbon and nitrogen (Fowler & Hoyle 1965). Also operating in the cycle are several rate-limiting β⁺ decays dependent on slower weak reaction rates. The cycle is closed by the (p, α) reaction that converts ¹⁵N back to ¹²C. This physics leads to two regimes in which the CNO cycle may operate in this context. Above temperatures of about 8 × 10⁷ K, the rate of the so-called hot-CNO cycle is limited by the β⁺ decays that take ¹⁵O and ¹⁴O to ¹⁵N and ¹⁴N, respectively. In this regime the energy generation rate becomes temperature insensitive, that is, thermally stable. However, for temperatures below ≈ 8 × 10⁷ K the CNO energy generation rate remains temperature sensitive, so that, in principle, a hydrogen-burning thermal instability can operate if the accreting shell is cool enough. The implications of these physical processes for the production of X-ray bursts were explored in several early, "classic" papers.
Fujimoto et al. (1981) were among the first to delineate the possible bursting regimes based on the mass accretion rate, but see also Joss (1978) and Taam & Picklum (1979). They identified three bursting regimes with decreasing mass accretion rate. At higher accretion rates a helium shell grows via accretion and thermally stable CNO cycle burning that converts some of the accreted hydrogen to helium. The accretion rate is high enough that the base of the shell reaches ignition conditions before all the hydrogen can be burned to helium. The triple-α instability is initiated in a shell with a significant hydrogen abundance, leading to so-called mixed H/He bursts. The presence of hydrogen allows for the more complex set of nuclear reactions known as the rapid proton capture process to occur, which can enhance and delay the energy release, leading to longer duration bursts (Schatz et al. 2001). The well-studied "clocked" bursts from GS 1826−238 are prototypical of this type (Ubertini et al. 1999; Galloway et al. 2004; Heger et al. 2007; Zamfir et al. 2012). For lower accretion rates there will be a rate such that, prior to ignition, the CNO burning has just had sufficient time to burn all the hydrogen in the accreting shell. At this critical accretion rate the helium burning is initiated in a pure helium shell. These "pure helium" bursts are characterized by a rapid, intense energy release, are typically of shorter duration than the mixed H/He bursts, and often reach the Eddington limit, as exhibited by photospheric radius expansion (PRE). The bright PRE bursts observed from the accreting millisecond X-ray pulsar (AMXP) SAX J1808.4−3658 are examples of such bursts (Galloway & Cumming 2006; in't Zand et al. 2013; Bult et al. 2019a). At low accretion rates, temperatures in the accumulating shell may be low enough that steady CNO hydrogen-burning essentially switches off, but can proceed in the unstable, temperature-sensitive regime. This can, in principle, lead to unstable ignition of the hydrogen in the accumulating shell. Two possible paths have been discussed in this case. First, ignition of hydrogen raises the temperature in the shell, and if the column depth is large enough the heated shell may cross the helium instability curve, producing a prompt, mixed H/He burst. Second, if the hydrogen ignition depth is too shallow to cross the helium instability curve, then fuel will continue to accumulate until helium ignition can occur. At these low accretion rates it is also likely that gravitational sedimentation of heavier elements relative to hydrogen and helium will play a role in setting the conditions for unstable burning (Peng et al. 2007). However, observational evidence for this hydrogen ignition regime is limited, as there have been, to our knowledge, few published reports of X-ray bursts that can be clearly attributed to unstable hydrogen shell flashes. In one such case, Boirin et al. (2007) reported on the first observations of triple, short recurrence time (SRT) bursts from the high inclination, eclipsing source EXO 0748−676. They suggested that the initial bursts of singles, pairs or triples (they call these the long waiting time, LWT, bursts) could be attributed to either helium-triggered, mixed H/He bursts at moderate accretion rates (10% of Eddington), or perhaps hydrogen-triggered bursts at lower accretion rates (1% of Eddington).
Because the LWT bursts appeared somewhat under-luminous compared with mixed H/He bursts in 1-d Kepler models and with the well-known example of such bursts from GS 1826−238, they suggested that this might be explained by the latter, hydrogen-triggered mechanism. They further suggested that the SRT events, with waiting times close to 10−12 minutes, were likely caused by the re-ignition of unburned fuel, but they did not have a detailed explanation of how this occurs. More recently, Keek & Heger (2017) have outlined a theoretical mechanism to account for SRT bursts. Using detailed, 1-d Kepler hydrodynamic simulations they showed that such events can be produced by opacity-driven convective mixing that transports fresh fuel to the ignition depth, and they also argued that this mechanism can produce simulated burst events that are "strikingly similar" (in their words) to the SRT bursts seen from EXO 0748−676. If this mechanism is indeed at work, then it would further argue for the higher accretion rate (10% of Eddington), helium-triggered scenario in EXO 0748−676, as warmer envelopes, naturally produced by higher accretion rates, were required to produce the SRT events in their burst simulations. Moreover, they also showed that the fraction of fuel burned in the LWT events dropped as the envelope became hotter, and this relatively low fuel-burning fraction could also naturally explain the apparently under-luminous LWT bursts noted by Boirin et al. (2007). Thus, while Boirin et al. (2007) suggest that a hydrogen-triggered mechanism is possible for the LWT bursts from EXO 0748−676, we would characterize the current, overall evidence in support of that conclusion as tentative, particularly given the remaining uncertainties in the distance and anisotropy factors for this source. Indeed, in support of this we note that in their recent review of the field, Galloway & Keek (2021) also comment that "No observations matching case I or case II bursting have been identified." Here, cases I and II refer to the two hydrogen ignition paths at low accretion rates that we sketched above. In this paper we present a study of an apparently rarer class of weak X-ray bursts observed from SAX J1808.4−3658 (hereafter, J1808) that we argue show the hallmarks of being associated with the hydrogen ignition regime. This object was the first AMXP discovered (Wijnands & van der Klis 1998; Chakrabarty & Morgan 1998), and hosts a neutron star in a 2.1 hr orbit with a low-mass brown dwarf (Bildsten & Chakrabarty 2001). Its distance has been estimated at 3.5 ± 0.1 kpc (Galloway & Cumming 2006), and it is likely that the donor provides a hydrogen-rich mix of matter to the neutron star during outbursts (Galloway & Cumming 2006; Goodwin et al. 2019). To date, J1808 has been observed extensively during ten outbursts. While it is not our intention here to provide a broad observational overview of the source (for the purposes of this paper we focus on issues relevant to its thermonuclear bursting behavior), readers can find elsewhere some recent studies on coherent pulse timing (Sanna et al. 2017; Bult et al. 2020; Illiano et al. 2022), X-ray spectral properties (Di Salvo et al. 2019), and aperiodic timing behavior (Bult & van der Klis 2015; Sharma et al. 2022). Observations of J1808 have revealed two types of thermonuclear bursts that show dramatically different peak fluxes and fluences. The bright PRE bursts (mentioned above) show significantly higher total energy release and peak X-ray flux.
The less frequently observed weak bursts produce much less energy and show peak fluxes about a factor of 25 less than the bright events; as such, they are not Eddington-limited. When these weak bursts have been observed, they appear to be confined to earlier portions of the outbursts and occurred before the bright bursts were seen. This suggests there may be a window of occurrence for these bursts associated with the initial onset of accretion after a period of quiescence. This is particularly intriguing in the context of J1808 because it is known that the neutron star cools dramatically in quiescence (Heinke et al. 2009), and the unstable hydrogen-burning regime requires cooler temperatures in the accumulating layer. There has been more observational and theoretical research exploring the nature of the bright bursts than the weak class. Here we present a detailed study of one of these weak bursts that was observed with the Neutron Star Interior Composition Explorer (NICER) during the recent, 2019 August outburst from J1808. We also provide a briefer description of a similar burst observed with the Rossi X-ray Timing Explorer (RXTE) in 2005 June. The paper is organized as follows. In §2 we introduce the NICER data and present light curves focusing on the initial part of the 2019 outburst, showing a single weak burst. We also present a spectral study of the persistent and burst emission (for the weak burst) in order to understand its energetics and to constrain the mass accretion rate and the likely accreted mass column at the time of its ignition. We present a discussion in §3 of a likely physical scenario that results in the weak burst, arguing that the initial accretion onto a cool neutron star at the onset of the outburst naturally places the accumulating layer in the thermally unstable regime for CNO hydrogen ignition. Here, we also describe the 2005 June RXTE event, and we report a brief summary of NuSTAR observations that began on 2019 August 10 and in which several brighter bursts were detected. We conclude in §4 with a summary, a brief discussion of relevant uncertainties and other possible interpretations, and the outlook for future efforts.

2. NICER OBSERVATIONS OF J1808

In late July 2019, it was reported that the optical flux from J1808 had increased, perhaps presaging a new X-ray outburst (Russell et al. 2019; Goodwin et al. 2020). This initiated an extensive monitoring campaign with NICER, which began on August 1, 2019 (Bult et al. 2020). NICER is an X-ray observatory that operates on the International Space Station (ISS). It observes across the 0.2−12 keV X-ray band and provides low-background, high-throughput (≈ 1900 cm² at 1.5 keV), and high time resolution capabilities (Gendreau et al. 2012). The data obtained prior to the onset of the outburst, and up to and including the first observed X-ray burst, are organized under observation IDs (OBSIDs) 205026010mm and 25840101nn, where mm and nn run from 03−10 and 01−02, respectively. We used the standard screening criteria and NICERDAS version 8 to produce cleaned event lists. This means we retained only those epochs during which the pointing offset was < 54″, the Earth elevation angle was > 15°, the elevation angle with respect to the bright Earth limb was > 30°, and the instrument was not in the South Atlantic Anomaly (SAA). We used HEASOFT version 6.29c to produce the light curves and spectra for the analyses reported here. The initial observations of the campaign did not reveal evidence of J1808 in X-ray outburst.
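Conceptually, these screening criteria are just row-wise cuts on housekeeping quantities; the sketch below applies the equivalent selection to a toy table (the variable names are illustrative stand-ins, not actual NICER filter-file columns):

```python
import numpy as np

# Illustrative housekeeping samples for a handful of epochs. The column
# names are hypothetical stand-ins for the quantities named in the text.
pointing_offset_arcsec = np.array([10.0, 60.0, 20.0, 5.0])
earth_elev_deg = np.array([40.0, 20.0, 10.0, 50.0])
bright_earth_elev_deg = np.array([45.0, 35.0, 40.0, 25.0])
in_saa = np.array([False, False, False, True])

good = (
    (pointing_offset_arcsec < 54.0)
    & (earth_elev_deg > 15.0)
    & (bright_earth_elev_deg > 30.0)
    & ~in_saa
)
print(good)  # only epochs passing every cut are retained
```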
The first indication that an accretion-driven flux was present occurred on August 6, 2019 at approximately 21:59 TT (Bult et al. 2019b). Figure 1 shows the light curve (0.4−7 keV) of the outburst over approximately 20 days from the observed onset of significant X-ray activity. Time zero in the plot refers to the time of outburst onset, 58701.91597 MJD (TT). The two detected X-ray bursts are evident as "spikes" in the count rate near days 3 and 14, respectively. The much brighter second burst (near day 14) was reported on by Bult et al. (2019a). Here we focus on a study of the much weaker first burst, which occurred at 58704.80764 MJD (TT), and is present in OBSID 2584010102.

Persistent Spectrum, Fluence and Accreted Mass

To explore the weak burst energetics and ignition conditions we aim to constrain the total accreted mass from the beginning of the outburst up to the onset of the first burst. To do this we model the spectrum of the persistent emission to determine its flux and then integrate from the outburst onset to just prior to the burst. This integral provides an estimate of the energy fluence produced via accretion, which can then be converted to an accreted mass using standard assumptions for the accretion luminosity produced by the release of gravitational potential energy of the accreted matter. In practice we find that the shape of the persistent spectrum gradually changes during this portion of the outburst, with the spectrum showing a modest hardening over time. We therefore measure the flux at a few intervals along the outburst rise, and use these measurements to estimate the flux per unit NICER count rate. We then use simple linear interpolation and the trapezoidal rule to integrate the flux from outburst onset to the first burst to estimate the energy fluence. The light curve in Figure 2 shows a close-up of the epoch around the first burst. We extracted a spectrum prior to the burst, the "pre-burst" interval (marked by the vertical dashed lines in Figure 2), and modeled its spectrum using XSPEC version 12.12.1 (Arnaud 1996). We produced response files with NICERDAS version 8, and we used the 3C50 background model, nibackgen3c50 (Remillard et al. 2022), to produce a background spectrum appropriate for spectral modeling within XSPEC. We employed a phenomenological model similar to that discussed by Patruno et al. (2009) that includes thermal disk, power-law, and blackbody continuum components. In addition, and similarly to Bult et al. (2019a), we find evidence for narrow-line emission near 1 keV, and we include a gaussian component to model this. In XSPEC notation the model has the form phabs*(diskbb + powerlaw + bbodyrad + gaussian), where phabs represents the line-of-sight photoelectric absorption model parameterized by the column density of neutral hydrogen, n_H. This absorption model uses cross sections from Verner et al. (1996) and the chemical abundances from Anders & Grevesse (1989). We fit this model across the 0.5−10 keV bandpass and find that it provides an excellent fit, with a minimum χ² = 117.9 for 112 degrees of freedom. The best-fitting model parameters are given in Table 1, and Figure 3 shows the unfolded photon spectrum (top), the observed count-rate spectrum and best-fitting model (middle), and the fit residuals in units of standard deviations (bottom). This model gives an unabsorbed flux (0.1−20 keV) of 7.06 ± 0.23 × 10⁻¹⁰ erg cm⁻² s⁻¹.
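For readers who drive XSPEC from Python, the fit described above maps onto a few PyXspec calls. This is a minimal sketch under assumed file names (the spectrum, response, and background files are placeholders), not the authors' actual scripts:

```python
import xspec

# Load the pre-burst spectrum; the file names here are hypothetical. In
# practice they come from the NICERDAS extraction and the 3C50 model.
spec = xspec.Spectrum("preburst.pha")
spec.response = "nicer.rmf"
spec.response.arf = "nicer.arf"
spec.background = "preburst_3c50_bkg.pha"
spec.ignore("**-0.5 10.0-**")          # fit the 0.5-10 keV band

# Continuum model used in the text, plus a narrow gaussian near 1 keV.
model = xspec.Model("phabs*(diskbb + powerlaw + bbodyrad + gaussian)")
xspec.Fit.perform()
print(xspec.Fit.statistic, xspec.Fit.dof)

# One common way to get an unabsorbed flux: zero the absorption column
# and integrate the model over an extended band.
model.phabs.nH.values = 0.0
xspec.AllModels.calcFlux("0.1 20.0")
print(spec.flux)                        # erg/cm^2/s in the chosen band
```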
If we extend the bandpass to estimate a bolometric flux we find a value (0.1−100 keV) of 7.9 × 10⁻¹⁰ erg cm⁻² s⁻¹. The average count rate in the fitted energy band (0.5−10 keV) is 181.9 ± 0.5 s⁻¹, so we estimate a flux per NICER count rate (0.5−10 keV) of 4.34 × 10⁻¹² erg cm⁻² s⁻¹ (counts s⁻¹)⁻¹ for this interval.

Figure 2. The count rates were computed in 1 s intervals, and the vertical dashed and dotted lines denote the intervals used to extract spectra for the pre- and post-burst spectral modeling, respectively. Inset panel: the same data are used, but the time bins are 16 s, and the logarithmic scale highlights the offset in count rate between the pre- and post-burst emission. The dashed red line is a constant value fit to the pre-burst level, and is meant as a guide to the eye.

We extracted spectra from two other OBSIDs along the outburst rise, 2050260109 and 2050260110, and analyzed these spectra in the same manner as for the pre-burst interval just described. Results of these spectral fits are also reported in Table 1. For these intervals we estimate flux per NICER count rate values of 2.85 × 10⁻¹² and 3.61 × 10⁻¹² erg cm⁻² s⁻¹ (counts s⁻¹)⁻¹, respectively. For completeness we make a few additional comments regarding the 1 keV line component included in the spectral model. For the pre-burst interval (OBSID 2584010102), removing the gaussian line results in an increase in χ² of 31.3, and the ratio of the line normalization to its 1σ uncertainty is ≈ 4.2. The line is also evident in OBSID 2050260110, though at lower significance, with the ratio of the line normalization to its 1σ uncertainty now 3.5. For OBSID 2050260109, the spectrum extracted closest to the outburst onset and at the lowest observed flux, we no longer find evidence for the line. When detected, the line is narrow in the sense that it is unresolved, and we can only place an upper limit on its width of ≈ 0.09 keV (3σ). Finally, in this work our primary focus is to model the X-ray spectrum to infer the broadband flux. Excluding the 1 keV line from the spectral fits only changes the inferred flux at the few percent level, so including it, or not, does not significantly alter our inferences regarding the source flux. We elected to include it since doing so provides a better overall statistical description of the data. To estimate the outburst fluence we use simple linear interpolation between data gaps, and we also apply linear interpolation of the flux per unit count rate, based on the spectral results discussed above. We employ the trapezoidal rule to integrate the total counts. We find a persistent emission energy fluence of E_p = 7.92 × 10⁻⁵ erg cm⁻², representing an estimate of the total energy associated with accretion from the outburst onset up to the initiation of the first observed burst. Assuming the observed, accretion-driven luminosity for spherical accretion is

L_p = 4πd²ξ_p f_p = GMṀ / [(1 + z)R],  (1)

where (1 + z), Ṁ, ξ_p, f_p, M, and R are the surface red-shift, mass accretion rate (as measured at the neutron star surface), persistent emission anisotropy factor, observed bolometric flux, neutron star mass, and radius, respectively, we can use this equation to estimate the total accreted mass required to produce the observed energy fluence (Johnston 2020). We emphasize that L_p and f_p are the luminosity and flux as measured by an observer far from the neutron star. The anisotropy factor, ξ_p, can be thought of as the solid angle into which the radiation is emitted, normalized by 4π; thus, isotropic emission is characterized by ξ_p = 1.
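The fluence estimate just described amounts to a count-rate light curve, a linearly interpolated flux-per-count conversion, and a trapezoidal sum; a minimal NumPy sketch (the light-curve arrays here are hypothetical placeholders, not the actual NICER data):

```python
import numpy as np

# Hypothetical light-curve samples: time since outburst onset (s) and the
# 0.5-10 keV NICER count rate at each visit.
t = np.array([0.0, 5.0e4, 1.2e5, 1.8e5, 2.5e5])
rate = np.array([20.0, 80.0, 130.0, 160.0, 182.0])

# Flux-per-count-rate conversions measured at a few epochs (see text),
# linearly interpolated across the rise.
t_conv = np.array([0.0, 1.2e5, 2.5e5])
conv = np.array([2.85e-12, 3.61e-12, 4.34e-12])   # erg/cm^2/s per count/s

flux = rate * np.interp(t, t_conv, conv)          # erg/cm^2/s at each sample
fluence = np.trapz(flux, t)                       # erg/cm^2, trapezoid rule
print(f"E_p ~ {fluence:.2e} erg/cm^2")
```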
We write the accreted mass column in the local neutron star frame as

y_a = (1/4πR²) ∫ Ṁ dt′,  (2)

where we use t′ to emphasize that the Ṁ integral is over the time as measured at the neutron star surface. With the use of equation (1) this becomes

y_a = [d²ξ_p/(GMR)] ∫ f_p(t)(1 + z) dt′,  (3)

where t is the time measured in the observer's frame, and we note that dt = (1 + z)dt′, and thus ∫ f_p(t)(1 + z)dt′ = ∫ f_p(t)dt = E_p is just the observed energy fluence. If we assume d = 3.5 kpc (Galloway & Cumming 2006), and use E_p = 7.92 × 10⁻⁵ erg cm⁻², we can write the accreted column as

y_a = d²ξ_p E_p/(GMR) ∝ ξ_p M⁻¹ R_10⁻¹,  (4)

where R_10 is the neutron star radius in units of 10 km. For M = 1.4 M⊙, R = 11 km, and adopting ξ_p = 1, we find y_a = 5.12 × 10⁷ g cm⁻². With M = 2.0 M⊙ and R = 11 km we find that y_a decreases slightly to 3.91 × 10⁷ g cm⁻². We can also use equation (1) to estimate the mass accretion rate, Ṁ, at the time of the weak burst onset. Using the estimated pre-burst flux of 7.9 × 10⁻¹⁰ erg cm⁻² s⁻¹ and a distance of 3.5 kpc, we find, inverting equation (1), Ṁ = 4πd²ξ_p f_p(1 + z)R/(GM). Using the same parameter assumptions as above, we find estimates of Ṁ = 1.56 × 10⁻¹⁰ and 1.38 × 10⁻¹⁰ M⊙ yr⁻¹, respectively.

Burst Spectral Evolution: Peak Flux and Fluence

We first segmented the burst light curve into intervals of approximately 500 counts using a 1/8 s time bin. We modeled the segmented spectra in the 0.5−10 keV band by adding a blackbody component to the pre-burst persistent emission model. The parameters of the persistent emission model were frozen to their best-fit values, given in Table 1, only allowing the added blackbody component to vary, so that our model is phabs(constant(diskbb + bbodyrad + powerlaw + gaussian) + bbodyrad). We first tried multiplying the persistent emission model by a constant (Worpel et al. 2013), but found it was not statistically necessary, as it was possible to get a good fit with it left frozen at 1.0. The resulting evolution of the bolometric flux, the free parameters of the blackbody temperature and blackbody radius (at 3.5 kpc), along with the resulting χ², are shown in Figure 4. We found a peak bolometric burst flux of f_b = 6.98 ± 0.50 × 10⁻⁹ erg s⁻¹ cm⁻². Using trapezoidal numerical integration of the flux, we calculated a bolometric fluence of 7.05 ± 1.16 × 10⁻⁸ erg cm⁻². The burst luminosity is defined as L_b = 4πd²ξ_b f_b, where ξ_b characterizes the anisotropy of the burst emission. Adopting ξ_b = 1, and with d = 3.5 kpc, we can then estimate that the total energy released during the burst was 1.03 × 10³⁸ erg.

Figure 4. Evolution of the weak X-ray burst derived from spectral modeling in the 0.5−10 keV band. We show, from the top down: the bolometric flux, blackbody temperature, blackbody radius (at 3.5 kpc), and reduced χ², respectively. The error bars indicate 1σ confidence intervals.
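These column and accretion-rate estimates are straightforward to reproduce approximately; the sketch below evaluates equations (1), (3), and (4) in cgs units. It is a simplified version of the calculation (a single redshift factor, ξ_p = 1), so the outputs land within, but do not exactly match, the quoted ranges:

```python
import numpy as np

# Rough bookkeeping for the accreted column and accretion rate, following
# equations (1)-(4) above. Simplified sketch: Newtonian accretion
# luminosity with one (1+z) factor; results fall inside the quoted ranges.
G = 6.674e-8            # cgs gravitational constant
MSUN = 1.989e33         # g
C = 2.998e10            # cm/s
KPC = 3.086e21          # cm
YR = 3.156e7            # s

d = 3.5 * KPC           # distance to J1808
E_p = 7.92e-5           # erg/cm^2, persistent fluence up to the burst
f_p = 7.9e-10           # erg/cm^2/s, pre-burst bolometric flux
M, R = 1.4 * MSUN, 11e5 # neutron star mass and radius

z1 = 1.0 / np.sqrt(1.0 - 2.0 * G * M / (R * C**2))       # (1+z)
y_a = d**2 * E_p / (G * M * R)                            # eq. (4), xi_p = 1
mdot = 4.0 * np.pi * d**2 * f_p * z1 * R / (G * M)        # from eq. (1)

print(f"y_a  ~ {y_a:.2e} g/cm^2")                # ~4.5e7, cf. 3.9-5.1e7
print(f"Mdot ~ {mdot * YR / MSUN:.2e} Msun/yr")  # ~1.4e-10, cf. 1.38-1.56e-10
```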
3. PHYSICAL SCENARIO AND INTERPRETATION

Transient systems like J1808 provide an interesting laboratory to explore the different predicted regimes of nuclear burning on neutron stars. Deep X-ray spectroscopic studies of the object in quiescence suggest rapid cooling of the neutron star core, perhaps by a form of enhanced neutrino emission such as Direct Urca (Heinke et al. 2009), which also provides some tentative evidence for a more massive neutron star (≳ 2 M⊙) in the system. Using the surface effective temperature constraints in quiescence from Heinke et al. (2009) and the theoretical results of Potekhin et al. (1997), Mahmoodifar & Strohmayer (2013) estimated a core temperature for J1808 in the range 7.2−7.7 × 10⁶ K, for neutron star masses between 1.4 and 2.0 M⊙. Due to the high thermal conductivity in the core and crust (Brown & Cumming 2009), it is very likely that when accretion begins in J1808 after a period of quiescence, the accumulating layer starts out at temperatures ≲ 1 × 10⁷ K, that is, well below the temperature at which CNO cycle hydrogen-burning becomes thermally stable (Fujimoto et al. 1981; Cumming 2004; Galloway & Keek 2021). In this temperature-sensitive regime hydrogen-burning proceeds at very low levels, and the thermal profile of the accumulating layer will be set principally by compressional heating. This is a much less efficient heat source than the energy released from hot-CNO cycle burning in the stable burning regime. Fujimoto et al. (1981) estimate the accretion rate required to maintain a stable hydrogen-burning shell (see their Table 1, Ṁ_st(B)) as 2.7 × 10⁻¹⁰ M⊙ yr⁻¹, for a neutron star mass and radius of 1.41 M⊙ and 6.57 km. We note that the somewhat older neutron star models employed by Fujimoto et al. (1981) have rather small radii compared to those suggested by more recent modeling (Miller et al. 2019; Riley et al. 2019). For a more typical radius of, say, 11 km (which we employed above), we would expect the estimated rate to increase modestly, by ≈ 10%, which would bring the value to ≈ 3 × 10⁻¹⁰ M⊙ yr⁻¹. Expressed as a fraction of the Eddington accretion rate, Ṁ_Edd, and adopting the value Ṁ_Edd = 1.8 × 10⁻⁸ M⊙ yr⁻¹ (Cumming 2004), this is equivalent to Ṁ = 0.0167 Ṁ_Edd. Note also that Cumming (2004) quotes a value of Ṁ ≳ 0.01 Ṁ_Edd for stable, hot-CNO cycle hydrogen-burning. In addition, Cooper & Narayan (2007) used a two-zone model to carry out a linear stability analysis to specifically explore the conditions under which hydrogen-triggered bursts can occur at low accretion rates, and found that they occur for rates ≲ 0.003 Ṁ_Edd. Above, we estimated an accretion rate at the time of the weak NICER burst in the range ≈ 1.38−1.56 × 10⁻¹⁰ M⊙ yr⁻¹ (Ṁ ≈ 0.0077−0.0087 Ṁ_Edd). This is less than the required rates estimated by Cumming (2004) and Fujimoto et al. (1981) for stable CNO burning, but slightly higher than the accretion rate obtained by Cooper & Narayan (2007). These considerations provide strong evidence that in the initial outburst stage, accretion onto J1808 proceeds in an Ṁ range consistent with what Fujimoto et al. (1981) refer to as case 3 shell flashes. In this regime the accumulating layer remains cool enough that CNO hydrogen-burning proceeds in the temperature-sensitive regime; that is, very little hydrogen is burned until the layer reaches the conditions for unstable ignition. Indeed, following Fujimoto et al. (1981, see their equation 11), we would estimate that only about 1−2% of the hydrogen would be burned prior to ignition. Further insights are provided by our estimates of the total column of matter accreted at the time of the burst, and its total energy fluence. In §2 above we estimated the accreted column to be in the range ≈ 3.91−5.12 × 10⁷ g cm⁻², and we measured a total energy release in the burst of ≈ 1 × 10³⁸ erg (both at 3.5 kpc). For the following discussion we refer the reader to the illustrative hydrogen ignition curves presented by Cumming (2004, Fig. 4). Based on these curves, we can estimate that a column of this size would be ignited at a temperature in the range ≈ 4−5 × 10⁷ K. What happens upon ignition of the hydrogen?
The unstable burning will quickly heat the layer, raising the temperature to at least that at which the CNO energy generation rate saturates, but likely somewhat higher. Fujimoto et al. (1981) estimate that only a small fraction, ∆X, of the hydrogen needs to burn in order to raise the temperature. For a temperature change of 10⁸ K, they estimate ∆X ≈ 0.002 (see their equation 12). After ignition of the hydrogen, two subsequent paths have been described in the literature. First, if the ignition column is small enough, then an increase in its temperature may not cause it to cross the helium ignition curve, and additional accretion and/or an increase in the helium fraction is required before it will ignite. Alternatively, for deeper ignition columns, a temperature increase of a few 10⁸ K would render the shell unstable to helium ignition, promptly producing a mixed H/He burst. We note that the work of Cooper & Narayan (2007) and Peng et al. (2007) also predicts these two paths, and their calculations provide estimates of the hydrogen ignition columns that are broadly consistent with our estimate of the column accreted at the time of the weak burst. For example, at an accretion rate of ṁ = 0.002 ṁ_Edd, Cooper & Narayan (2007, see their Fig. 4, right column) find behavior consistent with the first scenario: a sequence of weak hydrogen flashes occurs until the helium column grows sufficiently to reach ignition conditions. These calculations also provide an estimate of the temperature increase produced by the unstable hydrogen ignition, and suggest that changes of ∼ 2 × 10⁸ K are likely. Peng et al. (2007, see their Fig. 7) also find a regime where hydrogen ignition does not lead to prompt ignition of a He burst. They also explore the effect of sedimentation on hydrogen-triggered bursts, which enhances the amount of CNO nuclei at the ignition depth and causes a sharper temperature rise. Sedimentation is likely to play an important role in setting the ignition conditions for the weak burst given the low estimated accretion rate. Measurement of the burst fluence enables us to estimate the fraction, f_h, of accreted hydrogen that must burn in order to produce that much energy. For an energy release (per gram) of E_h = 6.4 × 10¹⁸ erg g⁻¹ (Clayton 1983), we would require m_h = 1.6 × 10¹⁹ (1 + z) g of hydrogen to burn, where the factor of (1 + z) is included because we are interested in the energy released at the neutron star surface. Expressed as a column on the neutron star, y_h = m_h/(4πR²), and assuming R = 11 km, we find y_h = 1.05 × 10⁶ (1 + z) g cm⁻². The amount of hydrogen present in the accreted column is y_a X, where X is the mass fraction of hydrogen in the accreted material. We thus have

f_h = y_h/(X y_a) ≈ 0.105 (1 + z)/(X Y_a),

where Y_a is the estimated accreted column in units of 10⁷ g cm⁻². Taking Y_a in the range 3.9−5.1, a fractional hydrogen abundance in the accreted fuel of X = 0.7, and using the same M and R assumptions employed above to evaluate (1 + z), we find f_h in the range 0.04−0.06. Note that this value should be considered a lower limit, as it assumes that the estimated total accreted column produced only a single such burst, and the mass fraction would likely be reduced further if sedimentation is present (Peng et al. 2007; see below for additional discussion regarding potentially missed bursts). This value is larger than the estimate given by Fujimoto et al.
(1981) for the fraction of hydrogen needed to burn to raise the temperature of the fuel by ∆T = 10⁸ K; however, we do not know the actual temperature increase, and the estimate of Fujimoto et al. (1981) should be thought of as a lower limit to our estimate from the measured burst energy fluence, since burning will continue at the stable CNO burning rate. Alternatively, we can ask the question: how much energy would we expect in the burst if the entire accreted column were to burn to heavy elements? The energy release per gram, Q_nuc, would depend on the details of the nuclear burning pathways; however, employing the value Q_nuc = (1.3 + 5.8X) × 10¹⁸ erg g⁻¹ (Galloway & Cumming 2006), and again adopting X = 0.7, we would expect ≈ 3.2−4.2 × 10³⁹ erg liberated at the neutron star surface by burning all the fuel. This estimate also assumes that the total accreted column produces a single burst. This is a factor of 30−40 larger than the observed energy in the weak X-ray burst, and also argues that the weak burst is likely not a mixed H/He burst. Rather, our analysis suggests that it likely represents the unstable ignition of a modest fraction of the hydrogen in the accreted layer, which constitutes strong observational evidence for such a weak "hydrogen-only" flash. Interestingly, in their two-zone model Cooper & Narayan (2007) compute the peak energy fluxes produced during such weak hydrogen flashes. The range of fluxes that their model can produce is summarized in their Fig. 7 (bottom panel), where the peak flux is given as a fraction of the Eddington flux. Working backward, we measured a peak flux during the weak X-ray burst of ≈ 6.9 × 10⁻⁹ erg cm⁻² s⁻¹. If we scale this by the peak flux (2.3 × 10⁻⁷ erg cm⁻² s⁻¹) of the Eddington-limited burst observed later in the outburst (Bult et al. 2019a), we find that the weak burst peaked at ≈ 3% of the Eddington flux, which can be compared directly with the range of peak fluxes their model produces.

Stable burning after the burst?

Previous theoretical studies concluded that the ignition of the hydrogen flash will raise the temperature in the layer to at least the stable burning regime, and likely higher. Thus, hot-CNO cycle burning would be expected to continue for some period of time after the unstable ignition. Can we see evidence for such stable burning in the NICER data? Interestingly, there is a clear "offset" between the pre- and post-burst flux levels. This offset can be seen in Figure 2. Note that the inset panel uses a larger time bin size and log scale to emphasize the persistent count rate levels, to more clearly highlight the offset. We also plot the average count rate value for the pre-burst level (red dashed line) as a guide to the eye. To explore this question further we used the same spectral model to characterize the post-burst data as we used for the pre-burst and other persistent emission intervals. The time interval used for the post-burst spectral extraction is marked by the vertical dotted lines in Figure 2 (main panel). We first tried to fit the post-burst spectrum using the same spectral shape as obtained from the pre-burst interval, allowing for the constant f_a parameter to make up the flux difference. This did not provide an acceptable fit, and suggests the presence of an additional spectral component in the post-burst interval. To explore this further we subtracted the pre-burst spectrum from the post-burst spectrum and found that the remaining excess could be well fit by a soft thermal spectrum, characterized as a blackbody with kT = 0.51 ± 0.02 keV, normalization of 82.5 ± 12.0, and bolometric flux of 6.1 ± 0.2 × 10⁻¹¹ erg cm⁻² s⁻¹.
This is equivalent to a luminosity of ≈ 8.9 × 10³⁴ erg s⁻¹ (at 3.5 kpc). If the hydrogen burns stably at the same rate as it is accreted, then we would estimate a hydrogen-burning luminosity of L_h = XṁE_h, where X is the mass fraction of hydrogen in the accreted fuel, ṁ is the mass accretion rate at the burst onset, and E_h is the energy production per gram due to hydrogen-burning. With ṁ = 1.4 × 10⁻¹⁰ M⊙ yr⁻¹, X = 0.7, and E_h = 6.4 × 10¹⁸ erg g⁻¹, we would predict a stable hydrogen-burning luminosity of ≈ 4 × 10³⁴ erg s⁻¹, which is a good fraction of the measured offset. Perhaps a better estimate can be obtained by evaluating the energy production rate associated with the saturated, hot-CNO burning rate as L_CNO = 4πR²y_a ϵ_CNO, where ϵ_CNO, y_a, and R are the energy production rate due to hot-CNO burning, the accreted column depth, and the neutron star radius, respectively. With ϵ_CNO = 5.8 × 10¹³ (Z_CNO/0.01) erg g⁻¹ s⁻¹ (Cumming & Bildsten 2000), y_a = 4.5 × 10⁷ g cm⁻², and R = 11 km, we find L_CNO ≈ 4 × 10³⁴ (Z_CNO/0.01) erg s⁻¹. Here, Z_CNO is the abundance of the CNO catalyzing elements. Employing the solar value Z_CNO = 0.016, we find L_CNO = 6.4 × 10³⁴ erg s⁻¹; however, as noted above, at these low accretion rates sedimentation is very likely to be effective in enhancing the abundance of CNO elements near the base of the accreted fuel layer. For example, Peng et al. (2007, see their Figs. 2 & 3) report enhancements in CNO element abundances by factors of 2 to 5, depending on the accretion rate. Based on these estimates it appears plausible that most or all of the observed flux offset can be accounted for by quasi-steady, hot-CNO burning of hydrogen. We note that the thermal nature of the spectral excess, and its ≈ 0.5 keV temperature, similar to that at late times during the weak burst, is also consistent with this interpretation. This conclusion is also consistent with the hydrogen flash temperature and flux evolution calculations of Cooper & Narayan (2007). As an example, the hydrogen flashes shown in their Fig. 4 indicate that at ignition the flux rises abruptly, but then shows a "plateau-like" phase which decays over a timescale of several hours. The average flux levels near the beginning of these events are approximately consistent with the stable hydrogen-burning luminosity we estimated above. Once ignited, these flashes burn hydrogen to helium in the fuel layer at essentially the saturated, hot-CNO cycle rate. We suggest that the two-zone model of Cooper & Narayan (2007) (with H and He zones) probably does not adequately track and resolve the fast, initial hydrogen-burning when the thermal instability is initiated, but better predicts the longer timescale, thermally stable burning. The hydrogen-only ignition modeled by Peng et al. (2007, see their Fig. 7) also appears at least approximately similar to what is observed for the weak NICER burst. Indeed, the ratio of the peak burst bolometric flux to the persistent, pre-burst flux is 7 × 10⁻⁹/7.9 × 10⁻¹⁰ ≈ 8.8, which is similar to the peak value of F_cool/F_acc for the initial, burst-like flux increase shown in their Figure 7 (middle panel), and the overall burst duration appears consistent with the observed burst as well. More detailed, radially resolved, and perhaps multidimensional calculations will likely be needed to more accurately track the rapid hydrogen ignition phase, which we suggest may account for the weak NICER burst.
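The two luminosity estimates above are simple products of the quoted quantities; this sketch reproduces both numbers (values as quoted in the text, with a mid-range accreted column assumed):

```python
import math

# Compare the post-burst flux offset with two estimates of quasi-stable
# hydrogen burning, following the text above (all cgs; values as quoted).
MSUN, YR = 1.989e33, 3.156e7

R = 11e5                      # neutron star radius, cm
X = 0.7                       # hydrogen mass fraction of accreted fuel
E_h = 6.4e18                  # erg/g from hydrogen burning
mdot = 1.4e-10 * MSUN / YR    # g/s, accretion rate at burst onset
y_a = 4.5e7                   # g/cm^2, mid-range accreted column
eps_cno = 5.8e13              # erg/g/s at Z_CNO = 0.01 (Cumming & Bildsten 2000)

L_h = X * mdot * E_h                                           # H burned as accreted
L_cno = 4.0 * math.pi * R**2 * y_a * eps_cno * (0.016 / 0.01)  # hot-CNO, solar Z

print(f"L_h   ~ {L_h:.1e} erg/s")    # ~4e34
print(f"L_CNO ~ {L_cno:.1e} erg/s")  # ~6.4e34; observed offset ~8.9e34
```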
To briefly summarize, the weak NICER burst and post-burst flux offset appear to be consistent with the onset of a hydrogen-triggered shell flash in the cool, temperature-sensitive regime of the CNO cycle. The ignition column was likely shallow enough that the subsequent temperature increase was not sufficient to also promptly ignite a helium-burning instability.

Missed bursts?

While NICER was able to begin observations quite close to the onset of accretion in the 2019 August outburst, the overall on-source coverage from onset to the time of the first observed burst was still rather modest, with a duty cycle of about 4%. Thus, if other bursts occurred it is conceivable that the NICER observations simply missed them. However, based on our estimate of the size of the accreted column, as well as current theoretical estimates of the hydrogen ignition curve, we argue that likely only a few such bursts might have been missed. Firstly, while we do not know the precise temperature trajectory of the initial accumulating layer, it cannot plausibly be ≲ 2 × 10⁷ K, because at such low temperatures only columns much larger (≳ 2−3 × 10⁸ g cm⁻²) than our estimate of the accreted column at the time of the weak burst (3.9−5.1 × 10⁷ g cm⁻²) would be needed to ignite unstable burning, and such an ignition would also very likely lead to a bright, mixed H/He burst, which was not observed, though could perhaps have been missed. Secondly, as the temperature of the fuel layer increases the size of the unstable column decreases; however, above temperatures of about 8 × 10⁷ K the hydrogen-burning will stabilize, precluding bursts. This sets a minimum combustible column for hydrogen ignition which is, using the ignition curve in Cumming (2004) as a guide, ≈ 1 × 10⁷ g cm⁻². Based on our estimated accreted column, this would set a limit of not more than about five such bursts potentially being produced, as that would just about exhaust the total column accreted at the time of the weak burst. Another benchmark can be set by the accretion rate. We estimated a value of ṁ = 1.4−1.6 × 10⁻¹⁰ M⊙ yr⁻¹ at the time of the weak burst. If we take half of this value as more representative of the mean rate during the 72 hr prior to the weak burst, we can estimate the time required to accrete the minimum unstable column of 1 × 10⁷ g cm⁻². For ṁ = 7 × 10⁻¹¹ M⊙ yr⁻¹, and assuming a radius R = 11 km, we find it would take 9.5 hr to accumulate such a column. Since the weak burst was observed after about 2.9 days, this also suggests an upper limit of ∼ 7 on the total number of such weak bursts. We suggest that the actual temperature trajectory is probably somewhere between the two extremes described above, perhaps consistent with an unstable column on the order of ∼ 2−3 × 10⁷ g cm⁻². If this is correct it would suggest that the NICER observations may have missed one or two such weak bursts.
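The 9.5 hr accumulation time quoted above follows from dividing the mass of the minimum unstable column, spread over the stellar surface, by the accretion rate; a short check:

```python
import math

MSUN, YR = 1.989e33, 3.156e7

def hours_to_accrete(column_g_cm2: float, mdot_msun_yr: float,
                     radius_cm: float = 11e5) -> float:
    """Time to accumulate a given column over the whole stellar surface."""
    mass_needed = column_g_cm2 * 4.0 * math.pi * radius_cm**2   # g
    mdot = mdot_msun_yr * MSUN / YR                             # g/s
    return mass_needed / mdot / 3600.0

# Minimum unstable hydrogen column at ~7e-11 Msun/yr -> ~9.5 hr, as quoted.
print(f"{hours_to_accrete(1e7, 7e-11):.1f} hr")
```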
Other examples: RXTE observations of the 2005 outburst

We searched the literature and previous observations of J1808 to try to identify similar examples of weak bursts. We found a quite similar event early in the 2005 June outburst that was observed with RXTE. We show in Figure 5 the light curve of this outburst as obtained from RXTE pointed observations. This burst occurred on 2 June at approximately 00:42:30 TT, and is evident near 0.25 days in the figure. We carried out a time-resolved spectral analysis of this event, and found qualitatively similar properties to those of the weak NICER burst. It reaches a peak bolometric flux of 1.54 ± 0.11 × 10⁻⁸ erg cm⁻² s⁻¹, about a factor of 2 greater than the NICER burst. It also had a peak blackbody temperature of 1.25 keV, which is about 25% larger than that of the NICER burst. We note that this burst appears in the Multi-Instrument Burst Archive (MINBAR) catalog, with a reported peak bolometric flux of 1.6 × 10⁻⁸ erg cm⁻² s⁻¹ and a fluence of 1.67 × 10⁻⁷ erg cm⁻².

Figure 5. Light curve from RXTE data (PCU 2, 3−30 keV) of the 2005 outburst from J1808. Note the logarithmic scale. A weak X-ray burst is seen early in this outburst. Much brighter and more energetic bursts are seen near days 4 and 8. Note that the burst near day 8 was truncated by the RXTE exposure, and almost certainly the brightest part of this event was missed.

The first evidence of active accretion for this outburst was provided by RXTE/PCA Galactic bulge scan observations on 31 May at 23:00:00 UTC, which indicated a persistent 2−10 keV X-ray flux level of ≈ 3 mCrab (Markwardt et al. 2005). This flux value is similar to that measured with NICER for OBSID 2050260109 during the 2019 outburst (see Table 1). The X-ray burst was observed approximately 25.7 hr later, and MINBAR reports a persistent flux at the time of the burst of 8.6 × 10⁻¹⁰ erg cm⁻² s⁻¹, just a bit larger than the value estimated prior to the 2019 NICER burst (again, see Table 1). We can use the pre-burst flux value reported by MINBAR and the earliest RXTE observations of the 2005 outburst reported by Markwardt et al. (2005) and Wijnands et al. (2005) to estimate the persistent, accretion-driven fluence prior to the weak 2005 burst. Evaluating a simple trapezoidal sum gives a value of 3.8 × 10⁻⁵ erg cm⁻², approximately half of the estimated fluence prior to the 2019 NICER event. This then suggests a total accreted column just prior to the 2005 RXTE event of about half that estimated for the 2019 NICER burst. Simply scaling our value estimated for the 2019 NICER burst suggests a range of 2.0−2.6 × 10⁷ g cm⁻² for the total accreted column prior to the 2005 RXTE event.

Subsequent bursts detected with NuSTAR

Additional observations of J1808 were collected with NuSTAR between 2019 August 10 and 11. While these data do not cover the time of the weak X-ray burst observed with NICER, NuSTAR did catch two subsequent bursts, providing some additional, interesting context to this early phase of the outburst. We processed the NuSTAR data (ObsID 90501335002) using nustardas version 2.1.2. Source data were extracted in the 3−79 keV energy range from a 40″ circular region centered on the source coordinates. The background was extracted using the same approach, but with the extraction region positioned in the background field. The NuSTAR light curve reveals two X-ray bursts, the first of which occurred 24.8 hours after the weak NICER burst, while the second occurred another 11 hours later. This light curve is shown in Figure 6. We emphasize that, though some of the NICER exposure was simultaneous with NuSTAR, this did not include these two bursts, and they were only observed with NuSTAR. We first investigate the persistent emission by extracting a spectrum from a 100 second window just prior to the first NuSTAR burst. As can be seen in Figure 6, this epoch was simultaneously observed with NICER, so we also extracted the contemporaneous NICER spectrum to obtain broadband energy coverage.
Subsequent bursts detected with NuSTAR

Additional observations of J1808 were collected with NuSTAR between 2019 August 10 and 11. While these data do not cover the time of the weak X-ray burst observed with NICER, NuSTAR did catch two subsequent bursts, providing some additional, interesting context to this early phase of the outburst. We processed the NuSTAR data (ObsID 90501335002) using nustardas version 2.1.2. Source data were extracted in the 3 − 79 keV energy range from a 40′′ circular region centered on the source coordinates. The background was extracted using the same approach, but with the extraction region positioned in the background field. The NuSTAR light curve reveals two X-ray bursts, the first of which occurred 24.8 hours after the weak NICER burst, while the second occurred another 11 hours later. This light curve is shown in Figure 6. We emphasize that though some of the NICER exposure was simultaneous with NuSTAR, this did not include these two bursts, and they were only observed with NuSTAR. We first investigate the persistent emission by extracting a spectrum from a 100 second window just prior to the first NuSTAR burst. As can be seen in Figure 6, this epoch was simultaneously observed with NICER, so we also extracted the contemporaneous NICER spectrum to obtain broadband energy coverage. We model this spectrum using the same persistent emission model used previously (see Table 1), allowing for a constant cross-calibration factor between NICER and FPMA/B of NuSTAR. In keeping with the analysis of the NICER burst, we extrapolated the best spectral model over 0.1 − 100 keV to find a bolometric flux estimate of 1.47 ± 0.05 × 10 −9 erg s −1 cm −2. From the recurrence times between the observed bursts, we obtain estimates of the accretion fluence of 1.3 × 10 −4 erg cm −2 and 5.8 × 10 −5 erg cm −2 for the two bursts, respectively. Converting these measurements to column depths, we use equation 4 to find 8.4 × 10 7 and 3.7 × 10 7 g cm −2, respectively, where we again assumed a 1.4 M ⊙ neutron star mass and an 11 km stellar radius. These column depths are of the same order as the one we calculated for the initial NICER burst. Indeed, given the observed 11 hr recurrence time between the two NuSTAR bursts, and the relatively constant persistent flux (and hence accretion rate), it is conceivable that a similar burst was missed in the gap between the weak NICER burst and the first NuSTAR burst. If so, then the accretion fluences for the two NuSTAR bursts would be essentially consistent with each other.
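For orientation, the fluence-to-column conversion above (the paper's equation 4, which we do not reproduce exactly here) can be approximated by assuming each accreted gram radiates its gravitational binding energy, GM/R, with isotropic emission, so that y ≈ f d 2/(GMR). A sketch under those assumptions; it recovers the quoted columns only to within ~10–20%, with the residual presumably reflecting correction factors (e.g. redshift or anisotropy) in the paper's exact expression:

```python
import math

G = 6.674e-8      # gravitational constant, cgs
M_SUN = 1.989e33  # g
KPC = 3.086e21    # cm

def accreted_column(fluence, d_kpc=3.5, mass_msun=1.4, radius_km=11.0):
    """Column depth (g cm^-2) implied by a persistent fluence (erg cm^-2),
    assuming L = G*M*mdot/R and isotropic emission at distance d."""
    d = d_kpc * KPC
    M = mass_msun * M_SUN
    R = radius_km * 1e5
    energy = 4.0 * math.pi * d**2 * fluence   # total radiated energy (erg)
    mass_accreted = energy * R / (G * M)      # grams of accreted fuel
    return mass_accreted / (4.0 * math.pi * R**2)

for f in (1.3e-4, 5.8e-5):
    # yields ~7.4e7 and ~3.3e7; the paper quotes 8.4e7 and 3.7e7
    print(f"{f:.1e} erg cm^-2 -> {accreted_column(f):.2e} g cm^-2")
```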
To explore the burst spectra, we proceeded by dividing the bursts into time bins that are multiples of 1/8 s, such that each bin contains at least 500 counts. We extract a spectrum for each bin and model it using an absorbed blackbody in addition to the fixed persistent emission. The inferred burst properties obtained from these fits are listed in Table 2. The two NuSTAR bursts had fluences of 5 × 10 −7 erg cm −2 and 3 × 10 −7 erg cm −2, respectively. This means that they are about a factor of 4 − 7 more energetic than the weak X-ray burst observed with NICER. The first NuSTAR burst reached a peak flux ten times greater than that of the weak NICER burst, and it was also significantly "hotter," reaching a peak blackbody temperature of 2.3 keV. At the same time, these bursts remain much fainter than the Eddington-limited bursts observed at later times in the outbursts of J1808, which typically have fluences of ∼ 2 − 4 × 10 −6 erg cm −2 (Galloway et al. 2008; in 't Zand et al. 2013).

SUMMARY, CAVEATS & OUTLOOK

Based on the considerations above we suggest a scenario similar to that discussed in the work of Cooper & Narayan (2007) and Peng et al. (2007) as a working hypothesis to account for the weak bursts observed by NICER and RXTE during the early days of the 2019 and 2005 outbursts of J1808. As accretion begins, the neutron star is cool enough and the accretion rate is low enough that CNO hydrogen-burning in the accumulating layer occurs in the temperature-sensitive regime. At these lower temperatures, ≲ 5 × 10 7 K, very little hydrogen is burned. Significant burning of hydrogen will only begin when the accumulated column reaches the conditions for the thermal instability to set in. For a temperature of ≈ 5 × 10 7 K this will occur at a column depth of about 3 × 10 7 g cm −2. This value is not too dissimilar from the column estimated just prior to the 2005 event. When the initial accumulating layer reaches ignition depth, the hydrogen instability sets in, triggering a hydrogen flash. We suggest that the initial rapid increase in the nuclear energy generation rate ultimately results in the "heat pulse" that is observed as the weak X-ray burst; however, we think that more sophisticated, multi-dimensional theoretical calculations of the time-dependent nuclear energy generation, coupled with the subsequent heat and radiation transport, will be needed to test the details of this hypothesis. After the initial hydrogen ignition, the burning layer will reach a high enough temperature that subsequent hydrogen-burning can proceed at the thermally stable level appropriate to the hot-CNO cycle. Above, we have argued that the observed offset between the pre- and post-burst flux levels of the 2019 event is consistent with this "quasi-steady" burning phase. This source of heat will keep the layer warm enough for burning to continue for a time, likely measured in hours if conditions are not too dissimilar from those modeled by Cooper & Narayan (2007). During this time the quasi-stable burning will increase the helium fraction of the layer. Given the gaps in NICER coverage after the weak burst, we cannot say how long this "quasi-steady" burning may have persisted, but we note that observations ≈ 3.5 hr after the burst show a count rate and flux approximately consistent with the pre-burst level. For the conditions described above, that is, a hydrogen ignition column of ≈ 3 × 10 7 g cm −2, such an initial hydrogen flash is unlikely to produce a prompt helium ignition, simply because at that column depth the helium will not be thermally unstable (Cumming 2004). As accretion continues, the hydrogen layer or layers that initially flashed will be pushed deeper, to higher column depths. The freshly accreted material above will in turn reach the hydrogen ignition depth and, assuming its temperature is low enough, produce another hydrogen flash. In this way, a sequence of hydrogen flashes could be produced. Eventually, the helium-enriched layers will likely reach column depths where the helium will ignite, producing more energetic, mixed H/He bursts. We suggest that the observed NuSTAR bursts are the result of this process. The steadily increasing accretion rate will also be an important variable, as this will tend to increase the temperature of the accreting layers. More complete theoretical modeling of this process will have to include the time-varying accretion rate (Johnston et al. 2018). If the above scenario is approximately correct we can try to speculate further regarding a few other details of the observations. The 2005 event observed with RXTE was the earlier event in terms of the time since outburst onset, occurring approximately 1 day after onset. Other things being equal, one would expect the accreting layer to be cooler than at later times, such as the 2.9 days post-outburst of the 2019 event. A cooler shell will have a larger unstable column, which could perhaps explain why the RXTE event is the more energetic of the two weak bursts. This also provides some tentative evidence that the 2019 NICER event may have been preceded by at least one additional weak burst that was missed.
Remaining Uncertainties and Alternative Interpretations

Table 2. Burst properties. Columns, left to right: the weak NICER burst, NuSTAR burst 1, NuSTAR burst 2, and the second NICER burst, taken from Bult et al. (2019a) as an example of a bright Eddington-limited X-ray burst from J1808.

Peak flux (erg s −1 cm −2):     7 × 10 −9 | 7 × 10 −8 | 4 × 10 −8 | 3 × 10 −7
Burst fluence (erg cm −2):      7 × 10 −8 | 5 × 10 −7 | 3 × 10 −7 | 2 × 10 −6
Accretion fluence (erg cm −2):  8 × 10 −5 | 1.3 × 10 −4 | 5.8 × 10 −5 | · · ·
Peak kT (keV):                  1.0 | 2.3 | 1.7 | 2.5

In estimating accretion rates and accreted columns we allowed for variation in the neutron star mass; however, there are other uncertainties which complicate such estimates. These include the source distance, anisotropy factors, bolometric corrections, and the line-of-sight absorption. We note that the more recent work of Goodwin et al. (2019) reports a slightly closer distance of 3.3 +0.3 −0.2 kpc for J1808. While their quoted uncertainty range includes the 3.5 kpc value we have adopted, a decrease from 3.5 to 3.3 kpc would reduce our estimates by a factor of 0.9. These authors also provide estimates of the anisotropy factors for both persistent and burst emission, finding ξ p = 0.87 +0.12 −0.10 and ξ b = 0.74 +0.10 −0.10. Applying these values would reduce the estimated accretion rate and column by a factor of 0.87, and decrease our estimate of the burst peak luminosity and fluence by a factor of 0.74. Adopting the best values reported by Goodwin et al. (2019) for both d and ξ p would reduce the estimated accretion rate and accreted column by a factor of 0.77.
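The rescaling factors quoted above follow from the d 2 dependence of inferred luminosities and the multiplicative anisotropy corrections; a quick check of the arithmetic:

```python
d_old, d_new = 3.5, 3.3   # adopted vs Goodwin et al. (2019) distance, kpc
xi_p, xi_b = 0.87, 0.74   # persistent / burst anisotropy factors

print((d_new / d_old) ** 2)            # ~0.89: distance-only rescaling
print(xi_p)                            # 0.87: anisotropy-only rescaling
print((d_new / d_old) ** 2 * xi_p)     # ~0.77: combined factor, as quoted
```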
We have argued that the weak bursts result principally from hydrogen-burning, but are there other possibilities involving the unstable burning of helium? One scenario that can produce weak or underluminous bursts is the phenomenon of short recurrence time (SRT) bursts (Keek et al. 2010). The idea behind SRT events is that they burn fuel remaining from a preceding burst. In the present case, a preceding, larger burst would have had to occur (and been missed) for this idea to be workable. In principle, this could account for the observed weak bursts, but there are some difficulties with this interpretation. First, the estimated accreted columns are uncomfortably low. This scenario would require that a relatively bright, mixed H/He burst had occurred prior to the observed weak events, and been missed in each case. As discussed above, this would require relatively large ignition columns, likely ≳ 2 × 10 8 g cm −2, which is much larger than the estimated columns present just prior to each weak event. Our accretion column estimates would have to be underestimated by factors of 4 − 5 for this to be more plausible. Second, J1808 is not currently known to produce SRT events. There has been reasonably good coverage of past J1808 outbursts, and no SRT events have been definitively observed. For example, the compilation of SRT burst observations by Keek et al. (2010) does not include J1808, and we also note that the 401 Hz spin frequency of J1808 is below the faster spins, ≳ 500 Hz, associated with some of the known SRT sources. Third, the flux offset between the pre- and post-burst emission seems to make more sense in the context of stable hydrogen-burning than what might be expected from an SRT event, for which one would not typically expect to find such a flux offset. We note also that the theoretical mechanism of opacity-driven convective mixing explored by Keek & Heger (2017) to account for SRT bursts occurs for ignition in relatively hot envelopes, which seems less applicable to the low accretion rate regime near burst onset that we have described above. It is difficult to completely rule out the SRT scenario, but we think the considerations above argue against it. We have argued that the early accretion outburst evolution onto a "cool" neutron star in J1808 provides a unique environment to explore the physics of nuclear burning on neutron stars, and most interestingly, the ignition of unstable hydrogen-burning in the temperature-sensitive regime of the CNO cycle. We suggest that the weak bursts seen by NICER and RXTE in the 2019 and 2005 outbursts, respectively, may result from this process. More complete, continuous observational coverage of the first 4 − 5 days of subsequent outbursts from J1808 could definitively test this hypothesis. Such data would also provide for detailed physical comparisons with new theoretical efforts to track the outcome of time-varying accretion onto neutron stars and the subsequent nuclear burning of the accreted matter. This could provide interesting constraints on such things as the accretion rate, the thermal profile of the accreting matter, and the nuclear energy generation and subsequent heat transport in the accreted layers.
The effect of psyllium (Plantago ovata Forsk) fibres on the mechanical and physicochemical characteristics of plant-based sausages

Psyllium is a source of natural dietary fibre with recognised health benefits that can be used as a hydrocolloid with functional food applications. The purpose of this study was to determine the effect of different levels of Plantago ovata fibres in plant-based sausages on their composition and their physicochemical and mechanical properties. Proximate composition was studied. Water activity (a w), water release, pH, colour, texture profile analysis (TPA), and Warner–Bratzler Shear Force (WBSF) were determined to establish the physicochemical and textural properties of the sausages. A microstructure study and a sensory study of the plant-based sausages were carried out to better understand their conformation and to determine their acceptance. The results showed that the sausages had high ash and carbohydrate contents but, above all, a low fat content. The use of psyllium increased water-holding capacity. The results also indicated that employing Plantago ovata white (PW) fibre can minimise mechanical problems and reduce colour changes. However, PW fibre retained less water, which is why the chickpea starch developed further and was more gelatinised. At the same time, the plant-based sausages with PW fibre obtained the best overall score, with the fewest colour changes, in the sensory evaluation. Nevertheless, further studies are recommended to improve the texture and acceptability of these plant-based sausages.

Supplementary Information: The online version contains supplementary material available at 10.1007/s00217-022-04063-2.

Introduction

Currently, there is an increase in the consumption of veggie diets. The three diets forming part of the veggie world are flexitarian (predominantly plant-based diets with occasional portions of meat or fish), vegetarian (can include egg and dairy products), and vegan diets (excludes all animal-sourced foods) [1,2]. As the Lantern study [2] indicated, 7.8% of the Spanish population followed one of these diet types in 2017, and the consumption of these diets increased by 27% in 2019. However, this percentage is lower than in countries like Germany and England [3], and these diets are on the increase in Europe and other Western countries [4]. Between 2017 and 2018, a report on food consumption in Spain recorded a 2.6% reduction in consumed meat [5]. One of the main reasons for choosing this diet type could be that vegan and vegetarian diets are associated with lower risks of chronic diseases for adults [6], and with a wide range of health benefits like improving glycaemic control, blood lipids, body weight, and blood pressure [7]. Vegetarians and vegans describe their motives as "ideological" and primarily mention environmental concerns, animal welfare, and other ethical considerations as their reasons for choosing to eat these diet types [3,8]. Meat production is mainly responsible for environmental pressures, such as pollution and the unsustainable use of resources [9,10]. According to an FAO report [11], meat is not essential in the diet, since a large number of vegetarians have a nutritionally adequate diet. Therefore, Kamani et al. [12] concluded that meat should be replaced with food of totally vegetable origin, such that the resulting product is similar to the original in sensory and textural acceptability while providing an adequate nutritional value.
This makes replacing meat with plant-based meat substitutes an interesting alternative [9]. However, meat analogues are not always successful, which is mainly related to their low sensory quality [13]. Consequently, the great challenge for the food industry is to preserve the sensory and texture quality of analogous meat products. Various plant proteins can be used to help overcome this problem, such as proteins from legumes, cereals, oilseeds, and soya, because these ingredients have functional properties, such as emulsifying capacity and water and oil absorption capacities, as well as high nutritional values [14]. Another problem with today's meat substitutes is that they are three-to-four times more expensive than meat products [13]. These proteins are used in different ways: as textured proteins, flours, and concentrated or isolated proteins. Some studies have shown that certain hydrocolloids are capable of improving the physical and sensory properties of food. It is a well-known fact that dietary fibres (DFs) have technological functions, such as water absorption and water retention, and minimise production costs without affecting the sensory properties of the final product [15]. Furthermore, they have opened possibilities to design new fibre-enriched products and generate new textures for a variety of applications [16]. Psyllium (Plantago ovata) is a source of natural DF [17] with good water absorbability and gelling properties [18]. This means that it can be used as a hydrocolloid with functional applications in new food production [19]. Furthermore, much research has indicated psyllium's health benefits for diabetes, constipation, colon cancer prevention, diarrhoea, inflammatory bowel disease (ulcerative colitis), irritable bowel syndrome symptoms, abdominal pain, obesity, and hypercholesterolaemia [19][20][21]. There are several studies in which psyllium is used as a source of DF in the preparation of meat batters and sausages to improve texture and organoleptic properties or to reduce fat [22][23][24][25][26]; however, no studies on the use of this hydrocolloid to totally replace meat in sausages have been found. On the other hand, plant-based meat analogues are generally produced by extrusion [27][28][29]. In this study, three commercial types of P. ovata DFs, two from the husk (one in husk form and the other in powder form) and one from the seed, were used to elaborate a plant-based sausage without using extrusion technology, due to its high costs. These DFs have been characterised in previous studies, which highlighted their different particle sizes and hydration properties [18,30]. The main objective of this study was to determine and compare the effect of using three different commercial DFs from P. ovata at concentrations from 0 to 6% on the physicochemical, textural, and sensory characteristics of plant-based sausages.

Materials

All the ingredients used to prepare samples were supplied by the company Productos Pilarica S.A., Paterna, Spain. The main characteristics of the P. ovata DF samples are shown in Table 1.

Preparation of plant-based sausages

The control sausage formulation was that used by Majzoobi et al. [31] with minor modifications. First, the texturised pea protein was hydrated at a ratio of 1:50 for 30 min.
Then, the plant-based sausages were prepared by mixing 37% cold water, 32% hydrated texturised pea protein, 13.2% whole chickpea flour, 10.5% olive oil, 4.2% potato starch, 1.3% salt, and 1.8% seasonings in a food mincer (Moulinex Multimoulinette, AT714G32, Moulinex, SEB Group, France) for 10 min. To study the effect of adding psyllium, the three different P. ovata fibres were added to the formulation at 0%, 3%, 4%, 5%, and 6% w/w: Plantago Husk (PH), Plantago Powder (PP), and Plantago White (PW). Table S1 shows the experimental design. The mixture was stuffed inside a 2.5 cm-diameter, previously hydrated artificial casing (Productos Pilarica S.A., Paterna, Spain). Samples were cooked in a water bath at 80 °C for 20 min until 75 °C was reached in the centre of the samples. Then, samples were cooled in cold water.

Plant-based sausage analysis

All the samples were analysed in triplicate for each analysis.

Proximate composition of the plant-based sausages

Samples' moisture (g water/100 g sample) was determined according to AOAC [32]. One gram of each sample was placed in a vacuum oven for drying (Vaciotem, J.P. Selecta, Spain) at 70 °C until constant weight. Crude fat quantification was performed by ether extraction with an Ankom XT10 Extraction System (NY, USA) [33]. Crude protein (nitrogen content × 6.25) was determined by the Dumas method in a Leco CN628 Elemental Analyzer (Leco Corporation, St. Joseph, MI, USA), according to Method 990.03 of AOAC International [34]. The crude ash content was determined by Method 923.03 [34]: a 1 g sample was incinerated in a muffle furnace (Muffle P Selecta Mod. 367PE) for 3 h at 550 °C, and ash was gravimetrically quantified. Carbohydrates were calculated by difference.

Water activity (a w)

Samples' water activity (a w) was analysed with AquaLab PRE equipment (LabFerrer, Pullman, USA).

Water release

Plant-based sausages' water release was determined according to Majzoobi et al. [31]. One gram of each sample was cut into a thin slice and placed between two filter papers (Whatman No. 1) of known weight. Then, the samples between filters were pressed with a 1 kg weight for 20 min at 25 °C. The results were expressed as a percentage of water release.

pH

The pH of the plant-based sausages was measured using a Crison Basic 20+ pH meter (Crison S.A., Barcelona, Spain) with a puncture probe (Crison 5231).

Colour measurement

Samples' colour was measured using a Konica Minolta CM-700d colorimeter (Konica Minolta CM-700d/600d series, Tokyo, Japan) with a standard D65 illuminant and a 10° visual angle. Measurements were taken both of the P. ovata fibre samples, previously ground (Minimoka GR-020, Coffemotion S.L., Lérida, Spain) so as to present the same granulometry, and of the plant-based sausages. A reflectance glass (CR-A51, Minolta Camera, Japan) was placed between the sample and the colorimeter lens. The measurement window was 6 mm in diameter. For both the powders and the plant-based sausages, the results were expressed in the CIELab system. The total colour difference (ΔE) was calculated as ΔE* = [(ΔL*)² + (Δa*)² + (Δb*)²]^(1/2). For the P. ovata fibre samples, the powder with the same granulometry was placed inside a circular aluminium sample holder (17.7 mm diameter × 9.53 mm high). ΔE 1 was determined to observe the colour differences between PH and the other fibres (PP and PW). ΔE 2 was determined for the differences between PP and PW.
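For reference, the total colour difference defined above is simply the Euclidean distance between two points in CIELab space. A minimal sketch; the sample readings are invented for illustration:

```python
import math

def delta_e(lab1, lab2):
    """CIE76 total colour difference between two (L*, a*, b*) tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical readings for a control and a fibre-enriched sausage
control = (70.0, 4.0, 18.0)
sample = (65.0, 3.5, 15.0)
print(delta_e(control, sample))  # values > 3 are humanly appreciable [43]
```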
For plant-based sausage colour, two sausages of each formulation were measured on both the internal and external sides. ΔE 1 was used to observe the differences between the internal and external colours at each concentration. To observe the colour difference due to the addition of the P. ovata fibres (PH, PP, and PW), ΔE 2 (internal and external) was determined in relation to the control sample.

Texture analysis

The plant-based sausages' texture was measured using a TA-XT2 Texture Analyser (Stable Micro Systems Ltd., Godalming, UK) with the Texture Exponent software (version 6.1.12.0). A texture profile analysis (TPA) was performed as described in Kamani et al. [12]. Samples (2 cm long × 2.5 cm diameter) were compressed to 50% strain of their original height using a steel probe (45 mm diameter). A time of 5 s was allowed between the two compression cycles, and the test speed was 50 mm/min. The attributes calculated from the force-deformation curve were hardness (N), adhesiveness (N.s), springiness (mm), cohesiveness (dimensionless), resilience (dimensionless), and chewiness (N). With the same texture analyser, the Warner–Bratzler Shear Force (WBSF) test was performed according to Jin et al. [35] using a shearing V-shaped blade. The plant-based sausage samples (4 cm long × 2.5 cm diameter) were sheared at a crosshead speed of 100 mm/min. Firmness (N) was measured as the maximum peak force of shearing on the deformation curve.
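As an illustration of how the TPA attributes listed above are extracted from a two-cycle force-deformation curve, a sketch using common textbook definitions is given below. This is our own schematic, not the Texture Exponent implementation, and conventions differ; for example, springiness is computed here as a dimensionless duration ratio rather than in mm, and resilience is omitted for brevity:

```python
import numpy as np

def tpa_attributes(t, f, split):
    """Schematic TPA attributes from a double-compression force curve.

    t, f  : time (s) and force (N) arrays for the whole test
    split : sample index separating the first and second cycles
    """
    t1, f1 = t[:split], f[:split]
    t2, f2 = t[split:], f[split:]

    hardness = f1.max()                                # peak force, cycle 1
    area1 = np.trapz(np.clip(f1, 0, None), t1)         # positive work, cycle 1
    area2 = np.trapz(np.clip(f2, 0, None), t2)         # positive work, cycle 2
    cohesiveness = area2 / area1
    adhesiveness = np.trapz(np.clip(f1, None, 0), t1)  # negative area (N.s)
    springiness = t2[f2 > 0].ptp() / t1[f1 > 0].ptp()  # duration-ratio form
    chewiness = hardness * cohesiveness * springiness
    return dict(hardness=hardness, cohesiveness=cohesiveness,
                adhesiveness=adhesiveness, springiness=springiness,
                chewiness=chewiness)
```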
Confocal scanning laser microscopy (CLSM) was conducted using a ZEISS 780 microscope coupled to an Axio Observer Z1 inverted microscope (Carl Zeiss, Germany). To visualise samples, the C-Apochromat 20X/1.2 W water immersion objective was used. Images were obtained and stored at a resolution of 1024 × 1024 pixels by the microscope software (ZEN). The stains employed were Rhodamine B and Calcofluor White (Fluka, Sigma-Aldrich, Missouri, USA). Rhodamine B stained proteins and carbohydrates (starch granules) and was excited with diode line 488 and detected at 580 nm. Calcofluor White stained polysaccharides and was excited with diode line 405 and detected at between 410 and 477 nm. To observe and study samples, tissue sections (20 μm thick) were obtained using a cryostat (CM 1950, Leica Biosystems, Nussloch, Germany). The portion of tissue was placed on a slide. Then, 20 μL of Rhodamine solution were added and left to rest for 5 min. The same procedure was followed for Calcofluor White, and samples were covered with a glass coverslip.

Sensory analysis

The sensory evaluation of the plant-based sausages was carried out by ten trained panellists. Sensory tests were run to describe differences due to the addition of psyllium fibres, and to select the best-valued fibre and concentration. Water was served as a 'flavour cleanser' to avoid the sense-adaptation effect. Seven attributes of the cooked plant-based sausages were evaluated: general texture, chewiness, juiciness, gumminess, colour, visual aspect, and overall acceptability. Each sausage was cut into 2 cm pieces after removing the casing. The trained panellists were separated by at least 2 m, following COVID-19 regulations. The design involved randomising the order in which samples were served. Each sensory attribute was represented with a 9-box scale, where the general texture was labelled from "I do not like" to "I like very much", chewiness from "tender" to "leathery", juiciness from "dry" to "juicy", gumminess from "gritty" to "rubbery", colour and the visual aspect from "unappetising" to "very appetising", and overall acceptability from "totally rejectable" to "totally acceptable". Panellists marked the box at the intensity level that they believed best characterised each sample.

Statistical analysis

All the analytical determinations were made at least in triplicate. Version 17.2.04 of the Statgraphics Centurion XVII software was applied to perform the analysis of variance (one-way ANOVA), with a 95% confidence level. The LSD test was used to evaluate differences between samples (p < 0.05). A correlation analysis was run at the 95% confidence level among all the studied parameters (Statgraphics Centurion XVII).
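As a rough sketch of the statistical workflow described above — here using scipy rather than Statgraphics, with invented triplicate data — the one-way ANOVA followed by LSD-style pairwise comparisons could look like this (Fisher's LSD strictly uses the pooled ANOVA error term; plain pairwise t-tests are shown for brevity):

```python
from scipy import stats

# Invented triplicate moisture values (%) for three formulations
groups = {
    "control": [66.1, 65.8, 66.4],
    "PW 3%": [64.9, 65.2, 64.7],
    "PW 6%": [62.1, 61.8, 62.5],
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise comparisons, only examined once the ANOVA is significant
if p_value < 0.05:
    names = list(groups)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            t, p = stats.ttest_ind(groups[names[i]], groups[names[j]])
            print(f"{names[i]} vs {names[j]}: p = {p:.4f}")
```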
Proximate composition

Samples' proximate composition is presented in Table 2. The addition of the various levels of psyllium fibres had a significant effect on moisture (p < 0.05). The samples with the highest moisture were the control and PH 3%, with no significant differences between them (p > 0.05). A general drop in moisture was shown as the concentration of psyllium fibres rose, and this decrease was more intense for fibres PP and PW. According to Stephan et al. [36], the moisture of vegan meat analogues is higher because more water is required to produce them than conventional meat products, and also due to the hydration of dried proteins and hydrocolloid powders. For this reason, the moisture values of all the studied meat-free sausages were higher than those found by Stephan et al. [36] for German sausages (56.5%). However, the water content of vegan sausages made with isolated pea protein, as reported by Stephan et al. [36], was higher than for all the samples tested in this study, but similar to that of meat sausages in which animal fat had been replaced, like those studied by Grasso et al. [37], who used sunflower seed flour as a fat replacer in frankfurters. The highest protein content was obtained for samples PW 6%, PP 5%, and PP 6%. A fibre-concentration interaction was observed: when concentration rose, so did the protein content for fibres PP and PW, and this increase was more intense with the PP fibre. Nonetheless, no significant differences were found at the 6% concentration between PP and PW (p > 0.05). When the PH fibre was added, protein content was significantly lower (p < 0.05) and decreased as the concentration rose. These results are similar to those of Wang et al. [38] for sausages made with Lentinula edodes as a replacer of pork lean meat at the 50% and 75% replacement levels. The control exhibited the highest fat content. However, this content was lower than in the meat-free sausages with different hydrocolloids made by Majzoobi et al. [31], but higher than the fat content in the meat emulsion formulated with a guar-xanthan gum mixture by Rather et al. [39]. In all the studied samples, fat content dropped as the concentration of psyllium fibres rose, and the fat content of the plant-based sausages was very low. The lowest ash content was for the control sample, although all these values were higher than those of not only the vegan and vegetarian recipes [31,36], but also meat products [37,40]. It is worth noting that the ash content of these sausages reflects the minerals they contain. This could also be related to the addition of psyllium fibres, because a rise in the fibre concentration resulted in a higher ash content. This effect was stronger from the 5% concentration upwards with PP fibre addition, because significant differences were observed between 5 and 6% of the PP fibres, and between 5 and 6% of fibres PH and PW (p < 0.05). A concentration-fibre interaction was observed for carbohydrate content. The sample with the lowest carbohydrate content was the control, and carbohydrate content rose as the fibre concentration increased. This increase was greater when PW fibre was added. Therefore, the 5% and 6% PW samples had the highest carbohydrate content, with no significant differences between them (p > 0.05). Nevertheless, the 5% and 6% concentrations of the PH and PP samples obtained significantly lower values than the PW samples (p < 0.05).

Water activity, water release, and pH

The water activity, water release, and pH of the plant-based sausages are listed in Table 3. Nasonova and Tunieva [41] indicated that using a fat replacer did not significantly affect a w. The results observed in this study also affirmed that a meat-free sausage can display a similar water activity to both meat sausages and sausages formulated with fat replacers. This parameter is vital for food microbial stability. One good result was that the a w of all the studied samples was slightly lower than the results reported by Stephan et al. [36] for different vegan and vegetarian sausages, and those in Wang et al. [38] for sausages formulated with L. edodes as a fat replacer. The water release of the samples with added psyllium fibres came close to that obtained by Majzoobi et al. [31] for meat-free sausages with different hydrocolloid levels. The water release value was significantly higher for the control sample (p < 0.05). As water release correlates negatively with water-holding capacity, a lower water release value is considered a desirable characteristic in sausages [31], because it can result in a product's better texture and quality. The present study found that increasing the concentration of the three tested psyllium fibres (PH, PP, and PW) decreased water release, which implies an increased water-holding capacity. It is a well-known fact that dietary fibres possess technological functions, such as water absorption and water retention [15,18]. The range of pH values went from 5.89 to 6.56 (Table 3). A concentration-fibre interaction took place, and increasing the concentration of all the tested added fibres led to a higher pH for the sausages. This increase was greater for fibres PH and PP, given the significant differences between the 6% PP and PH samples and the PW 6% sample (p < 0.05). The pH values in the sausages with the added psyllium fibres were comparable not only to those obtained by Majzoobi et al. [31] for meat-free sausages with different hydrocolloid levels, but also to those found by Stephan et al. [36] for vegetarian sausage analogues and meat sausages like German boiled sausages. Jridi et al. [42] and Majzoobi et al. [31] reported slightly lower pH when adding hydrocolloids. On the contrary, pH increased in our study, which could be due to the pH of the psyllium fibres being between 5.10 and 6.14 [18].

Colour measurement

Table 4 depicts the colour parameters of the psyllium fibre samples. The sample with the significantly highest lightness (L*) value was the PW fibre (p < 0.05).
Redness (a*) and yellowness (b*) also significantly differed among the three samples, and the PP fibre obtained the highest a* and b* (p < 0.05). According to Bodart et al. [43], the colour differences (ΔE 1 and ΔE 2) among all the fibre samples, being over 3, were humanly appreciable, and the biggest colour difference was between fibre samples PP and PW. The colour parameters of our sausage samples appear in Table 5. Figure 1 shows the sausage samples herein prepared. For the external colour, the samples with the highest lightness (L*) values were the control and the samples with less added fibre. Adding fibres influenced colour, and the L* values significantly lowered with increasing fibre content. A fibre-concentration interaction took place internally and externally, as the PP fibre addition lowered L* more intensely. This could be related to the colour of the PP fibre sample, which was the darkest (lowest L*; Table 4). These results are comparable to those obtained by Grasso et al. [37], who added sunflower seeds to frankfurters as a fat replacer. Those authors also reported a drop in L* when adding sunflower seeds. Our redness (a*) values were lower than those reported by Stephan et al. [36] for the vegetarian sausage analogue, but similar to the meat-free sausages made with different gums by Majzoobi et al. [31]. The plant-based sausages with the lowest a* were those to which PP fibre was added, although the differences from the control were small for all the tested samples, for both external and internal colours. In this case, the fibre with the highest a* was PP (Table 4). This result in sausages could be due to a colour change that occurred during cooking, and could also be related to the interaction with other sausage ingredients. The sample with the highest external yellowness (b*) value was the control sausage. A fibre-concentration interaction was observed both internally and externally, and b* lowered when the fibre concentration increased. The drop in the b* value was significantly more marked at the PP fibre concentrations of 4%, 5%, and 6% (p < 0.05). However, the b* value of the PP fibre itself was the highest (Table 4), which was also the case with the a* value. No significant differences were observed between the samples with 4% and 5% of fibres PW and PH (p > 0.05), but significant differences appeared between the samples with 6% of fibres PW and PH (p < 0.05). These results were slightly higher than those reported by Stephan et al. [36] for vegan sausage analogues, but are comparable to those obtained by Wang et al. [38] for sausages with pork lean meat replaced with L. edodes. ΔE 1 was determined to observe the differences between the internal and external colours at each concentration (Table 5). These results generally showed that ΔE 1 was higher with a rising fibre concentration, but was humanly appreciable only for PH 3%, PH 6%, and PP 6% [43]. Figure 2a and b depict the colour differences (ΔE 2) between the fibre additions and the control (internal and external colours). They show that ΔE 2 generally increased as the fibre concentration rose. The PP fibre samples had a higher ΔE 2 from the 3% concentration upwards (see Fig. 1). However, the samples with a lower ΔE 2 value were those formulated with the PW fibre, for which the difference was perceptible only at the 5% and 6% concentrations, because only those ΔE 2 values were above 3 (see Fig. 1) [43]. It should be noted that colour differences were bigger superficially than internally, except for the plant-based sausages with 5% and 6% PH.
This could be due to the shape and particle size of the husk fibre (PH), as reported by Noguerol et al. [18]. Authors like Grasso et al. [37], Pintado et al. [44], and Henning et al. [45] have also reported colour alteration when adding fibres. According to [46], minimising colour differences in the food matrix when adding fibre is important for avoiding possible consumer rejection, because liking food with an appropriate appearance could favour consumers' healthier product consumption.

Textural properties

Table 6 indicates the effect of adding different P. ovata fibre levels on the textural attributes of the plant-based sausages. The control sausage obtained the lowest values for all the studied parameters, except adhesiveness. This finding implies that adding psyllium fibres modifies the products' mechanical properties. As for the force required to cut a sausage, the samples with the highest firmness and hardness values were those made with the PW fibre. A concentration-fibre interaction occurred for hardness, cohesiveness, resilience, and chewiness, as a higher fibre content increased these parameters. For hardness, the increase was significantly greater when PW fibre was added, at all the concentrations (p < 0.05). Significant differences were observed for cohesiveness, resilience, and chewiness among all the plant-based sausages (p < 0.05), and adding the PW fibre resulted in significantly higher values of these parameters at all the studied concentrations (p < 0.05). Table 7 depicts the Pearson correlations among the mechanical properties (TPA), the physicochemical parameters, and the carbohydrate content. Significant Pearson correlations were observed between carbohydrate content and hardness, cohesiveness, resilience, and chewiness. This result indicates that adding psyllium fibres modifies the textural parameters of sausages. Significant negative correlations were found between the water release and the TPA parameters (hardness, cohesiveness, resilience, and chewiness) (Table 7). Hence, plant-based sausage texture is related to the water-holding capacity of the added fibres. All the samples obtained similar springiness values, although significant differences were found among the plant-based sausages (p < 0.05). Stephan et al. [36] reported similar elasticity values for both meat and meat-free sausages, whereas Kamani et al. [12] indicated that non-meat proteins could hold more water and fat, which reduces springiness. Based on the results herein obtained, fibre content could also be included in this statement. However, the elasticity values of Majzoobi et al. [31] for meat-free sausages were slightly higher, although the hardness values of meat-free sausages with xanthan were similar to those of the samples with PW fibre at the 4%, 5%, and 6% concentrations. According to Grasso et al. [37], sausages' textural behaviour can be related to composition (mainly protein and fibre content), but, in this case, we examined the mechanical properties of plant-based sausages.

Microstructure

To visualise the structure of the plant-based sausages, the lowest (3%) and highest (6%) concentrations of each psyllium fibre type were selected, as well as the control sample. No differences were found in the microstructural observation between the two concentrations; the images shown correspond to the 6% concentration. Figures 3 and 4 show the distribution of the ingredients in the plant-based sausages studied by CLSM (confocal scanning laser microscopy).
Polysaccharides, such as plant cell walls and fibre, were observed in blue with the staining agent Calcofluor (Fig. 3). A very complex matrix is observed in the first row: plant tissue, probably from the chickpea flour, is dispersed throughout the matrix, together with partially gelatinised potato starch granules and oil droplets. Although the different psyllium fibres were not clearly identified, probably because psyllium interacted with other components in the matrix, their presence increased the consistency of the continuous phase, as reflected by the different firmness and hardness values (Table 6). Moreover, Yao et al. [47] indicated that the moisture content had a profound effect on fibre formation, showing that meat analogues extruded at 60.11% moisture had a well-defined fibre orientation, a value similar to that shown in this study for the sausages made with 5% and 6% PW (Table 2). Stained cell walls surrounding chickpea starch granules are observed in Fig. 3 (second row). Starch granules and protein were red-stained with the staining agent Rhodamine (Fig. 4). Protein is observed in the background and is less intensely stained. In all the samples, the potato starch granules are larger and more gelatinised than the chickpea granules, which are smaller and surrounded by cell walls. The plant-based sausages with PH fibre showed the most packed and least gelatinised chickpea starch granules, perhaps because PH fibre had the highest water-holding capacity (WHC) and water-retention capacity (WRC) values [18], which means less water available to hydrate the starch. Beikzadeh et al. [48] indicated that some gums prevent starch granules from swelling and hinder the interactions between starch polymers, and also between protein and starch, which results in a softening of product texture. This falls in line with the results obtained in the present study, because the sausages made with PH fibre had lower firmness values (Table 6). The control and PP sausages presented a similar structure to the PH samples. However, the PW sausages showed the most deformed, broken, and loose chickpea starch granules. In fact, the chickpea tissue in the formulation with PW appeared disintegrated and, consequently, starch granules were more swollen and interacted freely with other matrix components. This might indicate that PW fibre retains less water, so the PW samples would have more water available to interact with other components; therefore, the chickpea starch would develop more, making it more gelatinised. Consequently, this could correlate with the samples' hardness, as the sausages that contained the PW fibre were those with the highest firmness, hardness, and chewiness values (Table 6). Figure 5 shows the structure of the plant-based sausages using Cryo-FESEM. Once again, a complex matrix was observed, with gelatinised potato starch in the matrix (first row). Details of the chickpea starch granules (second row) confirm that the granules were packed and almost intact in the control, PH, and PP samples. This was not the case for the PW samples, where the chickpea cell tissue breakdown and granule gelatinisation were once again observed.

Sensory analysis

Figure 6 shows the samples' sensory attributes. The sensory panel gave lower colour and visual aspect scores to the samples made with PP fibre than to the samples prepared with fibres PH and PW (Fig. 6a). The samples with the PW fibre scored the best and were perceived to be more similar to the control sample.
This scenario could be related to the colour parameters of the fibre and sausage samples (Tables 4 and 5, respectively), where the darkest fibre sample was PP, and the plant-based sausages with the lowest L*, a*, and b* values were also those made with PP fibre. The highest ΔE 2 values were found between the control sample and the sausages with PP, generally at high concentrations (5% and 6%) (Fig. 2). This result was expected, because marked colour changes can imply rejection, as indicated by Deliza et al. [35]. Regarding the texture of the plant-based sausages (Fig. 6b), the samples with high juiciness values were the control, PW 3%, PW 5%, PH 5%, PP 3%, and PP 4%. However, samples PW 3% and PW 4% were evaluated as the gummiest samples. The sample with the highest chewiness value was PW 6%, in line with the TPA results for this parameter (Table 6). Consequently, these results revealed that samples PW 3%, 4%, 5%, and 6% and PP 3% and 6% offered a generally good texture. It was concluded that this texture must be improved to achieve a texture similar to meat products, since texture is one of the main reasons why omnivores reject such products, even though meat reducers or avoiders value it less.

Fig. 6 a Colour and visual aspect, b juiciness, gumminess, chewiness, and general texture, and c overall acceptability of the plant-based sausages produced with 0%, 3%, 4%, 5%, and 6% of Plantago Husk (PH), Plantago Powder (PP), and Plantago White (PW).

Finally, the overall acceptability values of the 4% and 6% PW plant-based sausages were higher than for the other samples (Fig. 6c). These PW fibre concentrations seemed suitably desirable, which indicates that this fibre would be acceptable for preparing plant-based sausages, although further research is recommended to improve texture.

Limitations

As these products mainly address consumers who wish to reduce or avoid meat consumption, a sensory analysis should also be carried out with panellists following these diet types. Noguerol et al. [3] indicated that product texture is less important for vegan and vegetarian consumers than for omnivores. Moreover, in the present study, only psyllium fibres were used as the gum to modify texture. General aspect, colour, and gumminess were the attributes with the highest scores (> 6 on a scale from 0 to 9). Thus, future studies should include combinations with other ingredients in an attempt to improve the general texture.

Conclusions

The plant-based sausages made in this study had high ash and carbohydrate contents due to the addition of P. ovata fibres. Above all, a lower fat content can be highlighted compared to other meat and meat-free sausages. The use of psyllium fibres also increases the water-holding capacity which, as herein observed, improves the texture of plant-based sausages. However, chewability and colour changes could pose major problems for these sausages. The results of this study show that employing PW fibre can minimise these problems, because hardness and chewiness increase and colour changes are almost imperceptible compared to the control. However, PP fibre is rejected, particularly for the colour it confers on the sausages. The sensory evaluation showed that these three fibres can be used to prepare plant-based sausages. Moreover, PW fibre can be highlighted for obtaining the best overall score and the fewest colour changes.
Nevertheless, it would be desirable for future studies to investigate other fibres, gelling agents, or combinations thereof, to improve the texture and acceptability of these plant-based sausages.
The effect of acute hyperglycemia on retinal thickness and ocular refraction in healthy subjects

Purpose: To quantify the retinal thickness and the refractive error of the healthy human eye during hyperglycemia by means of optical coherence tomography (OCT) and Hartmann–Shack aberrometry.

Methods: Hyperglycemia was induced in five healthy subjects who were given a standard oral glucose tolerance test (OGTT) after a subcutaneous injection of somatostatin. Main outcome parameters were the central, pericentral, and peripheral thickness of the fovea, measured by means of optical coherence tomography (OCT3). Ocular refractive error was determined with Hartmann–Shack aberrometry. Measurements at baseline and during maximal hyperglycemia were analyzed, and a change was considered clinically significant if the difference between the measurements exceeded the threshold of 50 μm for retinal thickness and 0.2 D for refractive error.

Results: During hyperglycemia (mean blood glucose level at baseline: 4.0 mmol/l; mean maximal blood glucose level: 18.4 mmol/l) no significant changes could be found in the central, pericentral, or peripheral foveal thickness in any of the five subjects. One of the subjects had a hyperopic shift of 0.4 D, but no significant change in refractive error was found in any of the other subjects.

Conclusions: The present study shows that in healthy subjects induced hyperglycemia does not affect retinal thickness, but it can cause a small hyperopic shift of refraction.

Patients with diabetes mellitus (DM) often experience subjective symptoms of blurred vision associated with hyperglycemia. The nature and origin of this phenomenon are still unclear. Blurred vision during hyperglycemia could be a result of transient refractive alterations due to changes in the lens [5,12,15,25,27,36,37], but it could also be caused by changes in the retina. Macular edema, or retinal thickening due to abnormal fluid accumulation within the macula, is a common cause of visual loss [1,14,22]. The degree of retinal thickening has been found to be significantly correlated with visual acuity [24]. Furthermore, a change in retinal thickness, resulting in a change in axial eye length, could also induce a change in ocular refractive error. For instance, it can be calculated that with a 50 μm increase in retinal thickness, the ocular refractive error becomes 0.15 D more hyperopic [30]. It is unclear whether the thickness of the different retinal areas, such as the foveal area, the pericentral foveal area, and the peripheral foveal area, changes during acute hyperglycemia and suppression of insulin. A change in retinal thickness and/or ocular refractive error could explain the subjective symptoms of blurred vision in patients with DM and hyperglycemia. Therefore, in the present study the effect of hyperglycemia on retinal thickness and ocular refractive error was investigated in healthy subjects during suppression of endogenous insulin. Retinal thickness was measured by means of optical coherence tomography (OCT), which is a non-invasive technique that provides cross-sectional retinal images and produces an objective measurement of the retinal thickness, independent of the refractive status of the eye [10,11,29]. Furthermore, aberrometry was used to measure the ocular refractive error. This technique makes it possible to detect small changes in ocular refraction [19].

Subjects and methods

Five healthy subjects (two males and three females) participated in the study.
The mean age of the subjects was 24.8 years (range 21.2–32.6), and their mean body mass index (BMI) was 24.2 kg/m 2 (range 21.4–29.7). The subjects were screened during a first visit, which included medical history-taking, a physical examination (measurement of visual acuity, weight, height, and blood pressure), and the collection of a fasting blood sample. Exclusion criteria were a history of DM (or a fasting plasma glucose >5.5 mmol/l), a BMI of >30 kg/m 2, elevated blood pressure (>140/85 mmHg), a visual acuity of <0.5 (Snellen), or a history of ocular pathology. The investigators of the ocular parameters (NW and MD) were not informed about the blood glucose levels. Furthermore, the investigators who induced hyperglycemia (EE and SS) were not informed about the results of the ocular measurements. The study protocol was approved by the Medical Ethics Committee of the VU University Medical Center in Amsterdam, and written informed consent was obtained from all subjects after the purpose and nature of the study had been explained to them.

Procedure to induce hyperglycemia

After a 10-hour overnight fast, the subjects were given a subcutaneous injection of a low dose (100 μg) of synthetic somatostatin (Sandostatin, Novartis, Basel, Switzerland) in order to suppress endogenous insulin secretion. Each subject underwent an oral glucose tolerance test (OGTT) (75 g glucose) 30 minutes after the somatostatin injection, and blood glucose levels were measured with a blood glucose analyzer (HemoCue Diagnostics BV, Oisterwijk, the Netherlands). Endogenous insulin levels were measured by means of immunometric assays (Luminescence, Bayer Diagnostics, Mijdrecht, the Netherlands) in the Endocrinology Laboratory at the Department of Clinical Chemistry of the VU University Medical Center. The subjects remained in a fasting state during the entire procedure.

Ocular measurements

Retinal thickness was measured with the Stratus OCT (Model 3000, Carl Zeiss Meditec, Dublin, CA, USA), which combines a low-coherence scanning interferometer (wavelength 820 nm) with a video camera to visualize the fundus of the eye. The fast macular thickness OCT scan protocol was used to obtain six cross-sectional macular scans, 6 mm in length, which were positioned at equally spaced angular orientations (30°) centred on the fovea. The cross-sectional images were analyzed with OCT3 mapping software that uses an edge-detection technique to locate the vitreoretinal interface and the anterior surface of the retinal pigment epithelium. Retinal thickness was defined as the distance between these two surfaces. Two OCT scans were made of each subject before, and every 30 minutes during, the period of hyperglycemia. In order to quantify the retinal thickness, the foveal map constructed by the software was divided into nine Early Treatment Diabetic Retinopathy Study (ETDRS) areas [6]: the central fovea (a central circle with a diameter of 1 mm), the pericentral area (a donut-shaped ring with an inner diameter of 1 mm and an outer diameter of 3 mm), and the peripheral area (a donut-shaped ring with an inner diameter of 3 mm and an outer diameter of 6 mm), the latter two of which were divided into four quadrants. Retinal thickness was calculated for all the separate areas, and for the average pericentral and peripheral regions.
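To make the geometry of the ETDRS grid described above concrete, the sketch below averages a pixelated thickness map over the three rings. The grid radii follow the standard 1/3/6 mm diameters; the quadrant subdivision is omitted and the sample data are invented, and the OCT3 mapping software of course performs this internally:

```python
import numpy as np

def etdrs_zones(thickness, mm_per_px):
    """Average a 2-D retinal thickness map (um) over the three ETDRS rings:
    central disc (d < 1 mm), pericentral (1-3 mm), peripheral (3-6 mm)."""
    ny, nx = thickness.shape
    y, x = np.mgrid[:ny, :nx]
    cy, cx = (ny - 1) / 2, (nx - 1) / 2          # fovea at the map centre
    r = np.hypot(y - cy, x - cx) * mm_per_px     # radius in mm
    zones = {
        "central": r < 0.5,
        "pericentral": (r >= 0.5) & (r < 1.5),
        "peripheral": (r >= 1.5) & (r < 3.0),
    }
    return {name: thickness[mask].mean() for name, mask in zones.items()}

# Toy example: a uniform 250 um map covering a 6 x 6 mm area
demo = np.full((128, 128), 250.0)
print(etdrs_zones(demo, mm_per_px=6.0 / 128))
```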
Ocular refractive error was determined with an IRX3 aberrometer (Imagine Eye Optics, Paris, France), which performs wavefront analysis of the eye according to the Hartmann–Shack principle [19]. After pupil dilation and paralysis of accommodation with 1.0% cyclopentolate and 5.0% phenylephrine eye-drops, a series of three aberrometry measurements was made before, and every 30 minutes during, the hyperglycemic condition. From these measurements, the equivalent refractive error was calculated as: equivalent refractive error (ERE) = sphere + (cylinder / 2). The measurements at baseline and during maximal hyperglycemia were analyzed, and any change was considered to be meaningful if the difference between the measurements was greater than the threshold of 50 μm for retinal thickness and 0.2 diopters (D) for ERE. The threshold of 50 μm exceeded the 95% confidence interval for the detection of a change in retinal thickness, which has been reported to be approximately 40 μm [4,20,28]. A refractive change of more than 0.2 D also surpasses the precision (defined as the 95% confidence interval) of the aberrometer for measuring sphere, cylinder, and consequently ERE [3,31]. In each subject, the significance of a change was obtained from the precision of the measurement instruments and the difference in the ocular parameters at baseline and during hyperglycemia. In the whole group, the significance of a change could be determined by means of Wilcoxon matched-pairs signed rank sum tests. P-values ≤ 0.05 were considered to be statistically significant.
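The ERE definition and the 0.2 D change criterion above are simple enough to capture in a few lines; the numbers in this sketch are invented, but the flagged shift mirrors the one seen in subject 01:

```python
def ere(sphere, cylinder):
    """Equivalent refractive error in diopters: sphere + cylinder/2."""
    return sphere + cylinder / 2.0

def meaningful_change(baseline, hyper, threshold=0.2):
    """Flag an ERE change exceeding the 0.2 D instrument precision."""
    return abs(hyper - baseline) > threshold

# Hypothetical subject: baseline vs maximal-hyperglycemia refraction
base = ere(sphere=0.50, cylinder=-0.25)   # 0.375 D
peak = ere(sphere=0.90, cylinder=-0.25)   # 0.775 D -> +0.4 D shift
print(meaningful_change(base, peak))      # True, as for subject 01
```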
The effect of reproducible hyperglycemia on retinal thickness and refractive error was studied in healthy young subjects who did not suffer from the systemic effects of DM. No changes in the thickness of the central, pericentral or peripheral foveal areas were found in any of the subjects during hyperglycemia. In addition, no significant change was measured in any of the nine different ETDRS areas of the macula. In their study, Jeppesen et al. [13] also found no significant difference in retinal thickness in healthy subjects during normo-insulinaemic hyperglycemia. Before and 180 minutes after the start of a hyperglycemic clamp they measured the average thickness of the retina, and found that it was not affected by hyperglycemia. Although in the present study retinal thickness was measured under different circumstances than in the study of Jeppesen et al. (hypo-insulinaemic instead of normo-insulinaemic hyperglycemia), the results confirm their findings.

Retinal thickness has been reported to change in patients with long-term DM and retinopathy; a morphological change in the retina may even occur in the early stages of diabetic retinopathy [2,7,8,18,21,23,26,32,33,35,38]. These changes in retinal thickness are usually due to abnormal fluid accumulation resulting from a breakdown of the blood-retinal barrier [34]. Goebel et al. [8] measured retinal thickness by means of OCT in 136 patients with different stages of diabetic retinopathy and a mean DM duration of 16 years. Mean foveal thickness was 307±136 μm in the diabetic subjects, compared to 153±15 μm in healthy subjects. It seems that only long-term hyperglycemia and/or long-term fluctuations in blood glucose levels have a significant influence on the blood-retinal barrier and retinal thickness. From the findings of the present study, it appears that the blood-retinal barrier is not affected by a single episode of acute hyperglycemia. Nevertheless, the fact that no change in retinal thickness could be detected does not exclude the possibility of early dysfunction of the blood-retinal barrier; other means of examination might reveal such a dysfunction following acute hyperglycemia.

A factor that could have biased the results of this study is the administration of a synthetic somatostatin analogue to the subjects. Somatostatin is a peptide hormone that inhibits several other hormones, including IGF-1 and insulin. IGF-1 is a growth factor produced by the hypoxic retina to mediate angiogenesis, resulting in neovascularisation. Somatostatin analogues not only inhibit neovascularisation in patients with advanced diabetic retinopathy, but also stabilize the blood-retinal barrier in patients with diabetic macular edema [16,17]. It is therefore conceivable that in the present study an increase in retinal thickness during hyperglycemia was prevented by somatostatin. However, the efficacy of synthetic somatostatin in the treatment of advanced diabetic retinopathy was investigated by Grant et al. [9]: with maximally tolerated doses of somatostatin (ranging from 200 to 5000 μg/day), after a period of 15 months one out of 22 treated eyes required panretinal photocoagulation, compared to nine of 24 eyes that were not treated with somatostatin. From the results of the Grant et al. study, it seems that only frequent, large doses of somatostatin over a long period of time have a significant effect on the progression of diabetic retinopathy.
Although the effect of somatostatin on the healthy retina has not yet been investigated, it seems unlikely that the results of the present study were biased by the administration of a single, low dose (100 μg) of somatostatin. In conclusion, the results of this study indicate that in healthy subjects, hyperglycemia does not cause any change in retinal thickness. Furthermore, ocular refraction in general was not affected by hyperglycemia, although there were interindividual variations, as illustrated by subject 01, who had a hyperopic shift of refraction during hyperglycemia. It therefore seems that a refractive change during hyperglycemia cannot be explained by a change in retinal thickness; it could well be that other refractive components, such as the lens, are involved in causing blurred vision and refractive alterations during hyperglycemia.

Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Strength and Durability Study of Concrete Structures Using Aramid-Fiber-Reinforced Polymer

Fiber-reinforced polymer (FRP) is an important material used for strengthening and retrofitting of reinforced concrete structures. Commonly used fibers are glass, carbon, and aramid fibers. The durability of structures can be extended by selecting an appropriate method of strengthening. FRP wrapping is one of the easiest methods for repair, retrofit, and maintenance of structural elements. Deterioration of structures may be due to moisture content, salt water, or contact with alkali solutions. Using FRP, structural elements can gain additional strength. This paper investigates the durability of aramid-fiber-wrapped concrete cube specimens subjected to acid attack and temperature rise. The study assesses the durability of aramid-fiber-wrapped concrete through the compressive strength of the concrete cube. Concrete cubes are prepared as specimens with a double wrapping of aramid fibers. Diluted hydrochloric acid solution is used for immersion of the specimens for periods of 7, 30, and 70 days. The aramid-fiber wrapping reduces weight loss to about 40% of that of unwrapped cubes and raises compressive strength to about 140% of the control value. In a fire resistance test, the specimens were kept in a hot air oven at a temperature of 200 °C for different time intervals. Even after fire attack, weight loss in wrapped specimens was reduced by about 60%, with compressive strength at about 150% of the control value.

Introduction

Fiber-reinforced polymers are extensively used in the strengthening and retrofitting of structurally deficient infrastructure. The polymer is typically an epoxy, vinyl-ester, or polyester thermosetting plastic, or a phenol formaldehyde resin. Composites of this kind form when two or more materials with different physical and chemical properties are combined. FRP materials have high strength-to-weight ratios and are therefore used in seismic retrofit, since an increase in weight would lead to an increase in seismic force.

Concrete is the most extensively used building material in the construction industry, but it faces some problems, such as damage from earthquakes and cracking due to shrinkage and expansion. Because of these problems, concrete suffers from moisture attack, resulting in corrosion of the steel reinforcement and loss of structural strength. Such damage can be repaired using FRP materials. Structures can also be strengthened to accommodate changes in load or code revisions.

There is an insufficient database on FRP materials, which makes it difficult for civil engineers and practitioners to use them on a regular basis. FRP composites are manufactured from continuous fibers (carbon, glass, and aramid) embedded in a matrix of thermosetting resin (epoxy, vinyl ester, or polyester). The resin binds the fibers together and transfers load between them (Frigione et al. [1]). Karbhari et al. [2] studied the durability of FRP as internal reinforcement for external strengthening, seismic retrofit, bridge decks, structural profiles, and panels. FRP wrapping is very easy to handle and can be installed rapidly; proper attention is required for the bond between the FRP material and the concrete surface. Compared with steel plates, FRP is more durable, has no risk of corrosion, and is highly resistant to an aggressive environment, as investigated by Nur Hajarul Falahi et al.
[3]. Fossetti and Minafo [4] investigated the compressive strength of clay brick masonry columns reinforced with a basalt-fiber-reinforced cementitious matrix (BFRCM) and steel wire collaring. Installation of FRP is one of the easiest retrofitting techniques owing to its speed and simplicity, and it causes little disturbance to the occupants. FRP resists chemical attack and temperature rises, reduces permeability, and improves strength.

Strength and Durability of Concrete Structures Using FRP

Concrete has been one of the key construction materials for many years, and researchers are working on its durability in order to preserve existing structures. Durability can be improved by strengthening existing structures using various techniques, of which FRP wrapping is now widely used. Strength and durability studies of FRP under acid attack and temperature changes provide good insight into the benefits of FRP in concrete construction, as well as its limitations. Generally, FRP has high strength and provides good resistance to chemical attack and corrosion in extreme weather conditions. The performance of FRP is affected by its mechanical properties, the method of fabrication, and the types of material used. When structures are exposed to acid attack and temperature changes, the durability and high resistance of FRP in an aggressive environment make it superior to conventional concrete.

Because of the extensive use of FRP in the construction industry, durability is an important factor in the selection of a proper strengthening material. Anandakumar et al. [5] studied the durability of basalt-fiber-reinforced polymer (BFRP) for retrofitting of RC piles: BFRP was wrapped around concrete cubes and tested under acid immersion and for fire resistance. Hashim et al. [6] considered the durability of the material at two locations: first, the material used has to satisfy durability requirements, and second, so does the bond interface between the FRP material and the concrete surface. Some studies are limited to the degradation of interfacial bonding between CFRP and concrete due to environmental exposure. Choi et al. [7] worked on the durability of CFRP material affected by environmental changes; the same material can behave differently under different environmental conditions, and a comparative assessment was carried out to establish a new technique for determining the durability of CFRP material designed for the same application.

According to Zaman et al. [8], FRP materials are vulnerable to heat and moisture when subjected to changes in the environment, and the reaction of FRP to heat is one of the important factors where temperature effects are concerned. Zeng-Zhu Zhu et al. [9] investigated the durability performance of glass-fiber-reinforced polymer (GFRP) and carbon-fiber-reinforced polymer (CFRP) under conditions of humidity, temperature rise, wet and dry cycles, freeze-thaw cycles, ultraviolet radiation, and natural exposure. Byars et al. [10] discussed the durability of FRP material in an aggressive environment, considering various types of fiber, such as glass, carbon, and aramid fibers, and elucidating the effects of moisture, acid attack, and temperature changes on the properties of the fibers. Hamad et al. [11] and Hawileh et al.
[12] investigated the mechanical behavior of FRP at elevated temperature. A material can lose strength as a result of a rise in temperature, and the use of FRP can limit such losses in tensile strength. The effect of a temperature rise on the epoxy adhesive is also taken into consideration. According to Hawileh et al. [12], FRP material fails at elevated temperatures in different modes: at temperatures of 100 to 150 °C, it fails in brittle rupture; when the temperature is raised further, to 200 to 250 °C, the epoxy adhesive softens; and at 300 °C, the adhesive burns and the specimen fails.

Aramid-Fiber-Reinforced Polymer

Aramid fiber originates from aromatic polyamides (aramids) and is based on para-phenylene terephthalamide, which introduces amide groups and benzene rings together into the polyamide molecules. Owing to strong inter-chain bonding and a high degree of crystallization, the modulus and tenacity of these fibers are very high (Chen and Zhou [13]). In aramid fibers, at least 85% of the amide linkages are attached directly to two aromatic rings. These fibers have mechanical properties roughly 5-10% higher than those of other synthetic fibers. Such fibers are typically used in composite structures for applications in aircraft, marine vessels and automobiles, rope for offshore oil rigs, and bulletproof vests. Aramid fibers are abrasion-resistant under cyclic loading. They are five times stronger than steel and are also heat-resistant (Jassal and Ghosh [14]). The tensile strength is between 2400 and 3600 N/mm², with an elongation of 2.2% to 4.4%; the tensile modulus is 60 to 120 GPa. Granata and Parvin [15] worked on Kevlar fiber, a type of aramid fiber, for the strengthening of a beam-column joint; Shell chemical epoxy was used as the adhesive in that study.

Pereira and Revilock [16] used an aramid fiber, Kevlar fabric, with a tensile strength 55% greater and a shear strength 180% greater than that of E-glass fiber. The bulk density (mass per unit volume) and linear density (mass per unit length) of the fabric are 1.44 g/cm³ and 1.656 × 10³ g/cm³, respectively. A woven bidirectional aramid fabric of plain weave style is used in this study. The areal weight of this fiber is 300 g/m², and the thickness of the dry fabric is 0.25 mm. Figure 1 shows the texture of aramid fiber.
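As a quick plausibility check on the fiber properties quoted above, the elongation at break of a nearly linear-elastic fiber should be roughly the tensile strength divided by the tensile modulus. The short sketch below runs that back-of-the-envelope check on the quoted ranges; it is our own consistency test, not data from the cited studies.

```python
# Quoted aramid fiber properties (Jassal and Ghosh [14]):
strength_mpa = (2400.0, 3600.0)   # tensile strength, N/mm^2 (= MPa)
modulus_gpa = (60.0, 120.0)       # tensile modulus, GPa
elongation_pct = (2.2, 4.4)       # elongation at break, %

# For a near-linear-elastic fiber, strain at break ~ strength / modulus.
est_low = strength_mpa[0] / (modulus_gpa[1] * 1000.0) * 100.0   # stiffest + weakest: 2.0%
est_high = strength_mpa[1] / (modulus_gpa[0] * 1000.0) * 100.0  # most compliant + strongest: 6.0%
print(f"estimated elongation range: {est_low:.1f}% to {est_high:.1f}%")
# The quoted 2.2-4.4% falls inside this 2.0-6.0% band, so the figures are consistent.
```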
This paper studies the effect of aramid-fiber-reinforced polymer wrapping on the strength of concrete cubes subjected to acid attack and a rise in temperature. Comparisons are made between wrapped specimens and unwrapped (controlled) specimens, and results are expressed in terms of the reduction in weight loss and the improvement in compressive strength due to the aramid-fiber wrapping. In structures, many members sustain compressive loads. As per IS 456-2000, design stress is derived from the compressive strength of the cube: for design purposes, the compressive strength of concrete in the structure shall be assumed to be 0.67 times its characteristic strength, accounting for shape and size effects. Concrete is strong in compression, and its strength against chemical and fire attack can be further improved using FRP.

Methodology

A concrete mix design is prepared for M30-grade concrete, following IS 10262-2009 "Concrete Mix Proportioning-Guidelines". Table 1 shows the design mix proportion for M30-grade concrete. As the FRP material is to be wrapped around the concrete cubes, the surface characteristics of the concrete are very important in order to ensure a proper bond in the contact area. The fabric material is cut to the size of the area to be covered. Any loose cement paste is removed, and the cubes are coated with resin and hardener mixed in the proportion 100:30. The properties of the resins and hardeners are shown in Tables 2-4. Finally, the aramid fiber is wrapped around the cubes, and any air bubbles entrapped between the fabric and the surface of the cube specimen are removed.
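The 100:30 resin-to-hardener proportion translates into a simple batch calculation when preparing the coating; the helper below illustrates it. The assumption that the ratio is by weight is ours, as the paper does not state it explicitly.

```python
def epoxy_batch(total_mass_g: float, ratio=(100.0, 30.0)):
    """Split a required epoxy mass into resin and hardener at the given
    ratio (assumed by weight). Returns (resin_g, hardener_g)."""
    resin_parts, hardener_parts = ratio
    total_parts = resin_parts + hardener_parts
    return (total_mass_g * resin_parts / total_parts,
            total_mass_g * hardener_parts / total_parts)

# Example: 650 g of mixed epoxy -> 500 g resin + 150 g hardener.
print(epoxy_batch(650.0))  # (500.0, 150.0)
```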
Acid Resistance Test

A total of 18 concrete cube specimens were cast with M30-grade concrete: nine conventional concrete cubes (controlled specimens) and nine cubes double-wrapped with aramid fiber. These specimens were cured in water for 28 days. After curing, the specimens were dried for 36 h and their initial weight was taken. They were then immersed in a 2% diluted hydrochloric acid (HCl) solution. The casting program of the cubes followed Table 5, and the properties of the diluted HCl solution are shown in Table 6. The specimens were removed from the solution at the times given in Table 5 and their final weight was noted. All specimens were then tested under compressive load as per IS 516-1959 (see Figure 2 for the compressive strength tests). Comparisons between controlled specimens and double-wrapped aramid fiber specimens in terms of weight loss and strength loss are shown in Table 7, with comparative charts in Figures 3 and 4.

Fire Resistance Test

Concrete cubes were cast and cured for 28 days. In the fire resistance test, the initial weight of each concrete cube specimen was taken. The specimens were then kept in an oven at 200 °C for time intervals of 1 h and 2 h, as shown in Figure 5. The final weight of each specimen was taken after the specified time interval, and the specimens were tested for compressive strength. Comparisons were drawn between controlled specimens and aramid fiber double-wrapped specimens. Table 8 gives details of the fire resistance test after 1 h; the corresponding comparative charts for weight loss and compressive strength are shown in Figures 6 and 7. The remaining specimens were kept for 2 h in the oven at 200 °C; the results are tabulated in Table 9, and the corresponding compressive strength and weight loss are compared in Figures 8 and 9.

Discussion

A concrete cube specimen fails when the concrete is crushed. The maximum reading before the load reverses is taken as the compressive load, and the corresponding compressive strength is calculated. In the acid resistance test, the average weight losses of the controlled specimens after 7, 30, and 70 days were 0.18%, 0.26%, and 0.33%, with average compressive strengths of 35.06 MPa, 34.63 MPa, and 34.27 MPa, respectively. For the aramid fiber double-wrapped specimens, the average weight losses were 0.043%, 0.093%, and 0.13%, and the average compressive strengths were 50.97 MPa, 49.71 MPa, and 48.93 MPa, for 7, 30, and 70 days, respectively.
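The weight-loss percentages quoted above follow directly from the initial and final specimen weights recorded in the test procedure. A minimal sketch of that calculation (our own helper, with illustrative weights, not the authors' measured data):

```python
def weight_loss_pct(initial_g: float, final_g: float) -> float:
    """Percentage weight loss of a specimen after acid immersion or heating."""
    return (initial_g - final_g) / initial_g * 100.0

# Illustrative example: a cube of 8400 g losing 15.1 g gives ~0.18%,
# matching the reported average for controlled specimens after 7 days.
print(round(weight_loss_pct(8400.0, 8400.0 - 15.1), 2))  # 0.18
```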
In the fire resistance test, the average weight loss of the controlled specimens after 1 h of heating was 0.129%, with an average compressive strength of 35.27 MPa; for the aramid-fiber double-wrapped specimens, the average weight loss was just 0.085% and the average compressive strength was 53.01 MPa. Figure 6 shows that the double wrapping of aramid fiber produced a significant decrease in weight loss, and Figure 7 shows the corresponding increase in compressive strength. After 2 h, weight loss and compressive strength were almost the same as after 1 h, as shown in Figures 8 and 9.

Modes of failure of the concrete cube before and after wrapping with aramid fiber are shown in Figure 10. The cubes were modeled in SAP 2000 NL for validation of the results, as shown in Figure 11. The maximum displacement of the controlled specimen is 0.210 mm, while that of the double-wrapped specimen is 0.281 mm: due to the confinement provided by the fabric, the load-carrying capacity increases, together with the displacement before failure. The maximum and minimum stresses in the controlled specimens are 47.43 MPa and 32.49 MPa, respectively; after wrapping, the maximum stress increases to 67.02 MPa and the minimum stress is 32.56 MPa, as per SAP 2000 NL.
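The relative strength figures cited in the conclusions (about 142% of the control after 70 days of acid exposure, about 150% after heating) follow directly from the averages above; the sketch below reproduces them. The function name is ours.

```python
def strength_ratio_pct(wrapped_mpa: float, controlled_mpa: float) -> float:
    """Wrapped-specimen strength as a percentage of the controlled strength."""
    return wrapped_mpa / controlled_mpa * 100.0

print(round(strength_ratio_pct(48.93, 34.27)))  # 143 (~142%): acid test, 70 days
print(round(strength_ratio_pct(53.01, 35.27)))  # 150: fire test, 1 h at 200 degC
```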
Conclusions

From the experimental program, the following conclusions are drawn:

1. In the acid resistance test, the weight loss of double-wrapped specimens is reduced to about 26% to 40% of that of the controlled specimens.
2. Even after acid attack, the compressive strength of wrapped specimens at the end of 70 days is about 142% of that of the controlled specimens.
3. In the fire resistance test, the weight loss of specimens can be reduced by about 60% using aramid fiber.
4. After exposure to 200 °C for 1 or 2 h, the compressive strength of wrapped specimens is about 150% of that of the controlled specimens.
5. Concrete cubes double-wrapped with aramid fiber show greater compressive strength and less weight loss when subjected to acid attack and thermal effects than the controlled cube specimens.
6. Aramid fiber can be used as a strengthening material for reinforced concrete elements subjected to compressive load, as it enhances durability and increases the life of the element.
7. This paper is restricted to a double-layer wrapping of aramid fiber around concrete cubes; the effect of the number of layers on the strengthening of a short column can be explored in further investigations.

Figure 3. Weight loss in the acid resistance test.
Figure 4. Compressive strength in the acid resistance test.
Figure 10. Concrete cube failure (a) without wrapping and (b) with double wrapping.
Figure 11. Concrete cube modeled in SAP 2000 NL (a) without wrapping and (b) with double wrapping.
Table 1. Proportions of design mix.
Table 2. Properties of HINPOXY C resin.
Table 5. Casting of cubes in 2% diluted HCl solution.
Table 6. Properties of the diluted acid solution.
Table 7. Acid resistance test results.